Senior Software Engineer - Big Data Developer

111443-IND-HYDERABAD-INTL HYD WF CENTRE BLK B8 Twr-4, India

Wells Fargo


About this role:

Wells Fargo is seeking a Senior Software Engineer. We believe in the power of working together because great ideas can come from anyone. Through collaboration, any employee can have an impact and make a difference for the entire company. Explore opportunities with us for a career in a supportive environment where you can learn and grow.


In this role, you will:

  • Lead moderately complex initiatives and deliverables within technical domain environments
  • Contribute to large-scale strategic planning
  • Design, code, test, debug, and document projects and programs associated with the technology domain, including upgrades and deployments
  • Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures
  • Resolve moderately complex issues and lead a team to meet existing and potential clients' needs, leveraging a solid understanding of the function, policies, procedures, and compliance requirements
  • Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals
  • Lead projects and act as an escalation point, providing guidance and direction to less experienced staff


Required Qualifications:

  • 4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education


Desired Qualifications:

  • 4+ years of experience building end-to-end business solutions using big data technologies such as HDFS, Hive, Kafka, Spark, Scala, and Python.
  • Demonstrated strength in data modeling, ETL development, and data warehousing.
  • Knowledge of and hands-on experience with the Hadoop ecosystem preferred.
  • Experience in API design and development.
  • Good knowledge of Linux, with experience in ETL development, deployment, and optimization using standard big data tools.
  • Good understanding of Git, JIRA, change/release management, build/deploy, CI/CD, and SharePoint.
  • Continually develop depth and breadth in key competencies.
  • Demonstrate curiosity toward learning and treat negative events as learning opportunities.
  • Ability to communicate clearly and concisely, using strong writing and verbal skills to convey facts, figures, and ideas.
  • Deliver effective presentations and talks.

Job Expectations:

  • Work as an individual contributor on a delivery team responsible for data pipeline creation, data onboarding, and support for Data Streams within RDS.
  • Design and develop highly scalable applications and research technologies to solve complex business problems.
  • Develop reusable solutions that can be shared with multiple groups.
  • Identify opportunities across IT to maximize business impact and innovate engineering processes to reduce software construction and maintenance costs.
  • Contribute to integrating complex platforms, including several components, with business domain and process context.
  • Focus on building relevant engineering and business capabilities in the organization to keep pace with demand and industry best practices.
  • Coordinate implementation activities across a broad range of functions and departments; work with client groups to identify, arrange, and/or deliver training.
  • Lead organizational initiatives; work with stakeholders to research new frameworks, tools, and proofs of concept.
  • Develop and lead focused groups and communities to facilitate technical discussions, source ideas, and provide engineering leadership.

Posting End Date: 

10 Jun 2025

*Job posting may come down early due to volume of applicants.

We Value Equal Opportunity

Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic.

Employees support our focus on building strong customer relationships balanced with a strong risk mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit’s risk appetite and all risk and compliance program requirements.

Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process.

Applicants with Disabilities

To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo.

Drug and Alcohol Policy

Wells Fargo maintains a drug-free workplace. Please see our Drug and Alcohol Policy to learn more.

Wells Fargo Recruitment and Hiring Requirements:

a. Third-Party recordings are prohibited unless authorized by Wells Fargo.

b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.


Region: Asia/Pacific
Country: India
