Software Engineer II - Data Engineer

Bengaluru, Karnataka, India

Applications have closed

You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.

As a Software Engineer II at JPMorgan Chase within Asset & Wealth Management, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

Job responsibilities

  • Executes standard software solutions, design, development, and technical troubleshooting
  • Writes secure and high-quality code using the syntax of at least one programming language with limited guidance
  • Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications
  • Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation
  • Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity
  • Gathers, analyzes, and draws conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development
  • Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems
  • Adds to team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 2+ years applied experience
  • Experience as a Data Engineer, including hands-on experience with Python and Spark
  • Hands-on experience with AWS services such as EMR, CloudWatch, and Redshift, plus Terraform for infrastructure as code; experience with relational SQL and NoSQL databases
  • Familiarity with Hadoop or a suitable equivalent; knowledge of data visualization tools such as Microsoft BI or QlikView is a plus
  • Create and maintain optimal data workflow architectures, and assemble large, complex data sets that meet functional and non-functional business requirements
  • Identify, design, and implement internal process improvements, such as automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using Python, Spark, and AWS, and build data pipelines for both batch and real-time client data (a minimal sketch follows this list)
  • Keep our data separated and secure across AWS regions
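
As a rough, minimal sketch of the batch side of the pipeline work described above (Python, Spark, and AWS), the snippet below reads raw client data from S3, deduplicates and aggregates it per client and day, and writes partitioned Parquet back to S3. The bucket paths, column names (transaction_id, transaction_ts, client_id, amount), and aggregation logic are hypothetical placeholders rather than any actual JPMorgan Chase pipeline.

```python
from pyspark.sql import SparkSession, functions as F

# On EMR this session would typically be created via spark-submit;
# building it explicitly keeps the sketch self-contained.
spark = (
    SparkSession.builder
    .appName("client-data-batch-etl")
    .getOrCreate()
)

# Hypothetical S3 locations -- real buckets, schemas, and formats
# would come from the team's data catalog.
SOURCE_PATH = "s3://example-raw-bucket/client_transactions/"
TARGET_PATH = "s3://example-curated-bucket/client_transactions_daily/"

# Extract: read raw Parquet data from S3.
raw_df = spark.read.parquet(SOURCE_PATH)

# Transform: deduplicate, derive a trade date, and aggregate per client per day.
daily_df = (
    raw_df
    .dropDuplicates(["transaction_id"])
    .withColumn("trade_date", F.to_date("transaction_ts"))
    .groupBy("client_id", "trade_date")
    .agg(
        F.count("*").alias("txn_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Load: write back to S3, partitioned by trade date for efficient downstream reads.
(
    daily_df.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet(TARGET_PATH)
)

spark.stop()
```

A real-time variant would typically swap the batch read for Spark Structured Streaming (spark.readStream) against a source such as Kinesis or Kafka, keeping the transformation logic largely unchanged.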

Preferred qualifications, capabilities, and skills

  • Superb interpersonal, communication, and collaboration skills.
  • Exceptional analytical and problem-solving aptitude.
  • Outstanding organizational and time management skills.

Category: Engineering Jobs

Tags: Agile Architecture AWS Data visualization Engineering Hadoop NoSQL Python QlikView Redshift SDLC Spark SQL Terraform

Perks/benefits: Career development

Region: Asia/Pacific
Country: India
