Lead Software Engineer - AWS, Java/Scala/Python, Spark

Bengaluru, Karnataka, India


We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.

As a Lead Software Engineer at JPMorgan Chase within the Data Platform Engineering - Corporate Technology, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for delivering critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

 

Job responsibilities

  • Executes creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Develops secure high-quality production code, and reviews and debugs code written by others
  • Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems
  • Leads evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture
  • Leads communities of practice across Software Engineering to drive awareness and use of new and leading-edge technologies
  • Adds to team culture of diversity, equity, inclusion, and respect

 

Required qualifications, capabilities, and skills

  • 5+ years of experience working in a big data environment using AWS with Java, Scala, Python, and Spark
  • Hands-on practical experience delivering system design, application development, testing, and operational stability
  • Proficient in one or more object-oriented programming languages, such as Scala, Java, or Python
  • Experience in Apache Spark for large-scale data processing
  • Proficient in application, data, and infrastructure architecture disciplines
  • Proficient in cloud-native architecture, design, and implementation across all systems
  • Proficient in Event-Driven Architecture
  • Proficient in application containerization
  • Proficient in building applications on public cloud (AWS, GCP, Azure), with AWS experience strongly preferred
  • Proficient in building applications for real-time streaming using Apache Spark Streaming, Apache Kafka, Amazon Kinesis, etc.
  • Proficient in building on emerging cloud serverless managed services to minimize or eliminate the physical/virtual server footprint
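As a rough illustration of the event-driven and streaming items above, here is a minimal in-memory publish/subscribe sketch in Python; it is a hypothetical stand-in for a real broker such as Kafka or Kinesis, and all names in it are illustrative, not part of any library named in this posting:

```python
from collections import defaultdict
from typing import Any, Callable


class EventBus:
    """Minimal in-memory stand-in for a message broker (e.g. Kafka, Kinesis)."""

    def __init__(self) -> None:
        # topic name -> list of handler callbacks
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        """Register a consumer callback for a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: Any) -> None:
        """Deliver an event to every consumer subscribed to the topic."""
        for handler in self._subscribers[topic]:
            handler(event)


# Usage: a consumer reacting to trade events on a "trades" topic
bus = EventBus()
received = []
bus.subscribe("trades", received.append)
bus.publish("trades", {"symbol": "AAPL", "qty": 100})
print(received)  # -> [{'symbol': 'AAPL', 'qty': 100}]
```

The same producer/consumer decoupling shown here is what the streaming stack (Spark Streaming consuming from Kafka or Kinesis) provides at scale, with durability and partitioning added.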

 Preferred qualifications, capabilities, and skills

  • Proficient in designing and developing data pipelines using Databricks Lakehouse to ingest, enrich, and validate data from multiple sources
  • Proficient in re-engineering and migrating on-premises data solutions to the public cloud
  • Proficient in implementing security solutions for data storage and processing in the public cloud
  • Strong understanding of traditional big data systems such as Hadoop, Impala, Sqoop, Oozie, Cassandra, Hive, and HBase
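The ingest/enrich/validate pipeline stages named above can be sketched in plain Python; this is a hypothetical illustration of the stage structure only, not Databricks- or Spark-specific code, and every function name here is an assumption for the example:

```python
def ingest(raw_rows: list[str]) -> list[dict]:
    """Parse raw CSV-like records into dicts (the ingest stage)."""
    return [dict(zip(("id", "amount"), row.split(","))) for row in raw_rows]


def enrich(rows: list[dict]) -> list[dict]:
    """Add derived fields; here, cast the amount to a float (the enrich stage)."""
    return [{**row, "amount": float(row["amount"])} for row in rows]


def validate(rows: list[dict]) -> list[dict]:
    """Keep only rows passing basic quality checks (the validate stage)."""
    return [row for row in rows if row["amount"] >= 0]


# Usage: chain the stages over three records; the negative amount is dropped
raw = ["1,250.0", "2,-5.0", "3,99.9"]
clean = validate(enrich(ingest(raw)))
print(clean)  # -> [{'id': '1', 'amount': 250.0}, {'id': '3', 'amount': 99.9}]
```

In a Lakehouse setting each stage would typically be a DataFrame transformation over tables rather than a list comprehension, but the staged shape of the pipeline is the same.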

 

 




Perks/benefits: Career development

Region: Asia/Pacific
Country: India
