Senior Lead Software Engineer - AWS, Python, PySpark, Databricks

Bengaluru, Karnataka, India

Be an integral part of an agile team that's constantly pushing the envelope to enhance, build, and deliver top-notch technology products.

As a Senior Lead Software Engineer at JPMorgan Chase within the Finance Planning and Analytics Services - Data Platform Team, you are an integral part of an agile team responsible for formulating and executing strategies for effective utilization of the data lake, designing and implementing data governance policies, overseeing the development and optimization of ETL processes for seamless data integration, and implementing a consumption strategy for the data lake. Drive significant business impact through your capabilities and contributions, and apply deep technical expertise and problem-solving methodologies to tackle a diverse array of challenges that span multiple technologies and applications.

Job responsibilities

  • Regularly provides technical guidance and direction to support the business and its technical teams, contractors, and vendors
  • Develops secure and high-quality production code, and reviews and debugs code written by others
  • Drives decisions that influence the product design, application functionality, and technical operations and processes
  • Serves as a function-wide subject matter expert in one or more areas of focus
  • Actively contributes to the engineering community as an advocate of firmwide frameworks, tools, and practices of the Software Development Life Cycle
  • Influences peers and project decision-makers to consider the use and application of leading-edge technologies
  • Adds to the team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 5+ years of experience in Data Management, Data Integration, Data Quality, Data Monitoring, and Analytics, with hands-on experience in cloud-based solutions (AWS)
  • Hands-on practical experience as a data engineer with proficiency in Python and big data technologies such as Apache Spark/PySpark (a brief sketch follows this list)
  • Hands-on practical experience delivering system design, application development, testing, and operational stability
  • Hands-on experience in preparing and integrating datasets to match reporting requirements
  • Expertise in SQL, Data Warehousing & Business Intelligence concepts
  • Experience with both SQL and NoSQL database systems, including creating and maintaining scalable database load processes, writing complex SQL queries, and ensuring optimal data storage and retrieval
  • Experience in using Databricks for big data analytics and processing
  • Experience in tackling design and functionality problems independently with little to no oversight
  • Experience with data orchestration tools such as Airflow and data integration tools such as Apache NiFi (see the DAG sketch after this list)
  • Strong hands-on experience with containerization technologies such as Docker and Kubernetes (EKS)
  • Strong fundamentals in data structures, caching, multithreading, messaging, and asynchronous communication
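
As a rough illustration of the PySpark work noted above, here is a minimal sketch of the read-transform-write pattern such ETL jobs typically follow; the bucket paths, column names, and aggregation are hypothetical and not taken from this posting:

    # Minimal PySpark ETL sketch. Paths, tables, and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("finance-etl-sketch").getOrCreate()

    # Read raw records from a (hypothetical) S3 landing zone.
    raw = spark.read.parquet("s3://example-bucket/landing/transactions/")

    # Basic data-quality filtering, then a reporting-friendly daily rollup.
    clean = (
        raw.filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
           .withColumn("trade_date", F.to_date("trade_ts"))
    )
    daily_totals = (
        clean.groupBy("trade_date", "account_id")
             .agg(F.sum("amount").alias("total_amount"))
    )

    # Write partitioned output to the (hypothetical) curated zone.
    daily_totals.write.mode("overwrite").partitionBy("trade_date").parquet(
        "s3://example-bucket/curated/daily_totals/"
    )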
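
Similarly, for the Airflow orchestration bullet, a minimal DAG sketch using the Airflow 2.x API; the dag_id, schedule, and task body are hypothetical:

    # Minimal Airflow DAG sketch; names and task logic are hypothetical.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_daily_load():
        # Placeholder: in practice this might trigger a Spark/Databricks job.
        print("running daily load")

    with DAG(
        dag_id="daily_load_sketch",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        PythonOperator(task_id="run_daily_load", python_callable=run_daily_load)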

Preferred qualifications, capabilities, and skills

  • Experience in leveraging cloud services for data storage, processing, and analytics
  • Working knowledge of Data Management/Data Quality rules development is a plus
  • Exposure to BI tools, especially Alteryx, Tableau, or Business Objects (BO), is an added advantage