Lead Data Engineer

Bengaluru, Karnataka, India


Our Data Engineering team is at the heart of this venture, focused on getting smart ideas into the hands of our customers. We're looking for people who have a curious mindset, thrive in collaborative squads, and are passionate about new technology. By their nature, our people are also solution-oriented, commercially savvy and have a head for fintech. We work in tribes and squads that focus on specific products and projects – and depending on your strengths and interests, you'll have the opportunity to move between them.

As a Lead Data Engineer at JPMorgan Chase within the International Consumer Bank, you will be part of a flat-structured organization. You will deliver end-to-end, cutting-edge solutions as cloud-native microservices, leveraging the latest technologies and industry best practices. You are expected to contribute to the design and architecture of these solutions while remaining involved across all stages of the SDLC.

While we’re looking for professional skills, culture is just as important to us. We understand that everyone's unique – and that diversity of thought, experience and background is what makes a good team great. By bringing people with different points of view together, we can represent everyone and truly reflect the communities we serve. This way, there's scope for you to make a huge difference – on us as a company, and on our clients and business partners around the world.

Job responsibilities:

  • Work with AWS cloud services (S3, the AWS Glue Data Catalog, Amazon Athena) and Spark to publish data correctly
  • Create plans for the development and delivery of product data to support strategic business objectives, business operations, advanced analytics, and metrics and reporting.
  • Work with key partners to drive an understanding of the data and its use within the business. Provide subject matter expertise with respect to the content and use of data in the product and associated business area.
  • Identify the scope of critical data within the product (inflows/outflows and key dependencies), ensuring that prioritized data is well-documented as to its meaning and purpose, and classified with metadata to enable its understanding and control.
  • Support the aligned Data & Analytics lead for their product by identifying data required to be integrated into analytics platforms to support analytics projects.
  • Document requirements for the accuracy, completeness, and timeliness of data within the product, and coordinate resources to deliver data quality requirements. Develop processes and procedures to identify, monitor, and mitigate data risks for data in the product, including risks related to data protection, data retention and destruction, data storage, data use, and data quality.
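The responsibilities above center on documenting and enforcing data-quality requirements (completeness, timeliness). As a minimal sketch of what such a check might look like in practice (field names, rules, and thresholds are illustrative, not taken from this posting):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class QualityReport:
    complete: bool   # no missing values in required fields
    fresh: bool      # newest record arrived within the agreed window
    issues: list

def check_quality(rows, required_fields, max_age_hours=24):
    """Check completeness and timeliness of a non-empty batch of records.

    Illustrative rules only; a real product would derive these from its
    documented data-quality requirements.
    """
    issues = []
    # Completeness: every required field must be present and non-empty.
    for i, row in enumerate(rows):
        for field in required_fields:
            if not row.get(field):
                issues.append(f"row {i}: missing '{field}'")
    # Timeliness: the newest record must fall within the freshness window.
    newest = max(row["updated_at"] for row in rows)
    fresh = datetime.now(timezone.utc) - newest <= timedelta(hours=max_age_hours)
    if not fresh:
        issues.append(f"stale data: newest record at {newest.isoformat()}")
    complete = not any("missing" in issue for issue in issues)
    return QualityReport(complete=complete, fresh=fresh, issues=issues)
```

A check like this would typically run as part of the pipeline that publishes the data, with failures surfaced as monitored data-risk events.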

Required qualifications, capabilities and skills:

  • Formal training or certification in Data Engineering and 5+ years applied experience
  • Professional experience working in an agile, dynamic and customer facing environment
  • Recent hands-on professional experience (actively coding) working as a data engineer (back-end software engineer considered)
  • Understanding of distributed systems and cloud technologies (AWS, GCP, Azure, etc.)
  • Understanding of data streaming and scalable data processing frameworks (Kafka, Spark Structured Streaming, Flink, Beam, etc.)
  • Experience with SQL (any dialect) and data tools (e.g., dbt)
  • Experience in all stages of the software development lifecycle (requirements, design, architecture, development, testing, deployment, release, and support)
  • Experience with large-scale datasets, data lake, and data warehouse technologies at terabyte scale or above (ideally petabyte-scale datasets), with at least one of BigQuery, Redshift, or Snowflake
  • Experience with Infrastructure as Code (ideally Terraform) for cloud-based data infrastructure
  • Extensive knowledge of Python, or solid experience with a JVM language (Java/Scala/Kotlin, preferably Java 8+)

Preferred qualifications, capabilities and skills:

  • Experience with a scheduling system (Airflow, Azkaban, etc.)
  • Understanding of (distributed and non-distributed) data structures, caching concepts, CAP theorem
  • Understanding of security frameworks / standards and privacy
  • Experience automating deployment, releases, and testing in continuous integration and continuous delivery pipelines
  • A solid approach to writing unit level tests using mocking frameworks, as well as automating component, integration and end-to-end tests
  • Experience with containers and container-based deployment environments (Docker, Kubernetes, etc.)
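The testing qualification above asks for unit tests built on mocking frameworks. As an illustrative sketch of that style, using Python's standard-library `unittest.mock` (the function and clients are hypothetical, not part of the role's actual codebase):

```python
import unittest
from unittest.mock import MagicMock

def publish_row_count(table_name, warehouse_client, metrics_client):
    """Report a table's row count to a metrics service.

    Hypothetical function for illustration; the clients are injected
    as dependencies so tests can replace them with mocks.
    """
    count = warehouse_client.count_rows(table_name)
    metrics_client.gauge("rows_loaded", count, tags={"table": table_name})
    return count

class PublishRowCountTest(unittest.TestCase):
    def test_reports_count_to_metrics(self):
        # Mock both collaborators instead of touching a real warehouse.
        warehouse = MagicMock()
        warehouse.count_rows.return_value = 42
        metrics = MagicMock()

        self.assertEqual(publish_row_count("orders", warehouse, metrics), 42)
        warehouse.count_rows.assert_called_once_with("orders")
        metrics.gauge.assert_called_once_with(
            "rows_loaded", 42, tags={"table": "orders"}
        )
```

Injecting collaborators rather than constructing them inside the function is what makes this kind of isolated, mock-based unit test straightforward.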

JPMorgan Chase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

