Data Engineer

India

Capco

Capco is a global management and technology consultancy dedicated to the financial services and energy industries.


Job Title: Jr. Data Engineer 

About Us

Capco, a Wipro company, is a global technology and management consulting firm. We were awarded Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities worldwide, we support 100+ clients across the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO?

You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry. These projects will transform the financial services industry.

MAKE AN IMPACT

We bring innovative thinking, delivery excellence, and thought leadership to help our clients transform their businesses. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK

Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT

With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION

We believe that diversity of people and perspective gives us a competitive advantage.

JOB SUMMARY:

  • Position: Jr Consultant
  • Location: Pune / Bangalore / Hyderabad
  • Band: M1/M2 (3 to 7 years)

 Role Description:

Must-have Requirements:

  • PySpark or Scala development and design.
  • Experience using scheduling tools such as Airflow.
  • Experience with most of the following technologies: Apache Hadoop, PySpark, Apache Spark, YARN, Hive, Python, ETL frameworks, MapReduce, SQL, RESTful services.
  • Sound knowledge of working on the Unix/Linux platform.
  • Hands-on experience building data pipelines using Hadoop components: Hive, Spark, Spark SQL.
  • Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible, Jenkins), and requirement management in JIRA.
  • Understanding of big data modelling using relational and non-relational techniques.
  • Experience debugging code issues and communicating findings to the development team.
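To illustrate the kind of extract-transform-load work these requirements describe, here is a minimal sketch using only the Python standard library. It is an assumption-laden illustration, not Capco's codebase: at production scale the same pattern would typically run on PySpark/Hive and be scheduled by Airflow, and all names here (RAW_CSV, `credits`, the filter rule) are hypothetical.

```python
# Minimal ETL sketch using only the Python standard library.
# Illustrates the extract -> transform -> load pattern that tools
# like PySpark, Hive, and Airflow orchestrate at scale.
import csv
import io
import sqlite3

# Hypothetical raw input, standing in for files landed in HDFS/Hive.
RAW_CSV = """account,amount,currency
A-100,250.00,USD
A-101,-40.50,USD
A-100,99.99,EUR
"""

def extract(source: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and keep only positive USD amounts."""
    return [
        (r["account"], float(r["amount"]))
        for r in rows
        if r["currency"] == "USD" and float(r["amount"]) > 0
    ]

def load(rows: list[tuple], conn: sqlite3.Connection) -> int:
    """Load: write the cleaned rows into a target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS credits (account TEXT, amount REAL)")
    conn.executemany("INSERT INTO credits VALUES (?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM credits").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW_CSV)), conn)
print(loaded)  # only the positive USD row survives the filter
```

In a PySpark version, `extract` would become a DataFrame read, `transform` a chain of filters and casts, and `load` a write to a Hive table, but the three-stage shape is the same.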

Good-to-have Requirements:

  • Experience with Elasticsearch.
  • Experience developing Java APIs.
  • Experience with data ingestion.
  • Understanding of or experience with cloud design patterns.
  • Exposure to DevOps and Agile methodologies such as Scrum and Kanban.

We offer:

  • A work culture focused on innovation and creating lasting value for our clients and employees
  • Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
  • A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
  • A diverse, inclusive, meritocratic culture

 

#LI-Hybrid



Category: Engineering Jobs

Tags: Agile Airflow Ansible APIs Banking Big Data Consulting Consulting firm Data pipelines DevOps ETL Git GitHub Hadoop Java Jenkins Jira Kanban Linux Map Reduce Pipelines PySpark Python Scala Scrum Spark SQL

Perks/benefits: Career development Flat hierarchy

Region: Asia/Pacific
Country: India
