Data Engineer, AVP

Gurugram

NatWest Group


Join us as a Data Engineer

  • We’re looking for someone to build effortless, digital-first customer experiences to help simplify our organisation and keep our data safe and secure
  • Day-to-day, you’ll develop innovative, data-driven solutions through data pipelines, modelling and ETL design, aspiring to be commercially successful through insights
  • If you’re ready for a new challenge, and want to bring a competitive edge to your career profile by delivering streaming data ingestion, this could be the role for you
  • We're offering this role at associate vice president level

What you’ll do

Your daily responsibilities will include developing a comprehensive knowledge of our data structures and metrics, and advocating for change where it’s needed for product development. You’ll also provide transformation solutions and carry out complex data extractions.

We’ll expect you to develop a clear understanding of data platform cost levels to build cost-effective and strategic solutions. You’ll also source new data by using the most appropriate tooling before integrating it into the overall solution to deliver it to our customers.

You’ll also be responsible for:

  • Driving customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tools to build data solutions
  • Participating in the data engineering community to deliver opportunities to support our strategic direction
  • Carrying out complex data engineering tasks to build a scalable data architecture and transform data so that it’s usable by analysts and data scientists
  • Building advanced automation of data engineering pipelines through the removal of manual stages, as illustrated in the sketch after this list
  • Leading on the planning and design of complex products and providing guidance to colleagues and the wider team when required
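
To give a flavour of what removing a manual stage can look like in practice, here’s a minimal, illustrative sketch of a scheduled pipeline, assuming an Airflow 2.4+ style setup. The DAG id, schedule and refresh task are hypothetical examples, not a description of our actual pipelines.

```python
# Illustrative only: a minimal Airflow DAG (assuming Airflow 2.4+) that turns a
# manual data refresh into a scheduled, automated pipeline stage.
# All names here are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def refresh_customer_data():
    # Placeholder for the real extract/transform logic.
    print("Refreshing customer data...")


with DAG(
    dag_id="daily_customer_refresh",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # run once a day instead of by hand
    catchup=False,
) as dag:
    PythonOperator(
        task_id="refresh_customer_data",
        python_callable=refresh_customer_data,
    )
```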

The skills you’ll need

To be successful in this role, you’ll have an understanding of data usage and dependencies with wider teams and the end customer. You’ll also have experience in Python, PySpark or Scala.

We’ll expect you to have experience of ETL technical design; data quality testing, cleansing and monitoring; data sourcing, exploration and analysis; and data warehousing and data modelling capabilities.
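
As a purely illustrative sketch of that kind of ETL and data quality work, the PySpark snippet below extracts raw records, cleanses and deduplicates them, logs a simple quality metric and loads the curated output. The paths and column names are hypothetical.

```python
# Illustrative only: a simple extract-transform-load step with a basic
# data quality check. Source/target paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, trim

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw customer records.
raw = spark.read.parquet("s3://example-bucket/raw/customers/")

# Transform: cleanse whitespace and drop records that fail basic quality rules.
clean = (
    raw.withColumn("email", trim(col("email")))
       .filter(col("customer_id").isNotNull())
       .dropDuplicates(["customer_id"])
)

# Monitor: a simple quality metric that could feed alerting.
rejected = raw.count() - clean.count()
print(f"Records rejected by quality rules: {rejected}")

# Load: write the curated output for downstream modelling and analysis.
clean.write.mode("overwrite").parquet("s3://example-bucket/curated/customers/")
```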

You’ll also need:

  • Experience of using programming languages alongside knowledge of data and software engineering fundamentals
  • Good knowledge of modern code development practices
  • Hands-on experience with streaming technologies such as Kafka and Spark Streaming, as illustrated in the sketch after this list
  • A good understanding of Spark processing architecture and EMR
  • Good knowledge of AWS cloud services such as S3, Airflow, EMR and CloudWatch, and of data warehousing with Snowflake
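
The sketch below is a minimal, illustrative example of the streaming side of this stack, assuming PySpark Structured Streaming reading from Kafka and landing data in S3. The broker, topic, schema and bucket names are all hypothetical.

```python
# Illustrative only: consume events from Kafka with Structured Streaming and
# land them in S3 as Parquet. Broker, topic, schema and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("streaming-ingestion-sketch").getOrCreate()

# Expected shape of the incoming events (hypothetical schema).
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "customer-events")            # hypothetical topic
    .load()
)

# Parse the JSON payload into typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
       .select("e.*")
)

# Write the parsed events to S3 with checkpointing for fault tolerance.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://example-bucket/events/")  # hypothetical bucket
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```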

Hours

45

Job Posting Closing Date:

01/11/2024

