Data Engineer

Bangalore, India

NatWest Group




Join us as a Data Engineer

  • This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
  • You’ll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers’ and the bank’s data safe and secure
  • By participating actively in the data engineering community, you’ll deliver opportunities to support the bank’s strategic direction while building your network across the bank
  • We're offering this role at associate level

What you'll do

As a Data Engineer, you’ll play a key role in delivering value for our customers by building data solutions. You’ll carry out data engineering tasks to build a scalable data architecture: extracting data from source systems, transforming it so it’s usable by analysts and data scientists, and loading it into data platforms.
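To illustrate the extract-transform-load flow this role centres on, here is a minimal sketch in plain Python. The field names and data are hypothetical, and SQLite stands in for a real data platform; a production pipeline at this scale would typically use PySpark or similar tools named in the skills section below.

```python
import csv
import io
import sqlite3

# Extract: read raw records from a CSV source (hypothetical fields).
raw = io.StringIO(
    "customer_id,balance\n"
    "C001,1200.50\n"
    "C002,\n"  # incomplete record, to be cleaned in the transform step
    "C003,310.00\n"
)
rows = list(csv.DictReader(raw))

# Transform: drop incomplete records and cast types so the data is
# usable by analysts downstream.
clean = [
    (r["customer_id"], float(r["balance"]))
    for r in rows
    if r["balance"]
]

# Load: write the cleaned records into a data platform (SQLite here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balances (customer_id TEXT, balance REAL)")
conn.executemany("INSERT INTO balances VALUES (?, ?)", clean)

total = conn.execute("SELECT COUNT(*), SUM(balance) FROM balances").fetchone()
print(total)  # (2, 1510.5)
```

Automating each of these stages end to end, with no manual steps in between, is what the pipeline-building responsibilities below refer to.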

We’ll also expect you to be:

  • Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development
  • Building automated data engineering pipelines through the removal of manual stages
  • Working closely with core technology and architecture teams in the bank to build data knowledge and data solutions
  • Developing a clear understanding of data platform cost levers to build cost-effective and strategic solutions

The skills you'll need

To be successful in this role, you’ll need to be an entry-level programmer and Data Engineer with a qualification in Computer Science or Software Engineering. You’ll also need a good understanding of data usage and dependencies with wider teams and the end customer, as well as a proven track record in extracting value and features from large-scale data.

You'll have experience using Python, PySpark, Scala, Hadoop, and Spark, ideally with Neo4j or another graph database, Hive, Sqoop, MapReduce, Flume, Kafka, and Oozie. Tableau, QlikView, and AWS would also be beneficial.

As well as this, you'll demonstrate:

  • Good critical thinking and proven problem solving capabilities
  • Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
  • Extensive experience using RDBMS, ETL pipelines, Hadoop, and SQL
  • A good understanding of modern code development practices

Hours

45

Job posting closing date

22/01/2025





