Data Engineer

12 Smithfield Street, London, United Kingdom

Job Description

What is the opportunity?

We have an exciting opportunity for a Data Engineer to join the team in our London/Newcastle offices.

The successful candidate will work closely with business and technology teams across Wealth Management Europe (WME) to maintain and evolve the Data Lakehouse platform, focusing on ingesting and modelling new data and on adopting new technologies to improve the platform's performance and the accuracy of its data.

What will you do?

  • Develop and maintain the Data Lakehouse platform infrastructure using the Microsoft Azure technology stack, including Databricks and Data Factory.

  • Manage data pipelines consisting of a series of stages through which data flows (for example, from acquisition at the data source, through integration, to consumption for specific use cases), and create, maintain, and optimise these pipelines as workloads move from development to production.

  • Use SQL and PySpark to create new and modify existing notebooks, functions, and workflows that support efficient reporting and analytics for the business (a minimal sketch follows this list).

  • Create and maintain Dev, UAT, and Production environments, ensuring consistency across them.

  • Use innovative, modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, minimising manual, error-prone processes and improving productivity.

  • Use GitHub (or other version control tooling) competently, and run data and schema comparisons via Visual Studio.

  • Champion the DevOps process: ensure the latest techniques are used, that new or changed source code and data structures follow the agreed development and release processes, and that all productionised code is adequately documented and reviewed.

  • Identify internal process improvements across the end-to-end data lifecycle, automating manual processes and optimising data delivery for greater scalability.

  • Actively engage within the team and wider business areas to foster relationships and develop thought leadership.

  • Follow the established Agile working methodology and collaborate effectively in sprints, meetings, and standups.

  • Be curious and knowledgeable about new data initiatives, applying data and domain understanding to address new data requirements, and propose appropriate (and innovative) data ingestion, preparation, integration, and operationalisation techniques to meet those requirements optimally.
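For illustration only, here is a minimal PySpark sketch of the kind of notebook work described above, moving data from a raw (bronze) landing zone to a cleansed (silver) table and querying it with SQL. Every path, table, and column name below is a hypothetical placeholder, not part of RBC's actual platform.

    # A minimal, hypothetical PySpark notebook cell; all names are placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # pre-created in Databricks notebooks

    # Ingest: read raw files landed in the bronze zone (e.g. by an ADF copy activity).
    raw = spark.read.format("json").load("/mnt/lake/bronze/accounts/")

    # Integrate: cleanse and conform the data before modelling.
    clean = (
        raw.dropDuplicates(["account_id"])
           .filter(F.col("account_id").isNotNull())
           .withColumn("opened_date", F.to_date("opened_date"))
    )

    # Publish: persist as a Delta table for reporting and analytics.
    clean.write.format("delta").mode("overwrite").saveAsTable("silver.accounts")

    # Consume: downstream reporting can query the result directly with SQL.
    spark.sql("SELECT COUNT(*) AS accounts FROM silver.accounts").show()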

What do you need to succeed?

Must-have

  • At least two years of work experience in data management disciplines, including data integration, modelling, optimisation, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks.

  • At least two years of experience working in cross-functional teams and collaborating with business stakeholders in support of a departmental and/or multi-departmental data management and analytics initiative.

  • Strong experience with various data management architectures, such as Data Warehouse, Data Lake, and Data Hub, and with supporting processes such as data integration, governance, and metadata management.

  • Strong experience working with large, heterogeneous datasets to build and optimise data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies.

  • Strong experience with popular database programming languages for relational databases (SQL, T-SQL).

  • Knowledge of SQL-on-Hadoop tools and technologies, including Hive and others from an open-source perspective, and Azure Synapse Analytics (SQL Data Warehouse), Azure Data Factory (ADF), Databricks, and others from a commercial vendor perspective.

  • Adept in agile methodologies and capable of applying DevOps and, increasingly, DataOps principles to data pipelines to improve the communication, integration, reuse, and automation of data flows between data managers and consumers across an organisation (a minimal quality-gate sketch follows this list).

  • Basic experience working with data governance, data quality, and data security teams, specifically information stewards and privacy and security officers, to move data pipelines into production with appropriate data quality, governance, and security standards and certification. Ability to build quick prototypes and to translate them into data products and services in a diverse ecosystem.
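As a hedged illustration of the DataOps and data-quality points above, the sketch below shows one way a pipeline stage could be gated by automated checks before promotion towards production; the table name, columns, and checks are hypothetical placeholders, not a prescribed standard.

    # A hypothetical quality gate run before promoting a dataset; names are placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    df = spark.table("silver.accounts")

    total = df.count()
    nulls = df.filter(F.col("account_id").isNull()).count()
    dupes = total - df.dropDuplicates(["account_id"]).count()

    # Fail fast so a broken load never reaches UAT or Production.
    assert total > 0, "silver.accounts is empty"
    assert nulls == 0, f"{nulls} rows have a null account_id"
    assert dupes == 0, f"{dupes} duplicate account_id values found"
    print(f"Quality gate passed: {total} rows validated")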

Nice-to-have

  • Knowledge of Terraform

  • Experience with advanced analytics tools for object-oriented/functional scripting, using languages such as Python, Java, C++, Scala, R, and others.

What is in it for you?

We thrive on the challenge to be our best: progressive thinking to keep growing, and working together to deliver trusted advice that helps our clients thrive and our communities prosper. We care about each other, reaching our potential, making a difference to our communities, and achieving success that is mutual.

  • Leaders who support your development through coaching and managing opportunities.

  • Opportunities to work with the best in the field.

  • Ability to make a difference and lasting impact.

  • Work in a dynamic, collaborative, progressive, and high-performing team.

Agency Notice

RBC Group does not accept agency resumés. Please do not forward resumés to our employees or to any other company location. RBC Group only pays fees to agencies with which it has entered into a prior agreement, and in any event does not pay fees for unsolicited resumés. Please contact the Recruitment function for additional details.

#LI-SS2

Job Skills

Big Data Management, Cloud Computing, Database Development, Data Mining, Data Warehousing (DW), ETL Processing, Group Problem Solving, Quality Management, Requirements Analysis

Additional Job Details

Address:

12 Smithfield Street, London

City:

London

Country:

United Kingdom

Work hours/week:

35

Employment Type:

Full time

Platform:

WEALTH MANAGEMENT

Job Type:

Regular

Pay Type:

Salaried

Posted Date:

2025-07-01

Application Deadline:

2025-07-16

Note: Applications will be accepted until 11:59 PM on the day prior to the application deadline date above

Inclusion and Equal Opportunity Employment

At RBC, we believe an inclusive workplace that has diverse perspectives is core to our continued growth as one of the largest and most successful banks in the world. Maintaining a workplace where our employees feel supported to perform at their best, effectively collaborate, drive innovation, and grow professionally helps to bring our Purpose to life and create value for our clients and communities. RBC strives to deliver this through policies and programs intended to foster a workplace based on respect, belonging and opportunity for all.

Join our Talent Community

Stay in-the-know about great career opportunities at RBC. Sign up and get customized info on our latest jobs, career tips and Recruitment events that matter to you.

Expand your limits and create a new future together at RBC. Find out how we use our passion and drive to enhance the well-being of our clients and communities at jobs.rbc.com.
