Global Data Engineer

Billericay, England, United Kingdom - Remote

epay, a Euronet Worldwide Company

epay leverages a global technology stack to connect brands to consumers in stores, online, via mobile devices or wallets, and through ATMs.

Overview of the job:

At epay, data is at the core of everything we do. We’ve built a global team of Developers, Engineers, and Analysts who transform complex datasets into actionable insights for both internal stakeholders and external partners. As we continue to scale our commercial data services, we are looking to hire a mid-level Data Engineer (2+ years’ experience) who has worked with Azure and Databricks, and can contribute immediately across pipeline development, categorisation, and optimisation for AI/ML use cases.

You’ll work with diverse datasets spanning prepaid, financial services, gambling, and payments, supporting business-critical decisions with high-quality, well-structured data. While engineering will be your focus, you’ll also collaborate across analytics and product functions, switching between roles where needed to meet team goals.

This role includes occasional global travel and requires flexibility across time zones when collaborating with international teams.

 

This role is remote-based; however, regular attendance at one of three locations is required (Billericay, Essex; Bracknell; and Baker Street, London).

The ideal candidate will also need to be able to travel globally when required.

 

Three best things about the job:

  • Be part of a high-performing team building modern, scalable data solutions used globally.
  • Work hands-on with cutting-edge Azure technologies, with a strong focus on Databricks and Python development.
  • Play a key role in evolving epay’s data architecture and ML-enablement strategies.

In the first few months, you would have:

  • Taken ownership of a data pipeline or transformation flow within Databricks and contributed to its optimisation and reliability.
  • Worked across raw and curated datasets to deliver categorised and enriched data ready for analytics and machine learning use cases.
  • Provided support to analysts and financial stakeholders to validate and improve data accuracy.
  • Collaborated with the wider team to scope, test, and deploy improvements to data quality and model inputs.
  • Brought forward best practices from your prior experience to help shape how we clean, structure, and process data.
  • Demonstrated awareness of cost, latency, and scale when deploying cloud-based data services.

The ideal candidate should understand that they are part of a team and be willing to take on different roles so the team can adjust its work more effectively.

Responsibilities of the role:

  • Data Pipeline Development: Build and maintain batch and streaming pipelines using Azure Data Factory and Azure Databricks.
  • Data Categorisation & Enrichment: Structure unprocessed datasets through tagging, standardisation, and feature engineering.
  • Automation & Scripting: Use Python to automate ingestion, transformation, and validation processes.
  • ML Readiness: Work closely with data scientists to shape training datasets, applying sound feature selection techniques.
  • Data Validation & Quality Assurance: Ensure accuracy and consistency across data pipelines with structured QA checks.
  • Collaboration: Partner with analysts, product teams, and engineering stakeholders to deliver usable and trusted data products.
  • Documentation & Stewardship: Document processes clearly and contribute to internal knowledge sharing and data governance.
  • Platform Scaling: Monitor and tune infrastructure for cost-efficiency, performance, and reliability as data volumes grow.
  • On-Call Support: Participate in the on-call rota to support the production environment, ensuring timely resolution of incidents and maintaining system stability outside standard working hours.

 

Requirements

What you will need:

The ideal candidate will be proactive, willing to develop and implement innovative solutions, and capable of the following:

 

Recommended:

  • 2+ years of professional experience in a data engineering or similar role.
  • Proficiency in Python, including use of libraries for data processing (e.g., pandas, PySpark).
  • Experience working with Azure-based data services, particularly Azure Databricks, Data Factory, and Blob Storage.
  • Demonstrable knowledge of data pipeline orchestration and optimisation.
  • Understanding of SQL for data extraction and transformation.
  • Familiarity with source control, deployment workflows, and working in Agile teams.
  • Strong communication and documentation skills, including translating technical work to non-technical stakeholders.

Preferred:

  • Exposure to machine learning workflows or model preparation tasks.
  • Experience working in a financial, payments, or regulated data environment.
  • Understanding of monitoring tools and logging best practices (e.g., Azure Monitor, Log Analytics).
  • Awareness of cost optimisation and scalable design patterns in the cloud.