Data Engineer (AI-Driven Pipelines & Research)

Manchester, UK


Department: IT & Change

Employment Type: Permanent - Full Time

Location: Manchester, UK


Description

Our Data Engineer will help build and scale the data infrastructure that powers our AI products. The role is hands-on and technically deep — ideal for someone who cares about data quality, robustness, and automation. You'll work closely with AI engineers to design pipelines that do more than move data: they clean, enrich, and understand it, increasingly using large language models and agents to automate complex steps in the process.

About the Team

You'll join a flat-structured team with a best-idea-wins culture, where engineers shape product direction. We value ownership and support each other: we want people who take responsibility but aren't afraid to ask for help when needed. Whilst our offices and extended teams are based in Manchester and London, we offer flexibility to work from anywhere in the UK for this role — though we're Europe-focused and love getting together for hackathons and team problem-solving when it matters.

About the role

  • Building and maintaining data pipelines in Python, with a focus on reliability, transparency, and scale.
  • Using LLMs to assist with data cleansing, enrichment, classification, and contextual tagging.
  • Experimenting with AI agents to automate complex research tasks and structured data extraction. 
  • Working with product and AI engineering teams to feed trustworthy data into fast-moving prototypes.
  • Designing workflows that transform noisy, semi-structured data into actionable insight.
  • Supporting experimentation and iteration — shipping fast and learning from what works. 

What we're looking for & more

  • Strong proficiency in Python and pandas (or Polars), and a track record of delivering working data systems.
  • Experience with common data formats (JSON, XML, CSV) and transforming unstructured data. 
  • Familiarity with modern cloud-native tooling (we use AWS — especially Lambda and Step Functions). 
  • Interest or experience in using LLMs for tasks like data enrichment or transformation. 
  • A mindset that treats pipelines as products — robust, debuggable, and always improving.
  • Curiosity about how AI can go beyond the model — helping automate research and discovery. 
It would be beneficial (though not essential) to have experience with tools like LangChain, Haystack, Pandas AI, or vector databases, as well as any prior projects involving agents for data understanding or research automation.

Sound like you? Great! Whilst a CV tells us part of your story, we'd love to see a short summary about you, with any relevant links (e.g. a Loom video). If it looks like a good fit, we'll reach out to arrange a Teams interview.

Whilst this position can be done from anywhere in the UK, you must already hold the right to work in the UK, as we unfortunately cannot provide sponsorship for this role.
