Data Engineer (Brahma)

United Kingdom


DNEG

We are DNEG - delivering award-winning visual effects, animation, and creative technologies for film, TV, and immersive content.



Brahma is a pioneering enterprise AI company developing Astras, AI-native products built to help enterprises and creators innovate at scale. Brahma enables teams to break creative bottlenecks, accelerate storytelling, and deliver standout content with speed and efficiency. Part of the DNEG Group, Brahma brings together Hollywood’s leading creative technologists, innovators in AI and Generative AI, and thought leaders in the ethical creation of AI content.

Job Description
As a Data Engineer, you’ll architect and maintain the pipelines that power our products and services. You’ll work at the intersection of ML, media processing, and infrastructure, owning the data tooling and automation layer that enables scalable, high-quality training and inference. If you’re a developer who loves solving tough problems and building efficient systems, we want you on our team.

Key Responsibilities
  • Design and maintain scalable pipelines for ingesting, processing, and validating datasets, with a primary focus on visual and voice data.
  • Partner with other teams to identify workflow optimisation opportunities, then design and build automation tooling using AI-driven tools, custom model integrations, and scripts.
  • Write and maintain tests for pipeline reliability.
  • Build and maintain observability tooling in collaboration with other engineers to track data pipeline health and system performance.
  • Collaborate with data scientists, operators, and product teams to deliver data solutions.
  • Debug and resolve complex data issues to ensure system performance.
  • Optimise storage, retrieval, and caching strategies for large media assets across environments.
  • Deploy scalable data infrastructure across cloud platforms and on-premise environments using containerisation.
  • Deepen your knowledge of machine learning workflows to support AI projects.
  • Stay current with industry trends and integrate modern tools into our stack.
 Must Haves
  • 3+ years in data engineering or related backend/infrastructure role.
  • Strong programming skills in Python or similar languages.
  • Experience with software development lifecycle (SDLC) and CI/CD pipelines.
  • Proven experience building and testing data pipelines in production.
  • Proficiency in Linux.
  • Solid SQL knowledge.
  • Experience with Docker or other containerisation technologies.
  • Proactive approach to solving complex technical challenges.
  • Passion for system optimisation and continuous learning.
  • Ability to adapt solutions for multimedia data workflows.
 

Nice to Have

  • Experience with Kubernetes (k8s).
  • Knowledge of machine learning or AI concepts.
  • Familiarity with ETL tools or big data frameworks.
  • Familiarity with cloud platforms (e.g., AWS, GCP, Azure).
 

About You

  • Innovative
  • Enjoys a challenge
  • Adaptable
  • Calm under pressure
  • Strong communicator