Big Data Engineering Manager | MLOps

Gurgaon

dunnhumby

A global leader in Customer Data Science, retail media, and analytics, with expertise in working with brands, grocery retail, retail pharmacy, and retailer financial services.


dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First.

 

Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world’s most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First.

 

dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.

We are seeking a highly motivated and experienced Big Data Engineering Manager to lead the design, development, and delivery of robust, scalable data processing platforms. This leadership role involves overseeing the end-to-end execution of big data solutions while fostering innovation, ensuring high performance, and driving alignment with business goals. The ideal candidate will have a proven track record in managing technical teams, implementing distributed systems, and delivering high-quality big data architectures.

This leadership role within the Retail Media Platform is designed to address the evolving challenges of modern grocery retail. By unlocking new revenue streams, driving sales growth, and fostering stronger customer connections, this role is pivotal to shaping the future of retail media. Our team is committed to empowering brands and retailers to measure the impact of advertising across all channels with precision and efficiency, leveraging best-in-class dunnhumby customer data science to deliver seamless, data-driven insights.

Key Responsibilities

Platform Development & Delivery

  • Architect and deliver scalable data processing platforms for batch and streaming workflows.
  • Develop ETL pipelines, data ingestion frameworks, and data lake solutions.
  • Optimize platforms for performance, reliability, and scalability in cloud and hybrid environments.

Collaboration & Stakeholder Engagement

  • Partner with product, analytics, and business teams to define data platform needs.
  • Act as the technical liaison between business goals and engineering execution.

Technical Expertise

  • Build distributed data systems with tools like Spark, Flink, Kafka, and Hive.
  • Ensure data platform designs meet security and compliance standards.
  • Champion modern architecture practices, including serverless and event-driven designs.

Process & Quality Assurance

  • Establish best practices for CI/CD, testing, and code reviews.
  • Implement robust monitoring and observability frameworks.

Team Leadership & Management

  • Lead and mentor a team of data engineers, fostering growth and performance.
  • Cultivate a culture of innovation, collaboration, and excellence.

Innovation & Continuous Improvement

  • Evaluate and integrate emerging technologies to enhance platform capabilities.
  • Drive continuous improvement in engineering practices and infrastructure.

Skills & Qualifications

Must-Have Skills

  • Leadership: Proven experience in scaling and managing data engineering teams.
  • Big Data Tools: Hands-on expertise with Spark, Kafka, Hive, and Airflow.
  • Cloud Platforms: Proficiency in GCP, AWS, or Azure for big data solutions.
  • Data Architecture: Experience designing scalable, distributed data platforms.
  • Programming: Advanced skills in Python, Java, or Scala.
  • Processing Workflows: Expertise in building real-time and batch data pipelines.
  • DevOps: Proficiency in CI/CD, Docker, Kubernetes, and Terraform.
  • Data Governance: Understanding of regulations such as GDPR and CCPA.
  • Problem Solving: Ability to tackle complex challenges with innovative solutions.
  • Communication: Strong skills to align cross-functional stakeholders.

Nice-to-Have Skills

  • Experience with MLOps frameworks (e.g., Kubeflow, Vertex AI).
  • Knowledge of metadata management and data cataloging tools.
  • Familiarity with event-driven architectures and streaming frameworks.
  • Expertise with cloud-native big data services like BigQuery, Redshift, or Snowflake.

What you can expect from us

We won’t just meet your expectations. We’ll defy them. So you’ll enjoy the comprehensive rewards package you’d expect from a leading technology company. But also, a degree of personal flexibility you might not expect.  Plus, thoughtful perks, like flexible working hours and your birthday off.

You’ll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn.

And we don’t just talk about diversity and inclusion. We live it every day – with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you. For an informal and confidential chat, please contact stephanie.winson@dunnhumby.com to discuss how we can meet your needs.

Our approach to Flexible Working

At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work.

We believe that you will do your best at work if you have a healthy work/life balance. Some roles lend themselves to flexible options more than others, so if this is important to you, please raise it with your recruiter, as we are open to discussing agile working opportunities during the hiring process.

For further information about how we collect and use your personal information, please see our Privacy Notice.




Region: Asia/Pacific
Country: India
