Senior Data Science Engineer

Gurgaon

dunnhumby

Global leader in Customer Data Science, retail media and analytics; experts in working with brands, grocery retail, retail pharmacy, and retailer financial services.


dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First.

 

Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world’s most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First.

 

dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas, working with transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.

We are looking for a talented engineer to develop and maintain tools and products that make data science development work easier. The right candidate will be passionate about learning, delivering high-quality code, and applying cutting-edge technologies in the big data landscape, with an evangelical mindset for identifying, practising and promoting better ways of working in distributed computing for data science engineering.

What we expect from you 

  1. Core Engineering Responsibilities
  • Develop and maintain engineering tools/products needed for simple and efficient data science development. 
  • Analyse complex data pipelines to identify performance bottlenecks and suggest robust ways to optimize the workload within reasonable costs. 
  • Collaborate with Infrastructure and Data Science Platform teams to ensure robust, performant and scalable environments are available for building data science workflows. 
  2. Data Engineering & Processing Expertise
  • Proficiency in Hadoop, PySpark, Pandas, NumPy, and Python (version > 3.5). 
  • Strong working experience with data partitioning, joins, caching, handling data skewness, and general code optimization. 
  • Solid knowledge of SQL and Airflow (or equivalent orchestration tools like Apache NiFi, Luigi). 
  • Experience with file formats and storage optimization (e.g., Parquet, Avro). 
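To illustrate the kind of optimization work involved, here is a minimal sketch of key salting, a standard technique for handling data skewness in aggregations. It is shown in plain pandas/NumPy rather than Spark, and all names and data are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical skewed dataset: one "hot" customer dominates the rows.
df = pd.DataFrame({
    "customer_id": ["C1"] * 8 + ["C2", "C3"],
    "spend": [10.0] * 8 + [5.0, 7.0],
})

# Salting: append a suffix to each key so a hot key is split into
# several sub-keys, spreading the aggregation work evenly.
n_salts = 4
df["salted_key"] = (
    df["customer_id"] + "_" + (np.arange(len(df)) % n_salts).astype(str)
)

# First pass: aggregate per salted sub-key (cheap, well balanced).
partial = df.groupby("salted_key", as_index=False)["spend"].sum()

# Second pass: strip the salt and roll the partial sums back up.
partial["customer_id"] = partial["salted_key"].str.rsplit("_", n=1).str[0]
totals = partial.groupby("customer_id", as_index=False)["spend"].sum()
```

In PySpark the same idea applies at partition level: salting splits a hot key across executors before a final roll-up, avoiding a single straggler task.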
  3. Cloud & Kubernetes Knowledge
  • Strong working knowledge of Kubernetes and Docker for deploying, managing, and scaling applications. 
  • Experience with cloud platforms (Azure/GCP) to leverage Kubernetes services, such as GKE (Google Kubernetes Engine) or AKS (Azure Kubernetes Service), for cloud-based, containerized workflows. 
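As an illustration of the containerized workflows mentioned above, a minimal Kubernetes Deployment manifest might look like the following. All names, the image reference and the resource figures are placeholders, not a prescribed setup:

```yaml
# Illustrative only: a small Deployment for a containerized data science tool.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ds-tool                  # hypothetical tool name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ds-tool
  template:
    metadata:
      labels:
        app: ds-tool
    spec:
      containers:
        - name: ds-tool
          image: registry.example.com/ds-tool:1.0   # placeholder image
          resources:
            requests:
              cpu: "500m"
              memory: 1Gi
            limits:
              cpu: "1"
              memory: 2Gi
```

The same manifest applies unchanged on GKE or AKS; the managed services differ mainly in how the cluster itself is provisioned and scaled.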
  4. Application Development & Integration
  • Experience in developing web applications using modern frontend and backend frameworks (e.g., Node.js, React, Dash, Django, Voila, Streamlit). 
  • Proficiency in web application development and API integration (REST APIs, Web services) to support data science tools and workflows. 
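As a sketch of the API integration side, the following minimal REST health-check endpoint uses only the Python standard library; the route name and payload are hypothetical:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ToolAPI(BaseHTTPRequestHandler):
    """Tiny REST handler exposing a single /health endpoint."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve from a background thread.
server = HTTPServer(("127.0.0.1", 0), ToolAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Call the endpoint as a client would.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    payload = json.loads(resp.read())

server.shutdown()
```

In practice such endpoints would sit behind a framework like Django or alongside a Dash/Streamlit app, but the request/response shape is the same.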
  5. Automation & DevOps Skills
  • Proficiency in shell scripting and working knowledge of DevOps/DataOps practices. 
  • Familiarity with CI/CD pipelines and automation for Kubernetes deployments. 
  • Experience in re-engineering, automating, and productionizing code, particularly for containerized environments. 
  6. Advanced & Good-to-have Skills
  • Experience optimizing large-scale NLP or generative models in production, with knowledge of GPU utilization, distributed computing, and cloud-based scaling for efficient model workflows. 
  • Exposure to handling unstructured data types, including text and images, with an emphasis on designing and optimizing workflows for large-scale data processing. 

 

What you can expect from us

We won’t just meet your expectations. We’ll defy them. So you’ll enjoy the comprehensive rewards package you’d expect from a leading technology company. But also, a degree of personal flexibility you might not expect.  Plus, thoughtful perks, like flexible working hours and your birthday off.

You’ll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn.

And we don’t just talk about diversity and inclusion. We live it every day – with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One and dh Thrive as the living proof. Everyone’s invited.

Our approach to Flexible Working

At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work.

We believe that you will do your best at work if you have a work / life balance. Some roles lend themselves to flexible options more than others, so if this is important to you please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process.

For further information about how we collect and use your personal information, please see our Privacy Notice.
