Lead Data Engineer
Support Office India
Circle K
Circle K is a convenience store and gas station chain offering a wide variety of products for people on the go.

Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 16,700 stores in 31 countries serving more than 9 million customers each day. At Circle K, we are building a best-in-class global data engineering practice to support intelligent business decision-making and drive value across our retail ecosystem. As we scale our engineering capabilities, we’re seeking a Lead Data Engineer to serve as both a technical leader and people coach for our India-based Data Enablement pod.
This role oversees the design, delivery, and maintenance of critical cross-functional datasets and reusable data assets while managing a group of talented engineers in India. It is a dual role: contributing hands-on to engineering execution while mentoring and developing engineers in their technical careers.
About the role
The ideal candidate combines deep technical acumen, stakeholder awareness, and a people-first leadership mindset. You’ll collaborate with global tech leads, managers, platform teams, and business analysts to build trusted, performant data pipelines that serve use cases beyond traditional data domains.
Responsibilities
Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
Lead the technical execution of non-domain-specific initiatives (e.g., reusable dimensions, TLOG standardization, enablement pipelines)
Architect data models and reusable layers consumed by multiple downstream pods
Guide platform-wide patterns such as parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
Mentor and coach team members
Partner with product and platform leaders to ensure engineering consistency and delivery excellence
Act as an L3 escalation point for operational data issues impacting foundational pipelines
Own engineering best practices, sprint planning, and quality across the Enablement pod
Contribute to platform discussions and architectural decisions across regions
Job Requirements
Education
Bachelor’s or master’s degree in Computer Science, Engineering, or a related field
Relevant Experience
7–9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
Knowledge and Preferred Skills
Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices.
Solid grasp of data governance, metadata tagging, and role-based access control.
Proven ability to mentor and grow engineers in a matrixed or global environment.
Strong verbal and written communication skills, with the ability to operate cross-functionally.
Certifications in Azure, Databricks, or Snowflake are a plus.
Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management).
Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control tools, Master Data Management (MDM), and data quality tools.
Strong experience in ETL/ELT development, QA, and operations/support processes (root cause analysis of production issues, code/data fix strategy, monitoring and maintenance).
Hands-on experience with databases (Azure SQL DB, Snowflake, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting.
Technologies we use: Databricks, Azure SQL DW/Synapse, Snowflake, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, scripting (PowerShell, Bash), Git, Terraform, Power BI
#LI-DS1