Data Engineer

Support Office India

Circle K

Circle K is a convenience store and gas station chain offering a wide variety of products for people on the go. Visit us today!



Job Description

Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with 16,700 stores in 31 countries serving more than 9 million customers each day.

It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. With strong data engineering expertise, this position will play a key role in partnering with our Technical Development stakeholders to enable analytics for long-term success.

About the role:

We are looking for a Data Engineer with a collaborative, “can-do” attitude who is committed and determined to make their team successful, and who has experience implementing technical solutions as part of a broader data transformation strategy. This role is responsible for hands-on activities such as data ingestion, transformation, and storage, including delivery of data from enterprise business systems to the data lake/delta lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by transforming data into actionable business outcomes. The Data Engineer will create, troubleshoot, and support ETL pipelines and the cloud infrastructure involved in the process, and will support the visualization teams.

Responsibilities:

  • Collaborate with business stakeholders and other technical team members to develop, test, and support the data applications that are most relevant to business needs and goals.
  • Demonstrate technical and domain knowledge of relational (RDBMS) and NoSQL databases, data warehouses, and data lakes/delta lakes, among other structured and unstructured storage options.
  • Determine solutions that are best suited to develop an optimal data pipeline for a particular data source.
  • Develop data flow pipelines to extract, transform, and load data from various data sources, and provide regular operational support for smooth business operations using industry best practices.
  • Deliver efficient ETL/ELT development using Azure cloud services, along with testing and operations/support processes (root-cause analysis of production issues, code/data fix strategy, monitoring, and maintenance).
  • Write custom code/scripts to extract the data from unstructured/semi-structured sources.
  • Provide clear documentation for delivered solutions and processes, and share it with the appropriate corporate stakeholders.
  • Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability).
  • Stay current with and adopt new tools and applications to ensure high quality and efficient solutions.
  • Build cross-platform data strategy to aggregate multiple sources and process development datasets.
  • Communicate proactively with stakeholders; mentor and guide junior team members through regular knowledge transfer (KT) and reverse-KT sessions, and help them identify production bugs/issues and recommend resolutions.
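For illustration, the ingestion and transformation work described above can be sketched in plain Python. This is a minimal sketch only: the record fields, sample values, and helper names are hypothetical, and in this role the equivalent logic would typically run inside ADF/Databricks pipelines rather than standalone scripts.

```python
# Minimal extract-transform-load sketch using only the standard library.
# Field names ("store", "amount") and sample records are hypothetical.
import json

def extract(lines):
    """Parse semi-structured JSON-lines input, skipping malformed rows."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            # In a production pipeline, malformed rows would be routed to a
            # reject/audit table instead of being silently dropped.
            continue

def transform(records):
    """Normalize types and drop rows without a usable amount."""
    for rec in records:
        try:
            rec["amount"] = float(rec["amount"])
        except (KeyError, TypeError, ValueError):
            continue
        yield rec

def load(records):
    """Stand-in for a warehouse/lake write: collect rows into a list."""
    return list(records)

raw = ['{"store": "A", "amount": "12.5"}', 'not json', '{"store": "B"}']
rows = load(transform(extract(raw)))
# Only the first record survives: the malformed line and the row missing
# an amount are filtered out during extract/transform.
```

The generator-based stages mirror how a pipeline separates ingestion, cleansing, and loading, so each stage can be tested and monitored independently.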

Qualifications

  • Full-time bachelor’s degree in engineering/technology, computer science, information technology, or a related field.
  • 3+ years of relevant experience in ETL/ELT design, development, testing, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment.
  • 3+ years of extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses, and Power BI.
  • Hands-on experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Python/PySpark, Logic Apps, Key Vault, and Azure Functions.
  • Good experience in defining and enabling data quality standards for auditing and monitoring purposes.
  • In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts and production support of existing applications.
  • Hands-on experience with REST API development, JSON, and API management using Azure.
  • Strong collaboration and teamwork skills; excellent written and verbal communication skills.
  • Self-starter and motivated with ability to work in a fast-paced development environment.
  • Proficiency with development tooling such as the Azure cloud, database servers, Git, continuous integration, unit-testing tools, and defect management tools.
  • Working experience with CI/CD using Azure DevOps and change and release management.

Preferred Skills:

  • Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management).
  • Working knowledge of DevOps processes (CI/CD), version control with Git and builds with Jenkins, Master Data Management (MDM), and data quality tools.
  • Strong experience in ETL/ELT development, QA, and operations/support processes (root-cause analysis of production issues, code/data fix strategy, monitoring, and maintenance).
  • Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file storage (Blob Storage), and Python/Unix shell scripting.
  • Azure Data Factory (ADF) or Databricks certification is a plus.

Technologies we use: Azure Data Factory, Databricks, Azure Synapse, Azure Tabular, Azure Functions, Logic Apps, Key Vault, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake

#LI-DS1



Category: Engineering Jobs


Region: Asia/Pacific
Country: India
