Senior Data Engineer - Contract

Ontario, Canada | WFH | CAN-ON


The Company

At Canopy Growth, our mission is clear: improve lives, end cannabis prohibition, and strengthen communities. We believe that cannabis can be a force for good. We’re building a consumer-centric organization that is focused on sharing the transformational potential of cannabis with the world. We will achieve this through an innovative and disruptive portfolio of cannabis and hemp-derived products.

Canopy Growth is the world's leading cannabis and hemp company. We recognize that employees are at the core of our success, and we take pride in a corporate culture that emphasizes inclusiveness, collaboration, and diversity.

Our employees come from a wide range of backgrounds, each bringing their own unique skills and talents to the table, working together to continue our incredible momentum of growth. If you are interested in building global challenger brands, scaling a business, and working in a values-driven environment, we want to hear from you!

The Opportunity

Canopy Growth is currently seeking a Senior Data Engineer to join the IT team. The Senior Data Engineer is responsible for the operational support, maintenance, and ongoing development of our data architecture, with the goal of supporting data analytics and data science. You will work closely with analysts and other stakeholders to build robust data pipelines, optimize data storage, and ensure data integrity. This role requires deep expertise in AWS technologies, a strong understanding of data engineering best practices, and the ability to lead complex projects.

Responsibilities 

  • Design and implement data architectures that meet business needs and support data integration, storage, and analytics. Develop strategies for data management and optimization. 

  • Build data pipelines while ensuring scalability, reliability, and performance of data systems. 

  • Develop and maintain ETL (Extract, Transform, Load) pipelines to integrate data from various sources into data warehouses and data lakes. 

  • Create and manage data models that support business intelligence and analytical needs. Ensure data is structured for efficient querying and analysis. 

  • Develop and support real-time data pipelines built on AWS technologies, including EMR, Lambda, Glue, S3, Kinesis, Redshift/Spectrum, and Athena 

  • Work with vendor partners to design, build and deploy projects to data lake and data warehouse 

  • Reverse engineer and document existing data pipelines 

  • Monitor and optimize data systems for performance, including query optimization, indexing, and efficient data storage practices. 

  • Work closely with cross-functional teams including data scientists, business analysts, and product managers to understand data requirements and deliver actionable insights. 

  • Ensure data accuracy, integrity, and consistency across systems. Implement data validation and cleansing processes. 

  • Automate data workflows and processes to increase efficiency and reduce manual intervention. Utilize tools and technologies for orchestration and automation. 

  • Oversee the end-to-end development of data solutions, from requirements gathering to deployment. Ensure projects are completed on time, within scope, and to the highest quality standards. 

  • Establish and enforce data governance policies to ensure data quality, security, and compliance with relevant regulations. 

  • Stay current with industry trends and emerging technologies. Propose and implement new tools and techniques to improve data processes and systems. 

  • Create and maintain comprehensive documentation for data processes, systems, and architectures. 

  • Provide guidance and mentorship to junior data engineers and team members. Share best practices and promote a culture of continuous learning. 

Experience

  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field; a Master’s degree is a plus. 

  • 5+ years of experience in data engineering, with a strong background in managing complex data projects. 

  • Experience with various RDBMS (Oracle, SQL Server, MySQL, AWS RDS, Postgres). Experience with data warehousing solutions and technologies (e.g., Amazon Redshift, Snowflake). 

  • Familiarity with ETL frameworks and data pipeline tools (e.g., Apache Airflow, Talend, DBT, Informatica). 

  • Proficient in working with structured and semi-structured data (JSON, Avro, Parquet) 

  • Expertise in programming languages such as Python, Java, or Scala. 

  • Experience deploying infrastructure as code with Terraform, Ansible, or CloudFormation 

  • Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. 

  • Experience with containerization and orchestration tools (e.g., Docker, Kubernetes). 

  • Strong understanding of data modeling, data integration, and data quality best practices. 

  • Excellent documentation skills 

  • Relevant industry certifications (e.g., AWS Certified Data Engineer). 

  • Excellent communication and interpersonal skills. 

  • Analytical and problem-solving abilities. 

  • Ability to work independently and as part of a team. 

  • Detail-oriented with a commitment to high-quality deliverables. 

Other Details

This is a temporary contract role (approximately 10 months) based remotely out of Ontario.

We appreciate your interest and promise to review all applications, but we will only be contacting those who best fit the requirements.

We welcome and encourage applications from people with disabilities. Accommodations are available upon request for candidates taking part in all aspects of the selection process. If you require accommodation, please notify your Talent Acquisition Partner. Please note, the chosen applicant will be required to successfully complete background and reference checks.
