Lead IT Data Engineering Specialist

Guelph, Ontario, Canada; Mississauga, Ontario, Canada

Co-operators

Offering Home, Auto, Life, Business, Travel and Farm insurance plus Investments and Group coverage, benefits and retirement plans in Canada for over 70 years.



Company: CGIC
Department: IT
Employment Type: Regular Full-Time 
Work Model: Hybrid
Language: This role operates in English
Additional Information: This role is currently vacant

 

The Opportunity: 

We are a leading Canadian financial services co-operative committed to being a catalyst for a sustainable and resilient society, and our team is essential to delivering on this strategy. That’s why we prioritize our people: we foster a strong culture and development opportunities that enable our team to thrive and live our purpose. The best part is that you will work with people who care passionately about you, our clients, and our communities. 

Our Information Technology team aspires to be a leader in applying technology to power business strategies. We connect concepts with solutions to create value and efficiencies for our clients, employees, and communities. Our success is driven by our skilled and diverse team who are passionate about excellence, innovation, and agility.

The Lead Data Engineering Specialist drives technical detail design and oversees the development and optimization of data products, ensuring the reliability and performance of ETL processes. This role designs and manages complex data pipelines, takes responsibility for implementing solutions that comply with architectural requirements, and collaborates with key partners to meet business objectives. The lead also mentors junior team members, sharing expertise to foster their growth.

 

How you will create impact:

  • Lead the development of data solutions, ensuring data quality and its monitoring, and proper integration aligned with the medallion architecture.
  • Create and maintain data models and architectural diagrams.
  • Use data engineering tools for data querying (ex: SQL), data handling (ex: PySpark, stored procedures), and data storage (ex: ADLS).
  • Lead the design of the semantic layer.
  • Proactively optimize the performance of data solutions and monitor them, for example through alerts.
  • Lead and enforce proper documentation to ensure ongoing maintenance and updates.
  • Use DevOps platforms (ex: GitHub) and manage infrastructure deployments through code (ex: Terraform).
  • Take accountability for ensuring that the outcomes of the projects you lead meet business objectives.
  • Identify and implement engineering best practices and process improvements to enhance the efficiency and effectiveness of our data solutions.
  • Ensure governance and security guidelines are followed.

 

How you will succeed:

  • You have an innovative mindset to improve operational efficiencies and the ability to influence change, with a primary focus on client needs.
  • You use critical thinking skills to recognize assumptions, evaluate arguments, draw conclusions and proactively propose solutions.  
  • You have strong communication skills to clearly convey messages and explore diverse points of view.  
  • You build trusting relationships and provide guidance to support the development of colleagues. 

 

To join our team:

  • 5+ years of experience in data engineering with experience in building modern data platforms and data products.  
  • 3+ years of experience with distributed computing (ex: Spark) and infrastructure as code (ex: Terraform).
  • 3+ years of experience designing and implementing scalable, cloud-based data solutions (Azure preferred).
  • Deep understanding of data modeling, ETL processes, data warehousing concepts, and best practices in data engineering and analytics.
  • Expert knowledge of cloud security and networking is an asset.
  • Vast hands-on experience with the following Azure PaaS services is an asset: Azure Data Lake Storage (ADLS), Azure Databricks, Microsoft Fabric, Azure Data Factory (ADF), Azure Synapse, Event Hub, API Management (APIM), Azure Key Vault, Azure SQL, and Purview, along with a good understanding of backup, disaster recovery, and data recovery strategy and execution for these services.
  • Ability to interact with peers and stakeholders to define and drive product and business impact.

 

What you need to know:

  • You will travel occasionally. 
  • If you are the successful candidate, you will be subject to a background check as a condition of employment. 

 

What’s in it for you?

  • Training and development opportunities to grow your career.
  • Flexible work options and paid time off to support your personal and family needs.
  • A holistic approach to your well-being, with physical and mental health programs and a supportive workplace culture.
  • Paid volunteer days to give back to your community.
  • In addition to our competitive salary and incentive programs, eligible employees also benefit from a comprehensive total rewards package including group retirement savings plans, pension and benefits (e.g., health and wellness, dental, disability and life coverage), mental health support and an employee assistance program.