Senior Data Engineering Specialist
Guelph, Ontario, Canada; Regina, Saskatchewan, Canada; Calgary, Alberta, Canada
Co-operators
Offering Home, Auto, Life, Business, Travel and Farm insurance plus Investments and Group coverage, benefits and retirement plans in Canada for over 70 years.
Company: CGL
Department: IT
Employment Type: Regular Full-Time
Work Model: Hybrid
Language: This role operates in English
Additional Information: This role is currently vacant
The Opportunity:
We are a leading Canadian financial services co-operative committed to being a catalyst for a sustainable and resilient society, and our team is essential to delivering on this strategy. That’s why we prioritize our people, ensuring a strong culture and development opportunities that enable our team to thrive and to live our purpose. The best part is that you will work with people who care passionately about you, our clients, and our communities.
Our Information Technology team aspires to be a leader in applying technology to power business strategies. We connect concepts with solutions to create value and efficiencies for our clients, employees, and communities. Our success is driven by our skilled and diverse team who are passionate about excellence, innovation, and agility.
The Senior Data Engineering Specialist drives the development and optimization of data products, ensuring the reliability and performance of ETL processes. This role requires a high level of autonomy in managing complex data pipelines, the ability to collaborate with key partners, and the capacity to support junior team members while delivering high-quality data for analysis and decision-making.
How you will create impact:
- Execute data extraction, cleaning, and reconciliation, and assess data characteristics such as quality.
- Use data engineering tools for data querying (ex: SQL), data handling (ex: PySpark, stored procedures), and data storage (ex: ADLS).
- Contribute to building data solutions (data pipelines, APIs and more).
- Bring data to visualization tools by implementing the semantic layer.
- Proactively optimize performance and monitor data solutions, for example through alerts.
- Use DevOps platforms (ex: GitHub) and manage infrastructure deployments through code (ex: Terraform).
- Configure and use continuous integration and deployment (CI/CD) pipelines, and our infrastructure (ex: Databricks environments) to properly develop and deploy data solutions.
- Promote and comply with security requirements on the platforms and environments used by our data solutions.
- Identify opportunities to collaborate, innovate, and make data-driven decisions, with the ultimate goal of bringing value to the business.
- Follow governance and security guidelines when handling data.
- Partner with business and technology stakeholders to drive business value.
- Effectively communicate findings to colleagues and a business audience, and demonstrate influence.
How you will succeed:
- You have an innovative mindset to improve operational efficiencies and ability to influence change, with a primary focus on client needs.
- You use critical thinking skills to recognize assumptions, evaluate arguments, draw conclusions and proactively propose solutions.
- You have strong communication skills to clearly convey messages and explore diverse points of view.
- You build trusting relationships and provide guidance to support the development of colleagues.
To join our team:
- 5+ years of experience in data engineering with experience in building modern data platforms and data products.
- 5+ years of experience implementing data solutions (Azure preferred).
- You have completed post-secondary education in Information Technology, Computer Science or a related discipline.
- Experience with distributed computing (ex: Spark) and infrastructure as code (ex: Terraform) is a major asset.
- You have advanced knowledge of data modeling, ETL processes, data warehousing concepts, and best practices in data engineering and analytics.
- Advanced knowledge of cloud security and networking is an asset.
- Hands-on experience with machine learning technologies (ex: Azure ML) is an asset.
- Hands-on experience with Azure PaaS is an asset: Azure Data Lake Storage (ADLS), Azure Databricks, Microsoft Fabric, Azure Data Factory (ADF), Azure Synapse, Event Hub, API Management (APIM), Azure Key Vault, Azure SQL, and Purview.
What you need to know:
- You will travel regularly.
- You will be subject to a background check as a condition of employment if you are the successful candidate.
What’s in it for you?
- Training and development opportunities to grow your career.
- Flexible work options and paid time off to support your personal and family needs.
- A holistic approach to your well-being, with physical and mental health programs and a supportive workplace culture.
- Paid volunteer days to give back to your community.
- In addition to our competitive salary and incentive programs, eligible employees also benefit from a comprehensive total rewards package including group retirement savings plans, pension and benefits (e.g., health and wellness, dental, disability and life coverage), mental health support and an employee assistance program.