Manager, Data Engineering

Canada

At Wave, we help small businesses thrive so the heart of our communities beats stronger. We work in an environment buzzing with creative energy and inspiration. No matter where you are or how you get the job done, you have what you need to be successful and connected. The mark of true success at Wave is the ability to be bold, learn quickly and share your knowledge generously.
The Manager of Data Engineering will lead a team of talented data engineers, overseeing the design, development, and maintenance of our foundational data capabilities, architecture, and infrastructure.
This role is crucial for building robust, scalable data pipelines and platforms that support advanced analytics and business intelligence, ensuring our data systems are efficient, reliable, and aligned with organizational goals.

Here's How You Make An Impact:

  • Lead & Mentor: Lead, mentor, and grow a high-performing team of data engineers, fostering a collaborative and supportive environment.
  • Oversee Data Stack Development: Guide the team in designing, building, and deploying the components of a modern data stack, including CDC ingestion (using tools like Meltano or similar), a centralized Iceberg data lake, and a variety of batch, incremental, and stream-based pipelines.
  • Ensure Platform Stability & Scalability: Lead efforts to build and manage a fault-tolerant data platform that scales economically, balancing innovation with operational stability. This includes strategic oversight of maintaining legacy Python ELT scripts and accelerating the transition to dbt models in data warehouses like Redshift, Snowflake, or Databricks (a minimal orchestration sketch follows this list).
  • Build a Long-Term Strategy: Contribute to the development and execution of data engineering roadmaps, ensuring alignment with the overall data and AI strategy and broader business objectives.
  • Drive Technical Excellence & Autonomy: Drive continuous improvement in data engineering practices, promoting best practices in coding, testing, and deployment. Cultivate an environment where engineers are self-motivated, can work autonomously, and thrive in ambiguous conditions by independently identifying opportunities to optimize pipelines and improve data workflows.
  • Foster Cross-Functional Data Collaboration: Collaborate closely with data scientists, analysts, product, and other engineering teams to understand data needs and integrate data solutions into various applications and decision-making processes. Ensure the team effectively supports other departments by reliably delivering data, analytics, and AI insights.
  • Ensure Data Quality & Governance: Implement and enforce data quality standards and best practices within data pipelines to ensure data integrity and reliability. Oversee the adherence to data governance policies, including data quality, lineage, and privacy, and ensure effective use of cataloging tools for discoverability and compliance.
  • Measure Performance: Define and track key performance indicators (KPIs) for data engineering initiatives to measure efficiency and business impact and to drive accountability.
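
As a rough illustration of the kind of pipeline work described above (not Wave's actual codebase), the sketch below shows a minimal Airflow DAG that chains a placeholder Meltano ingestion job, a dbt build, and a simple row-count quality gate. All DAG, command, and project names are hypothetical, and the schedule argument assumes Airflow 2.4 or later.

    # Hypothetical sketch only: a minimal daily batch pipeline of the kind this team oversees.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.operators.python import PythonOperator


    def check_row_counts(**context):
        # Placeholder quality gate: in practice this might compare source and
        # warehouse row counts, or trigger dbt tests instead.
        source_rows, warehouse_rows = 1000, 1000  # stand-in values
        if source_rows != warehouse_rows:
            raise ValueError("Row count mismatch between source and warehouse")


    with DAG(
        dag_id="daily_batch_pipeline",   # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",               # requires Airflow 2.4+
        catchup=False,
    ) as dag:
        ingest = BashOperator(
            task_id="ingest_cdc_batch",
            bash_command="meltano run tap-postgres target-jsonl",  # placeholder Meltano job
        )
        transform = BashOperator(
            task_id="run_dbt_models",
            bash_command="dbt run --project-dir /opt/dbt/warehouse",  # placeholder dbt project
        )
        quality_check = PythonOperator(
            task_id="validate_row_counts",
            python_callable=check_row_counts,
        )

        # Ingest first, transform second, then gate the run on the quality check.
        ingest >> transform >> quality_check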

You Thrive Here By Possessing the Following:

  • Minimum of 5-7 years of experience in building data pipelines and managing a secure, modern data stack, with at least 2-3 years in a leadership or team lead role.
  • Proven track record of successfully leading the design, development, and implementation of robust data pipelines, ETL processes, and data platforms.
  • Deep understanding of modern data stack components, including CDC streaming ingestion with tools like Meltano (or similar) for workflows that support AI/ML workloads, and a curated data warehouse in Redshift, Snowflake, or Databricks.
  • At least 3 years of experience guiding teams working with AWS cloud infrastructure, including Kafka (MSK), Spark / AWS Glue, and infrastructure as code (IaC) using Terraform.
  • Expertise in overseeing the implementation and management of multi-stage workflows using Airflow or similar orchestration systems to automate and orchestrate data processing pipelines.
  • Strong familiarity with data governance practices, including data quality, lineage, and privacy, as well as experience using cataloging tools to enhance discoverability and compliance.
  • Knowledge and practical experience with Athena, Redshift, or SageMaker Feature Store to support analytical and machine learning workflows is a definite bonus (see the query sketch after this list).
  • Excellent leadership and communication skills with the ability to influence and collaborate effectively across teams and levels of the organization.
  • Experience in financial technology, SaaS, or related industries is preferred.
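
As a rough illustration of the analytical workflows referenced above (not Wave's actual code), the sketch below runs a query against a placeholder table through Athena using boto3 and prints the result rows; the region, database, table, and S3 output location are all hypothetical.

    # Hypothetical sketch only: query a curated table via Athena to feed an analytics or ML workflow.
    import time

    import boto3

    athena = boto3.client("athena", region_name="ca-central-1")  # assumed region

    run = athena.start_query_execution(
        QueryString="SELECT customer_id, COUNT(*) AS invoice_count FROM invoices GROUP BY customer_id",  # placeholder table
        QueryExecutionContext={"Database": "analytics"},  # placeholder database
        ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},  # placeholder bucket
    )
    query_id = run["QueryExecutionId"]

    # Poll until the query finishes, then read the first page of results.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
        for row in rows[1:]:  # skip the header row
            print([col.get("VarCharValue") for col in row["Data"]])
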
At Wave, we value diversity of perspective. Your unique experience enriches our organization. We welcome applicants from all backgrounds. Let’s talk about how you can thrive here!
Wave is committed to providing an inclusive and accessible candidate experience. If you require accommodations during the recruitment process, please let us know by emailing careers@waveapps.com. We will work with you to meet your needs.