Compiler Software Engineer, Staff

Toronto, Ontario, Canada

d-Matrix

d-Matrix is making Generative AI inference blazing fast, sustainable and commercially viable with the world’s first efficient memory-compute integration.



At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration.

We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals who are passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.

Location:

Remote/Hybrid (working on-site at our Toronto, Ontario, Canada office 2 days per week)

Role: Compiler Software Engineer, Staff

What You Will Do:

The d-Matrix compiler team is looking for exceptional candidates to help develop the front end of our machine learning compiler. The successful candidate will work on designing, optimizing, and lowering high-level machine learning representations to intermediate representations suitable for further compilation.

We are particularly interested in candidates who can contribute to:

  • High-level IR transformations (e.g., graph optimization, operator fusion, canonicalization; a brief fusion sketch follows this list)

  • Dialect and IR design for machine learning frameworks

  • Lowering and transformation of ML models from frameworks such as PyTorch, TensorFlow, and ONNX to compiler IRs such as MLIR and LLVM

  • Performance optimization for compute graphs, including operator specialization, fusion, and memory layout transformations

  • Model partitioning techniques, including:

    • Graph-based parallelism strategies (e.g., pipelined model parallelism, tensor parallelism, and data parallelism)

    • Automatic partitioning of large models across multiple devices using techniques like GSPMD (Generalized SPMD Partitioning)

    • Placement-aware optimizations to minimize communication overhead and improve execution efficiency on distributed hardware
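
For illustration only (not part of the role description): a minimal, hypothetical C++ sketch of what graph-level operator fusion can look like on a toy IR, merging an element-wise add/relu pair into a single fused node. The Node struct and fuseElementwise pass are invented for this sketch; a production compiler (for example, one built on MLIR) would express this as pattern rewrites over real dialects, with legality checks driven by shape and use-def analysis.

// Hypothetical, simplified example: a tiny "graph IR" plus a fusion pass that
// merges an element-wise producer/consumer pair (e.g., add -> relu) into one
// fused node. Names like Node and fuseElementwise are illustrative only.
#include <iostream>
#include <string>
#include <vector>

struct Node {
    std::string op;          // e.g. "matmul", "add", "relu", or fused "add_relu"
    std::vector<int> inputs; // indices of producer nodes in the graph vector
};

bool isElementwise(const std::string& op) {
    return op == "add" || op == "relu" || op == "mul";
}

// One forward walk over the graph: when an element-wise node feeds exactly one
// element-wise consumer, collapse the pair. A real pass would also check use
// counts, broadcast shapes, and side effects before rewriting.
std::vector<Node> fuseElementwise(std::vector<Node> graph) {
    for (std::size_t i = 0; i + 1 < graph.size(); ++i) {
        Node& producer = graph[i];
        Node& consumer = graph[i + 1];
        bool consumerUsesProducer =
            consumer.inputs.size() == 1 && consumer.inputs[0] == static_cast<int>(i);
        if (isElementwise(producer.op) && isElementwise(consumer.op) && consumerUsesProducer) {
            consumer.op = producer.op + "_" + consumer.op; // e.g. "add_relu"
            consumer.inputs = producer.inputs;             // bypass the fused-away producer
            producer.op = "erased";                        // mark dead; skipped when printing
        }
    }
    return graph;
}

int main() {
    // matmul -> add -> relu; after fusion the graph prints "matmul" then "add_relu".
    std::vector<Node> graph = {{"matmul", {}}, {"add", {0}}, {"relu", {1}}};
    for (const Node& n : fuseElementwise(graph))
        if (n.op != "erased") std::cout << n.op << "\n";
}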

The successful candidate will join a team of experienced compiler engineers and work closely with ML framework developers, hardware architects, and performance engineers to ensure efficient model execution.

What You Will Bring:

Minimum Qualifications:

  • Bachelor's degree in computer science or a related field with 6+ years of relevant industry experience (or an MS with 5+ years of experience, or a PhD with 3+ years of experience)

  • Strong proficiency in modern C++ (C++14/17/20) and compiler development

  • Experience with modern compiler infrastructures such as LLVM, MLIR, or equivalent frameworks

  • Experience with machine learning frameworks (e.g., PyTorch, TensorFlow, ONNX)

  • Solid understanding of graph-level optimizations and IR transformations in ML compilers

  • Experience with model partitioning strategies such as GSPMD, sharding, and distributed execution

Preferred Qualifications:

  • Algorithm design experience, from conceptualization to implementation

  • Experience with open-source ML compiler projects such as Torch-MLIR, IREE, XLA, or TVM

  • Experience with automatic differentiation, shape inference, and type propagation in ML compilers

  • Experience optimizing distributed execution of large models on accelerators (e.g., GPUs, TPUs, custom AI hardware)

  • Passion for working in a fast-paced and dynamic startup environment

Equal Opportunity Employment Policy

d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.
