Manager, Kernel Software

Europe

Cerebras Systems

Cerebras is the go-to platform for fast and effortless AI training. Learn more at cerebras.ai.

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.  

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role 

As a Kernel Technical Lead Manager (TLM), you will lead a team of engineers at the intersection of hardware and software, developing high-performance solutions for cutting-edge AI and HPC workloads. You will collaborate with leaders from industry and academia to co-design software that fully harnesses the capabilities of our custom, massively parallel processor architecture.

In this dual-role position, you will guide the technical roadmap, oversee the design and optimization of deep learning operations, and ensure the delivery of robust, high-performing kernel libraries. You will also manage and mentor a team of talented engineers, supporting their growth and fostering a culture of technical excellence, collaboration, and innovation. Your leadership will directly impact our ability to scale training workloads and deliver breakthroughs in performance and efficiency. 

Responsibilities 

  • Lead the design and development of high-performance ML and linear algebra kernels for the Cerebras Wafer-Scale Engine (WSE) using parallel programming techniques.
  • Guide a team building optimized low-level routines in assembly and a domain-specific C-like language. 
  • Use performance modeling to inform design and optimization decisions. 
  • Drive test development to ensure correctness and performance of kernel libraries. 
  • Evolve kernel architecture to support emerging ML models and workloads. 
  • Collaborate with hardware architects to influence future system design. 
  • Mentor engineers and foster a high-performing, collaborative team culture. 

Skills And Qualifications 

  • Bachelor’s, Master’s, PhD, or foreign equivalent in Computer Science, Computer Engineering, Mathematics, or a related field. 
  • Proven experience leading technical teams, including mentoring engineers, setting technical direction, and driving execution. 
  • Strong understanding of hardware architecture concepts and willingness to dive into new system architectures. 
  • Proficiency in C++ and Python; experience with low-level systems programming. 
  • Familiarity with library/API development best practices and performance optimization. 
  • Excellent debugging skills across complex, layered software stacks. 

Preferred Skills And Qualifications 

  • Experience leading teams in kernel development, performance optimization, or low-level systems programming. 
  • Strong background in parallel algorithms and distributed memory systems. 
  • Hands-on experience with accelerators such as GPUs, FPGAs, or other custom hardware. 
  • Familiarity with machine learning workloads and frameworks such as TensorFlow and PyTorch.
  • Understanding of HPC kernels and strategies for optimizing them on modern architectures. 

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Thrive in a simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2025.

Apply today and join us at the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.


