Distributed Systems Engineer, AI Inference Platform
Sunnyvale, CA or Toronto, Canada
Cerebras Systems
Cerebras is the go-to platform for fast and effortless AI training. Learn more at cerebras.ai.

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.
About The Role:
We are looking for a highly skilled Software Engineer specializing in Distributed Systems to join the team developing the Cerebras Inference Platform. You will be responsible for the architecture, design, and implementation of the distributed systems components that ensure the platform's scalability, reliability, low latency, and high throughput. If you are passionate about building robust, performant, and fault-tolerant distributed systems for demanding AI applications, this is a unique opportunity.
Responsibilities:
- Design, build, and operate foundational distributed systems components that power the Inference Platform with high availability, scalability, and performance.
- Architect and implement the core logic for distributed request routing, dynamic load balancing, replica synchronization, and distributed metadata management.
- Develop and enhance the fault tolerance and auto-recovery mechanisms for platform services and inference replicas.
- Optimize communication patterns and data flow between microservices to ensure minimal latency and maximal throughput at scale.
- Contribute to the design and implementation of the distributed orchestration and scheduling system for managing inference workloads and resources.
- Implement and refine monitoring, tracing, and alerting for distributed system components to ensure operational excellence.
- Collaborate closely with hardware, ML, and other software teams to ensure seamless integration and end-to-end system performance.
- Debug complex issues spanning multiple services and systems in a distributed environment.
Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field, or equivalent practical experience.
- 5+ years of software engineering experience, with a strong focus on distributed systems architecture and optimization.
- Deep understanding of distributed systems principles.
- Proven experience with container orchestration technologies, particularly Kubernetes (K8s).
- Strong programming skills in Python. C++ experience is a plus.
- Experience with distributed messaging systems or RPC frameworks.
- Experience designing for high availability, fault tolerance, and scalability.
- Strong debugging and performance analysis skills in distributed environments.
- Familiarity with cloud-native technologies and microservices architectures.
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Enjoy a simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2025.
Apply today and join us at the forefront of groundbreaking advancements in AI!
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.