Lead Product Manager, Inference Cloud

Remote Office; Sunnyvale, CA or Toronto, Canada

Cerebras Systems

Cerebras is the go-to platform for fast and effortless AI training. Learn more at cerebras.ai.

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.  

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role  

As a Lead Product Manager on our Inference Cloud team, you will define how developers harness the power of Cerebras wafer-scale AI speed to create the next generation of AI applications.

You will be responsible for setting and driving product strategy, roadmap, and requirements for Cerebras’ Inference Cloud API – the front door that delivers Cerebras’ wafer-scale inference speed to developers, enterprises, and platform partners around the world. You will shape how developers interact with the world's fastest GenAI models through our API, lead third-party integrations with open-source community frameworks, and invent the future of how developers can creatively leverage instant AI speed for their applications.

In your role, you will work closely with developers every single day, growing adoption across start-ups and enterprises alongside our world-class GTM team.  

You should have a proven track record of working with developers as your users, empathizing with and understanding their needs, and building products they love.  

Leveling for this role will depend on the applicant's experience and can be adjusted from Senior to Principal PM.

Responsibilities 

  • Define and own the product vision, strategy, and roadmap for the Cerebras Inference API – balancing rapid iteration with long-term platform evolution to build the premier inference offering for the most ambitious and valuable AI applications.
  • Conduct user research and analyze usage data and feedback to uncover insights on user pain points, measure product success, and discover new opportunities. 
  • Drive strategic customer engagements to showcase the value of ultra-fast inference. 
  • Build open-source integrations that amplify the accessibility and capability of our solution. 
  • Lead cross-functional go-to-market execution with Sales, DevRel, Customer Success, and Marketing to deliver seamless user experiences and drive adoption. 
  • Stay on the cutting edge of API, developer-tool, and AI-infrastructure trends to keep Cerebras’ offering best-in-class.

Skills And Qualifications 

  • 3-8+ years of experience as a Product Manager working on developer-focused SaaS products or cloud platforms. Hands-on experience launching or operating an inference service, PaaS, or high-QoS API is a plus (e.g., Amazon Bedrock, Vertex AI).
  • Strong technical background: able to understand API architecture and partner closely with engineers; prior SWE experience is a plus.
  • Strong grasp of GenAI models, their strengths and weaknesses, and their performance and cost drivers (latency, context length, batch size, rate limits).
  • Successfully launched zero-to-one products that have found distribution or commercial success, with experience gathering feedback, input, and data from users and running beta tests.
  • Stellar written/verbal communication; comfortable presenting to execs, customers, and developer communities.
  • Ability to set clear KPIs and make data-driven tradeoffs. Experience instrumenting usage metrics, running A/B tests, and setting goals.
  • Ability to excel amid ambiguity and to solve complex new problems with simple, elegant solutions.
  • Application-minded and passionate about working with customers to transform the future of AI with order-of-magnitude faster inference speeds.
  • BS/MS in CS, EE, or related; MBA a plus.

Assets

  • Passion and ability to rapidly prototype new AI use case demos.
  • Experience building products in the AI space.
  • Experience with both consumer and developer audiences.
  • Familiarity with OSS inference stacks (vLLM, SGLang, Dynamo).

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Enjoy a simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2025.

Apply today and join the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.

