Software Engineer - Compiler, Kernels, Runtime
San Francisco
About the Team
The Scaling team builds foundational components that power OpenAI’s ML training infrastructure. We focus on developing scalable, robust, and high-performance systems that maximize the productivity of our researchers and hardware. Our mission is to accelerate progress toward AGI by enabling the fastest iteration cycles and highest throughput for model development at scale.
About the Role
As a software engineer on the Scaling team, you’ll help build and optimize the low-level stack that orchestrates computation and data movement across OpenAI’s supercomputing clusters. Your work will involve designing high-performance runtimes, building custom kernels, contributing to compiler infrastructure, and developing scalable simulation systems to validate and optimize distributed training workloads.
You will work at the intersection of systems programming, ML infrastructure, and high-performance computing, helping to create both ergonomic developer APIs and highly efficient runtime systems. This means balancing ease of use and introspection with the need for stability and performance on our evolving hardware fleet.
This role is based in San Francisco, CA, with a hybrid work model (3 days/week in-office). Relocation assistance is available.
Responsibilities:
Design and build APIs and runtime components to orchestrate computation and data movement across heterogeneous ML workloads.
Contribute to compiler infrastructure, including the development of optimizations and compiler passes to support evolving hardware.
Engineer and optimize compute and data kernels, ensuring correctness, high performance, and portability across simulation and production environments.
Profile and optimize system bottlenecks, especially around I/O, memory hierarchy, and interconnects, at both local and distributed scales.
Develop simulation infrastructure to validate runtime behaviors, test training stack changes, and support early-stage hardware and system development.
Rapidly deploy runtime and compiler updates to new supercomputing builds in close collaboration with hardware and research teams.
Work across a diverse stack, primarily using Rust and Python, with opportunities to influence architecture decisions across the training framework.
You might thrive in this role if you:
Have a deep curiosity for how large-scale systems work and enjoy making them faster, simpler, and more reliable.
Are proficient in systems programming (e.g., Rust, C++) and scripting languages like Python.
Have experience in one or more of the following areas: compiler development, kernel authoring, accelerator programming, runtime systems, distributed systems, or high-performance simulation.
Are excited to work in a fast-paced, highly collaborative environment with evolving hardware and ML system demands.
Value engineering excellence, technical leadership, and thoughtful system design.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.