Software Engineer, Fleet Management - Research
San Francisco
The Fleet team at OpenAI supports the computing environment that powers our cutting-edge research and product development. We oversee large-scale systems that span data centers, GPUs, networking, and more, ensuring high availability, performance, and efficiency. Our work enables OpenAI’s models to operate seamlessly at scale, supporting both internal research and external products like ChatGPT. We prioritize safety, reliability, and responsible AI deployment over unchecked growth.
About the Role
The Software Engineer, Operating Systems & Orchestration will focus on building systems to manage hardware, configurations, vendors, and the people interacting with our infrastructure. You will design and develop solutions that integrate individual nodes and servers into unified clusters, directly contributing to advancing AI research by streamlining the overall research user experience. This role is based in San Francisco, CA. We use a hybrid work model (three days per week in the office) and offer relocation assistance to new employees.
In this role, you will:
Design and build systems to manage both cloud and bare-metal fleets at scale.
Develop tools that integrate low-level hardware metrics with high-level job scheduling and cluster management algorithms.
Leverage LLMs to coordinate vendor operations and optimize infrastructure workflows.
Automate infrastructure processes, reducing repetitive toil and improving system reliability.
Collaborate with hardware, infrastructure, and research teams to ensure seamless integration across the stack.
Continuously improve tools, automation, processes, and documentation to enhance operational efficiency.
You might thrive in this role if you:
Have strong software engineering skills with experience in large-scale infrastructure environments.
Possess broad knowledge of cluster-level systems (e.g., Kubernetes, CI/CD pipelines, Terraform, cloud providers).
Have deep expertise in server-level systems (e.g., systemd, containerization, Chef, Linux kernels, firmware management, host routing).
Are passionate about optimizing the performance and reliability of large compute fleets.
Thrive in dynamic environments and are eager to solve complex infrastructure challenges.
Value automation, efficiency, and continuous improvement in everything you build.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or any other legally protected status.
OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.