Research Engineer, Multimodal Safety

San Francisco

OpenAI

About the Team

Our team is dedicated to shaping the future of artificial intelligence by equipping ChatGPT with the ability to hear, see, speak, and create visually compelling images, transforming how people interact with AI in everyday life. We prioritize safety throughout development so that our most advanced models can be deployed responsibly in real-world applications and ultimately benefit society. This focus on safety is central to OpenAI’s mission of building and deploying safe AGI, and it reinforces a culture of trust and transparency.

About the Role

We are seeking a research engineer to develop techniques that advance the safety of our state-of-the-art multimodal foundation models. In this role, you will conduct rigorous safety assessments and build methods, such as safety reward models and multimodal classifiers, that make our models intrinsically compliant with safety policies. You will also support red teaming efforts that test the robustness of our models, collaborating closely with cross-functional partners, including safety and legal teams, to ensure our systems meet all safety standards and legal requirements.

The ideal candidate has a solid foundation in multimodal research and post-training techniques, with a passion for pushing boundaries and achieving tangible impact. Familiarity with large metric suites or human data pipelines is a plus. You should be adept at writing high-quality code, developing tools for model evaluation, and iteratively improving metrics based on real-world feedback. Strong communication skills are essential for working effectively with both technical and non-technical stakeholders.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:

  • Build evaluation pipelines to assess risk along various axes, especially with multimodal inputs and outputs.

  • Implement risk mitigation techniques, such as safety reward models and reinforcement learning (RL) fine-tuning.

  • Develop and refine multimodal moderation models to detect and mitigate known and emerging patterns of AI misuse and abuse.

  • Work with other safety teams within the company to iterate on our content policies so that they effectively prevent harmful behavior.

  • Work with our human data team to conduct internal and external red teaming to examine the robustness of our harm prevention systems and identify areas for future improvement.

  • Write maintainable, efficient, and well-tested code as part of our evaluation libraries.

You might thrive in this role if you:

  • Are a collaborative team player, willing to do whatever it takes in a start-up environment.

  • Have experience working in complex technical environments.

  • Are passionate about bringing magical AI experiences to millions of users.

  • Enjoy diving into the subtle details of datasets and evaluations.

  • Have experience with multimodal research and post-training techniques.

  • Are very proficient in Python.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that artificial general intelligence benefits all of humanity. We push the boundaries of what AI systems can do and seek to deploy them safely to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and achieving our mission requires us to encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or any other legally protected status.

OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement

For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
