Research Scientist, Privacy & Security

Mountain View, California, US

DeepMind



At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

As part of the Privacy & Security Team at Google DeepMind, you will play a key role in creating innovative defensive and offensive techniques to protect Gemini and other GenAI models!

GenAI models and agents are increasingly used to handle sensitive data and permissions alongside untrusted data. Enabling them to operate in a secure, trustworthy, and reliable manner raises many unsolved, impactful research problems, including but not limited to:

  • Adversarially robust reasoning, coding, and tool-use capabilities under prompt injection and jailbreak attacks.
  • Adherence to privacy norms, both with and without adversarial prompting.
  • Adversarial techniques against generative models through multi-modal inputs.
  • New model architectures that are secure by design against prompt injections.

Key responsibilities:

There are many ways you can drive privacy & security research, all the way from ideation and experimentation to transformative, landed impact:

  • Identify unsolved, impactful privacy & security research problems, inspired by the needs of protecting frontier capabilities. Research novel solutions by studying related work, running offline and online experiments, and building prototypes and demos.
  • Validate research ideas in the real world by driving and growing collaborations with Gemini teams working on safety, evaluations, and other related areas to land new innovations together.
  • Amplify the impact by generalizing solutions into reusable libraries and frameworks for protecting Gemini and product models across Google, and by sharing knowledge through publications, open source, and education.

About You

In order to set you up for success as a Research Scientist, Privacy & Security at Google DeepMind, we look for the following skills and experience:

  • Ph.D. in Computer Science or related quantitative field, or B.S./M.S. in Computer Science or related quantitative field with 5+ years of relevant experience.

In addition, any of the following would be an advantage:

  • A self-directed engineer/research scientist who can drive new research ideas from conception and experimentation through to productionisation in a rapidly shifting landscape.
  • Strong research experience and publications in ML security, privacy, safety, or alignment.
  • Experience with JAX, PyTorch, or similar machine learning platforms.
  • A track record of landing research impact within multi-team collaborative environments involving senior stakeholders.

 
