Software Engineer or Research Engineer, Model Serving

London, UK

DeepMind

Artificial intelligence could be one of humanity’s most useful inventions. We research and build safe artificial intelligence systems. We're committed to solving intelligence, to advance science and benefit humanity.


At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

 

Snapshot

We are searching for a talented engineer passionate about bridging the gap between research and production for cutting-edge AI models. In this role, you'll play a key part in accelerating the deployment of Google DeepMind's research onto various product surfaces within Google. Your work will involve scaling and optimizing our serving infrastructure, collaborating with research teams to ensure models are production-ready, and identifying ways to streamline the entire research-to-production process. This is a unique opportunity to directly impact the speed and efficiency with which Google delivers innovative AI-powered products and features to users.

 

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

In this role, you will be at the forefront of bringing cutting-edge AI research to life. You'll work directly with researchers and engineers to optimize and deploy large language models (LLMs) onto Google's production infrastructure, impacting users across a diverse range of applications. This involves a blend of technical expertise and collaborative problem-solving to ensure both efficiency and quality throughout the entire LLM deployment lifecycle.

Key responsibilities:

  • Bridge the infrastructure gap between research and production: Collaborate closely with research teams to understand next-generation modeling approaches, ensuring they are designed and implemented with production considerations in mind.
  • Optimize the serving environment: Work with infrastructure teams to deliver serving infrastructure that is designed for maximum efficiency and performance, addressing bottlenecks in speed and scale.
  • Streamline the deployment process: Identify opportunities to automate tasks, eliminate redundancies, and improve the overall velocity of model releases.
  • Develop expertise in model serving technologies: Gain a deep understanding of serving frameworks, preprocessing pipelines, caching mechanisms, and other relevant technologies.
  • Stay informed on industry trends: Continuously learn about new technologies and best practices in the field of AI research and deployment.

This is a chance to make a real difference in the way Google develops and deploys AI, directly impacting the speed and effectiveness with which we deliver innovative solutions to users.

About You

To set you up for success as a Software Engineer at Google DeepMind, we look for the following skills and experience:

  • Strong interpersonal skills, such as discussing technical ideas effectively with colleagues and collaborating with people in other roles
  • Excellent knowledge of either C++ or Python
  • Experience with deployment in production environments
  • Experience with developing serving infrastructure
  • Familiarity or experience with optimisation of distributed ML systems
  • Familiarity with modern hardware accelerators (GPU/TPU)

Application deadline: 5pm BST, Friday 3rd May 2024

 


