Member of Technical Staff - Machine Learning Research Engineer, VLMs

San Francisco

Liquid AI

We build capable and efficient general-purpose AI systems at every scale. Liquid Foundation Models (LFMs) are a new generation of generative AI models that achieve state-of-the-art performance at every scale, while maintaining a smaller memory...



Liquid AI, an MIT spin-off, is a foundation model company headquartered in Boston, Massachusetts. Our mission is to build capable and efficient general-purpose AI systems at every scale.
Our goal at Liquid is to build the most capable AI systems to solve problems at every scale, so that users can build, access, and control their AI solutions. This ensures that AI is integrated into enterprises meaningfully, reliably, and efficiently. Long term, Liquid will create and deploy frontier-AI-powered solutions that are available to everyone.
We're looking for a Research Engineer / Scientist with a deep focus on Vision Language Models to join our Multimodal Foundation Model Training team. You will be at the heart of our efforts to train next-generation multimodal systems by driving innovation in model design, data processing, and large-scale training strategies for vision and vision-language tasks.
This is a highly technical role that combines cutting-edge machine learning research with systems-level thinking. You’ll work across the entire model lifecycle—from architecture design to dataset curation to training—and contribute to pushing the frontier of what Vision Language Models can achieve.

You’re a Great Fit If

  • You have experience with machine learning at scale.
  • You’re proficient in PyTorch, and familiar with distributed training frameworks like DeepSpeed, FSDP, or Megatron-LM.
  • You’ve worked with multimodal data (e.g., image-text, video, visual documents, audio).
  • You care deeply about empirical performance, and know how to design, run, and debug large-scale training experiments on distributed GPU clusters.
  • You understand how data quality, augmentations, and preprocessing pipelines can significantly impact model performance—and you’ve built tooling to support that.
  • You enjoy working in interdisciplinary teams across research, systems, and infrastructure, and can translate ideas into high-impact implementations.

What Sets You Apart

  • You’ve designed and trained Vision Language Models.
  • You’ve developed vision encoders or integrated them into language pretraining pipelines with autoregressive or generative objectives.
  • You’ve contributed to research papers, open-source projects, or production-grade multimodal model systems.
  • You have experience working with large-scale video or document datasets, understand the unique challenges they pose, and can manage them effectively.
  • You’ve built tools for data deduplication, image-text alignment, or vision tokenizer development.

Some of the Areas You'll Get To Work On

  • Investigate and prototype new model architectures that optimize inference speed, including on edge devices.
  • Lead or contribute to ablation studies and benchmark evaluations that inform architecture and data decisions.
  • Build and maintain evaluation suites for multimodal performance across a range of public and internal tasks.
  • Collaborate with the data and infrastructure teams to build scalable pipelines for ingesting and preprocessing large vision-language datasets.
  • Work with the infrastructure team to optimize model training across large-scale GPU clusters.
  • Contribute to publications, internal research documents, and thought leadership within the team and the broader ML community.
  • Collaborate with the applied research and business teams on client-specific use cases.

What You’ll Gain

  • A front-row seat in building some of the most capable Vision Language Models.
  • Access to world-class infrastructure, a fast-moving research team, and deep collaboration across ML, systems, and product.
  • The opportunity to shape multimodal foundation model research with both scientific rigor and real-world impact.


