Member of Technical Staff - Vision-Language Model Data

San Francisco

Liquid AI


Liquid AI, an MIT spin-off, is a foundation model company headquartered in Boston, Massachusetts. Our mission is to build capable and efficient general-purpose AI systems at every scale.
Our goal at Liquid is to build the most capable AI systems to solve problems at every scale, so that users can build, access, and control their own AI solutions, and so that AI is integrated meaningfully, reliably, and efficiently across enterprises. Long term, Liquid will create and deploy frontier-AI-powered solutions that are available to everyone.
We are seeking a highly skilled Member of Technical Staff - Vision-Language Model Data to play a critical role in the development of Liquid's Vision-Language models. This role focuses on curating high-quality vision-language mid-training and SFT datasets.

Key Responsibilities

  • Create and maintain data processing, cleaning, filtering, and selection pipelines that can handle image-text data.
  • Monitor releases of high-quality public VLM datasets.
  • Create and maintain synthetic data augmentation pipelines to enhance VLM data quality.
  • Work with the multimodal vision team to run ablations on new datasets.

Required Qualifications

  • Experience Level: B.S. + 5 years, M.S. + 3 years, or Ph.D. + 1 year of experience.
  • Dataset Engineering: Expertise in data curation, cleaning, augmentation, and synthetic data generation techniques.
  • Machine Learning Expertise: Ability to write and debug models in popular ML frameworks, and experience working with LLMs and VLMs.
  • Software Development: Strong programming skills in Python, with an emphasis on writing clean, maintainable, and scalable code.

Preferred Qualifications

  • M.S. or Ph.D. in Computer Science, Electrical Engineering, Math, or a related field.
  • Experience fine-tuning or customizing LLMs and VLMs.
  • 2+ years working in computer vision.
  • First-author publications at top ML or vision conferences (e.g., NeurIPS, ICML, ICLR, CVPR, ICCV).
  • Contributions to popular open-source projects.


