Research Scientist - Multimodal Language Models

Palo Alto, California

Luma AI

Ideate, visualize, create videos, and share your dreams with the world, using our most powerful image and video AI models.


Luma’s mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable and useful systems, the next step function change will come from vision and audio. So, we are working on training and scaling up multimodal foundation models for systems that can see, hear and understand, show and explain, and eventually interact with our world to effect change.


We are looking for researchers with significant experience solving hard problems in multimodal language models. You will work end-to-end on cutting-edge multimodal language models with a strong emphasis on audio and visual data. Your contributions will be pivotal in shaping research projects and product roadmaps.

Responsibilities

  • Design and implement novel AI algorithms and architectures for multimodal language models.

  • Build tools to evaluate and benchmark multimodal language models.

  • Develop large-scale AI training and inference methods.

  • Ensure efficient implementation of models & systems for data processing and training.

  • Build tools to analyze and process multimodal data.

  • Collaborate with research and engineering teams across Luma to transfer research to products and services.

  • Implement cutting-edge product prototypes based on multimodal generative AI.

Experience

  • Expertise in Python & PyTorch, including practical experience with the full development pipeline, from data processing and data loading to training, inference, and optimization.

  • Experience working with large-scale text data, or (bonus) interleaved data spanning audio, video, image, and/or text.

  • Hands-on experience developing or benchmarking at least one of the following: LLMs, vision-language models, audio language models, or generative video models.

Compensation

  • The pay range for this position in California is $200,000 to $300,000 per year; however, base pay offered may vary depending on job-related knowledge, skills, candidate location, and experience. We also offer competitive equity packages in the form of stock options and a comprehensive benefits plan.

Your application is reviewed by real people.


We will deploy these systems to make a new kind of intelligent creative partner that can imagine with us, free from the pressure of being creative. It's for all of us whose imaginations have been constrained, who've had to channel vivid dreams through broken words, hoping others will see what we see in our mind's eye. A partner that can help us show, not just tell.

Dream Machine is an early step toward building that.

Why you should join us:

  • Luma is bringing together the best team in the world to achieve our goal, from researchers and engineers to designers and growth operators

  • Luma is not just a lab: we are deeply product-focused, and our vision of merging AI models with delightful products is unique in the industry

  • We build. We ship. Our early products have been wildly successful
