ML Systems Engineer

San Francisco HQ

Genmo, Inc.

Genmo is a frontier AI lab developing the best open video generation models. Create any video, possible or impossible, using Mochi 1.

We are Genmo, a research lab dedicated to building open, state-of-the-art models for video generation, working toward unlocking the right brain of AGI. Join us in shaping the future of AI and pushing the boundaries of what's possible in video generation.

The Role
You'll own our model serving layer, implementing high-performance inference systems that can handle millions of requests daily. You'll work at the intersection of ML frameworks and cloud infrastructure, building automated pipelines for model optimization and deployment. Your work will directly impact the performance and scalability of our video generation models, ensuring sub-second latency at global scale.


Key Responsibilities

  • Design and implement high-performance model serving infrastructure supporting streaming, batching, and multi-modal inputs

  • Build automated model compilation and optimization pipelines using TensorRT, torch.compile, and other compilers (see the sketch after this list)

  • Optimize serving systems for throughput, latency, and GPU utilization across our H100 fleet

  • Develop monitoring and observability for model-specific metrics (quality, latency, throughput)

  • Collaborate with researchers to transition models from development to production

  • Implement A/B testing, canary deployments, and gradual rollout strategies for models

  • Integrate serving layer with platform infrastructure (load balancers, API gateways, queue systems)
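To give a flavor of the compilation work mentioned above, here is a minimal sketch of wrapping a PyTorch model with torch.compile and timing it. The toy model, input shapes, and the "max-autotune" mode are illustrative assumptions for this sketch only, not a description of Genmo's actual models or pipeline.

    # Minimal sketch: compile a PyTorch model and measure steady-state latency.
    # The model and shapes below are placeholders, not Genmo's stack.
    import time
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(1024, 4096),
        nn.GELU(),
        nn.Linear(4096, 1024),
    ).eval()

    # torch.compile traces the model and generates fused kernels;
    # "max-autotune" spends extra compile time searching for faster kernels.
    compiled = torch.compile(model, mode="max-autotune")

    x = torch.randn(8, 1024)
    with torch.inference_mode():
        compiled(x)  # first call triggers compilation
        start = time.perf_counter()
        for _ in range(100):
            compiled(x)
        print(f"avg latency: {(time.perf_counter() - start) / 100 * 1e3:.2f} ms")

In an automated pipeline, a step like this would typically run per model and be paired with TensorRT engine builds plus throughput and GPU-utilization benchmarks before promotion to production.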

Qualifications

  • Bachelor's or Master's degree in Computer Science or related field

  • 4+ years ML engineering experience with 2+ years focused on model serving

  • Production experience with high-performance model serving frameworks (vLLM, SGLang, TensorRT-LLM, or similar)

  • Strong Python proficiency and PyTorch experience

  • Experience with model compilation and optimization (TensorRT, ONNX, quantization)

  • Track record of building inference systems at scale (10K+ QPS)

  • Understanding of attention mechanisms and transformer architectures

  • Experience with containerized deployment and orchestration

We Value

  • Contributions to open-source serving frameworks

  • Experience with continuous batching and advanced serving optimizations

  • Knowledge of GPU architecture and memory management

  • Background at companies with large-scale ML serving

  • Experience with streaming/iterative generation patterns

Genmo is an Equal Opportunity Employer. Candidates are evaluated without regard to age, race, color, religion, sex, disability, national origin, sexual orientation, veteran status, or any other characteristic protected by federal or state law. Genmo, Inc. is an E-Verify company and you may review the Notice of E-Verify Participation and the Right to Work posters in English and Spanish.
