Founding Engineer - ML

San Francisco

Pear VC

We’re seed specialists that partner with founders at the earliest stages to turn great ideas into category-defining companies.

About Us:

Mustafa and Varun met at Harvard, where they both did research at the intersection of computation and evaluation. Between them, they have authored multiple published papers in machine learning and hold numerous patents and awards. Drawing on their experience as tech leads at Snowflake and Lyft, they founded NomadicML to solve a critical industry challenge: bridging the performance gap between model development and production deployment.

At NomadicML, we apply advanced techniques such as retrieval-augmented generation, adaptive fine-tuning, and accelerated inference to significantly improve machine learning models in domains like video generation, healthcare, and autonomous systems. Backed by Pear VC and BAG VC, early investors in Doordash, Affinity, and other top Silicon Valley companies, we’re committed to building cutting-edge infrastructure that helps teams realize the full potential of their ML deployments.

About the Role:

As a Founding Machine Learning Engineer, you will shape the next generation of continuously improving AI systems, blending cutting-edge research with practical implementation. You’ll design, implement, and refine Retrieval-Augmented Generation (RAG) pipelines that let our models adapt in real time to changing data and user needs. This involves working with text, video, and other high-dimensional inputs, as well as exploring advanced embeddings, vector databases, and GPU-accelerated infrastructure. You’ll apply statistical rigor, using significance testing, distributional checks, and other quantitative methods, to determine precisely when and how to retune models, ensuring that updates are timely yet never arbitrary.
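
To give a flavor of the statistical rigor involved, here is a minimal, hypothetical sketch of what a retuning trigger could look like: a recent window of evaluation scores is compared against a reference window with a two-sample Kolmogorov–Smirnov test, and retuning is flagged only when the drift is statistically significant. The function name, thresholds, and window sizes are illustrative assumptions, not a description of NomadicML’s actual pipeline.

```python
# Hypothetical retuning trigger: flag a model for retuning only when the
# distribution of recent evaluation scores drifts significantly away from a
# reference window. All names and thresholds here are illustrative.
import numpy as np
from scipy.stats import ks_2samp


def should_retune(reference_scores, recent_scores, alpha=0.01, min_samples=200):
    """Return True when recent scores differ significantly from the reference.

    reference_scores: scores collected at (or shortly after) the last retune.
    recent_scores:    scores from the current monitoring window.
    alpha:            significance level for the two-sample KS test.
    min_samples:      skip the check until both windows have enough data.
    """
    if len(reference_scores) < min_samples or len(recent_scores) < min_samples:
        return False  # not enough evidence to justify a retune either way
    _statistic, p_value = ks_2samp(reference_scores, recent_scores)
    return p_value < alpha


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(loc=0.82, scale=0.05, size=1000)  # stable accuracy
    steady = rng.normal(loc=0.82, scale=0.05, size=1000)     # same distribution
    drifted = rng.normal(loc=0.74, scale=0.07, size=1000)    # degraded accuracy
    print(should_retune(reference, steady))   # False: no significant drift
    print(should_retune(reference, drifted))  # True: retune is warranted
```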

Beyond the core ML tasks, you’ll also be a key contributor to our research initiatives. You’ll evaluate and experiment with new model architectures, foundational models, and emerging techniques in large-scale machine learning and optimization. As part of the full-stack experience, you’ll work closely with other team members to build intuitive front-end interfaces, dashboards, and APIs. These tools will enable rapid iteration, real-time monitoring, and easy configuration of models and pipelines, making it possible for both technical and non-technical stakeholders to guide model evolution effectively.

Key Responsibilities:

  • Research, prototype, and integrate new model architectures and foundational models into our pipeline.

  • Develop and maintain real-time RAG workflows, ensuring efficient adaptation to new text, video, and streaming data sources (a minimal sketch follows this list).

  • Implement statistical methods to determine when models need retuning, leveraging metrics, significance tests, and distributional analyses.

  • Collaborate with Software Engineers to build front-end interfaces and dashboards for monitoring performance and triggering model updates.

  • Continuously refine embeddings, vector databases, and model architectures to drive improved accuracy, latency, and stability.
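
To make the RAG workflow in the second bullet concrete, here is a minimal, self-contained sketch under simplifying assumptions: the bag-of-words “embedding” stands in for a real embedding model, the in-memory list stands in for a vector database, and `answer_with_context` only assembles the prompt a generator model would receive rather than calling an LLM.

```python
# Minimal retrieval-augmented generation sketch. The pieces below are toy
# placeholders: a bag-of-words "embedding", an in-memory corpus instead of a
# vector database, and prompt assembly instead of an actual LLM call.
import math
from collections import Counter


def embed(text):
    """Toy embedding: lower-cased bag-of-words counts (placeholder for a real model)."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[token] * b[token] for token in a if token in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query, corpus, k=2):
    """Return the k passages most similar to the query (a real system would query a vector DB)."""
    query_vec = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, embed(doc)), reverse=True)
    return ranked[:k]


def answer_with_context(query, corpus):
    """Assemble the prompt a generator model would receive; generation itself is omitted."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Answer the question using only this context:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    corpus = [
        "The model is retuned when significance tests detect drift in recent metrics.",
        "Embeddings for text and video frames are stored in a vector database.",
        "Retune decisions are surfaced on the monitoring dashboard in real time.",
    ]
    print(answer_with_context("When do we retune the model", corpus))
```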

Must Haves:

  • Strong proficiency in Python

  • Deep understanding of ML model development (e.g., LLMs, embedding techniques)

  • Experience with Retrieval-Augmented Generation (RAG) pipelines, fine-tuning APIs, and similar ML workflows

  • Strong statistical background for evaluating model performance 

Nice to Haves:

  • Proficiency in frameworks like PyTorch or TensorFlow

  • Knowledge of vector databases, embedding stores, and scalable ML serving platforms

  • Experience with CI/CD tools and ML workflow management (MLflow, Kubeflow)

  • Prior research background (publications, patents) in ML, especially in foundational models or large-scale adaptation techniques

What We Offer:

  • Competitive compensation and equity

  • Apple equipment

  • Health, dental, and vision insurance.

  • Opportunity to build foundational machine learning infrastructure from scratch and influence the product’s technical trajectory.

  • Primarily in-person at our San Francisco office with hybrid flexibility.
