Founding Engineer - Full Stack

San Francisco

Pear VC

We’re seed specialists that partner with founders at the earliest stages to turn great ideas into category-defining companies.

About Us:

Mustafa and Varun met at Harvard, where they both did research at the intersection of computation and evaluation. Between them, they have authored multiple published papers in machine learning and hold numerous patents and awards. Drawing on their experience as tech leads at Snowflake and Lyft, they founded NomadicML to solve a critical industry challenge: bridging the performance gap between model development and production deployment.

At NomadicML, we leverage advanced techniques such as retrieval-augmented generation, adaptive fine-tuning, and GPU-accelerated inference to significantly improve machine learning models in domains like video generation, healthcare, and autonomous systems. Backed by Pear VC and BAG VC, early investors in DoorDash, Affinity, and other top Silicon Valley companies, we’re committed to building cutting-edge infrastructure that helps teams realize the full potential of their ML deployments.

About the Role:

As a Founding Engineer, you will build and maintain the end-to-end infrastructure behind our real-time, continuously adapting ML platform. You’ll architect and optimize our data ingestion pipelines, integrating Kafka and Flink for streaming, and build robust APIs that connect front-end interfaces, ML pipelines, and underlying storage systems. By establishing strong observability practices, CI/CD tooling, and highly scalable backend services, you’ll ensure the platform handles dynamic loads and growing complexity without sacrificing latency or reliability.
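
To make the ingestion side concrete, the first hop of such a pipeline might look like the minimal sketch below. This is an illustration only, assuming the kafka-python client; the topic name, broker address, and downstream handler are placeholders, not a description of our actual stack.

    # Minimal sketch of a streaming ingestion step (illustrative only).
    import json
    from kafka import KafkaConsumer

    def handle_event(event: dict) -> None:
        # Placeholder for downstream processing, e.g. forwarding records to a
        # Flink job or writing them to a storage layer.
        print(event.get("type"), event.get("id"))

    consumer = KafkaConsumer(
        "metadata-events",                    # hypothetical topic
        bootstrap_servers="localhost:9092",   # hypothetical broker
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        handle_event(message.value)

In practice the same consumer loop would feed stream processors rather than printing, but the shape of the code is representative of the day-to-day work.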

You’ll also collaborate on research-driven experimentation. Working closely with our team, you’ll support the rapid evaluation of new models and techniques. Your backend and full-stack capabilities will create an environment where novel ML approaches can be seamlessly tested, integrated, and iterated upon. Whether it’s spinning up GPU-accelerated instances for fast inference, fine-tuning backend APIs for new embedding strategies, or streamlining data flows for model comparison experiments, your role will be pivotal in turning research insights into production-ready features.
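
As one illustration of the backend-API side of that work, a minimal embedding endpoint could look like the sketch below, assuming FastAPI and sentence-transformers; the model name and route are hypothetical, not part of our production services.

    # Minimal sketch of an embedding API (illustrative only).
    from fastapi import FastAPI
    from pydantic import BaseModel
    from sentence_transformers import SentenceTransformer

    app = FastAPI()
    model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

    class EmbedRequest(BaseModel):
        texts: list[str]

    @app.post("/embed")
    def embed(req: EmbedRequest) -> dict:
        # encode() runs on GPU automatically when PyTorch detects one,
        # which is where GPU-accelerated inference enters the picture.
        vectors = model.encode(req.texts).tolist()
        return {"embeddings": vectors}

Swapping in a new embedding strategy then becomes a matter of changing the model behind the endpoint without touching its consumers.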

Key Responsibilities:

  • Design and implement scalable ingestion pipelines using Kafka and Flink to handle real-time text, video, and metadata streams.

  • Build and maintain backend APIs that interface smoothly with ML components, front-end dashboards, and storage layers.

  • Integrate observability and CI/CD practices to enable quick iteration, safe rollouts, and immediate feedback loops.

  • Support the research and experimentation of new ML models, ensuring that backend services and APIs can adapt rapidly to novel requirements.

  • Collaborate with ML Engineers to ensure that infrastructure, tooling, and workflows accelerate model evolution and performance tuning.

Must Haves:

  • Strong programming skills in Python and/or JavaScript, and experience building backend APIs and services

  • Prior experience setting up CI/CD pipelines for ML integration

  • Understanding of ML workflow management and scaling model serving infrastructure

Nice to Haves:

  • Familiarity with containerization and infrastructure-as-code tooling (Docker, Kubernetes, Terraform) and with observability tools (Grafana, Prometheus)

  • Experience integrating with GPU-accelerated platforms for low-latency inference

  • Familiarity with vector databases, embedding stores, and ML serving frameworks

  • Proficiency with distributed systems and streaming platforms (e.g. Apache Kafka, Confluent)

What We Offer:

  • Competitive compensation and equity

  • Apple equipment

  • Health, dental, and vision insurance

  • Opportunity to build foundational machine learning infrastructure from scratch and influence the product’s technical trajectory

  • Primarily in-person at our San Francisco office, with hybrid flexibility
