Software Engineer - ML Inference and Performance

San Francisco Bay Area

Baseten

Effortlessly serve optimized open source & custom models on the fastest, most reliable model delivery network


ABOUT BASETEN

We’re a growing team of builders backed by top-tier investors, including IVP, Spark Capital, Greylock, and Sarah Guo at Conviction. ML teams at enterprises and category-defining AI-native companies like Descript, Bland.ai, Patreon, Writer, and Robust Intelligence use Baseten to power their core production workloads with best-in-class performance, security, and reliability. While we’ve found product-market fit and secured Series B funding, the ML infrastructure market is massive, and we’re just getting started. If you’re excited to work on engaging and relevant problems while building something new from the ground up, come join us!

THE ROLE

Are you passionate about advancing the frontiers of artificial intelligence? We are looking for a Senior Software Engineer focused on model performance to join our dynamic team. This role is ideal for someone who thrives in a fast-paced startup environment and is eager to make significant contributions to machine learning. If you are a backend engineer who loves making things faster and is excited about open source ML models, we look forward to your application.

RESPONSIBILITIES:

  • Implement, refine, and productionize cutting-edge techniques for ML model inference and infrastructure.

  • Deep dive into underlying codebases of TensorRT, PyTorch, Transformers, CUDA, and other libraries to debug ML performance issues.

  • Apply and scale optimization techniques across a wide range of ML models, particularly large language models.

  • Collaborate with a diverse team to design and implement innovative solutions.

  • Own projects from idea to production, including writing project specs and managing end-to-end feature implementation.

REQUIREMENTS:

  • Bachelor's, Master's, or Ph.D. degree in Computer Science, Engineering, Mathematics, or related field.

  • 3+ years of professional work experience in a fast-paced, high-growth environment.

  • Experience with one or more general-purpose programming languages, such as Python, C++, or Go.

  • Deep understanding of software engineering principles and a proven track record of developing and deploying AI/ML inference solutions.

  • Strong familiarity with ML libraries, especially PyTorch, TensorRT, or TensorRT-LLM.

  • Demonstrated interest and experience in machine learning and large language models.

  • Deep understanding of GPU architecture.

  • Experience with Docker and Kubernetes.

  • Experience and interest in growing and leading a team are highly valued.

BONUS POINTS:

  • Proficiency in enhancing the performance of software systems, particularly in the context of large language models (LLMs).

  • Familiarity with LLM optimization techniques (e.g., quantization, speculative decoding, continuous batching).

  • Experience with CUDA or similar technologies.

BENEFITS:

  • Competitive compensation package (Unlimited PTO, 401k, covered healthcare premiums).

  • A unique opportunity to be part of a rapidly growing startup in one of the most exciting engineering fields of our era.

  • An inclusive and supportive work culture that fosters learning and growth.

  • Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.

Apply Now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.

At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.
