LLM Inference Engineer

Palo Alto

Hippocratic AI

The First Safety-Focused LLM for Healthcare


About Us:

Hippocratic AI is building a safety-focused large language model (LLM) for the healthcare industry. Our team, comprising former researchers from Microsoft, Meta, NVIDIA, Apple, Stanford, Johns Hopkins, and Hugging Face, is reinventing the next generation of foundation model training and alignment to create AI-powered conversational agents for real-time patient-AI interactions.

About the Role

We're seeking an experienced LLM Inference Engineer to optimize our large language model (LLM) serving infrastructure. The ideal candidate has:

  • Extensive hands-on experience with state-of-the-art inference optimization techniques

  • A track record of deploying efficient, scalable LLM systems in production environments

Key Responsibilities

  • Design and implement multi-node serving architectures for distributed LLM inference

  • Optimize multi-LoRA serving systems (a brief serving sketch follows this list)

  • Apply advanced quantization techniques (FP4/FP6) to reduce model footprint while preserving quality

  • Implement speculative decoding and other latency optimization strategies

  • Develop disaggregated serving solutions with optimized caching strategies for prefill and decoding phases

  • Continuously benchmark and improve system performance across various deployment scenarios and GPU types
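
To ground the multi-LoRA item above, here is a minimal sketch of what multi-LoRA serving can look like, assuming vLLM's documented LoRA support; the base model name, adapter names, and adapter paths are hypothetical placeholders rather than part of our stack.

    # Minimal multi-LoRA serving sketch (assumes vLLM with LoRA support enabled).
    # Base model, adapter names, and adapter paths are hypothetical placeholders.
    from vllm import LLM, SamplingParams
    from vllm.lora.request import LoRARequest

    # One shared copy of the base model; LoRA adapters are attached per request.
    llm = LLM(
        model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical base model
        enable_lora=True,
        max_loras=4,       # adapters co-resident on the GPU at any one time
        max_lora_rank=16,
    )
    params = SamplingParams(temperature=0.0, max_tokens=128)

    # Route different request types to different adapters on the same deployment.
    triage_out = llm.generate(
        ["Patient reports mild chest discomfort after walking ..."],
        params,
        lora_request=LoRARequest("triage_adapter", 1, "/adapters/triage"),      # hypothetical path
    )
    followup_out = llm.generate(
        ["Summarize this post-discharge follow-up call ..."],
        params,
        lora_request=LoRARequest("followup_adapter", 2, "/adapters/followup"),  # hypothetical path
    )

The design point is that many task-specific adapters share one copy of the base weights and one KV-cache pool, so adding a use case does not multiply GPU memory or deployments; the optimization work is in adapter scheduling, paging, and batching requests that target different adapters together.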

Required Qualifications

  • 2+ years of experience optimizing LLM inference systems at scale

  • Proven expertise with distributed serving architectures for large language models

  • Hands-on experience implementing quantization techniques for transformer models

  • Strong understanding of modern inference optimization methods, including:

    • Speculative decoding techniques with draft models (a toy sketch follows this section)

    • EAGLE-style speculative decoding approaches

  • Proficiency in Python and C++

  • Experience with CUDA programming and GPU optimization (familiarity required, expert-level not necessary)
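
For concreteness on the speculative decoding items above, here is a toy, library-free sketch of draft-model speculative decoding under greedy sampling; draft_next and target_next are hypothetical stand-ins for real model forward passes, and a production implementation would verify the whole draft block in one batched pass.

    # Toy sketch of draft-model speculative decoding (greedy variant).
    # draft_next / target_next are hypothetical stand-ins for real model calls:
    # each maps a token prefix to that model's single most likely next token.
    from typing import Callable, List

    def speculative_generate(
        prefix: List[int],
        draft_next: Callable[[List[int]], int],
        target_next: Callable[[List[int]], int],
        num_draft: int = 4,
        max_new_tokens: int = 64,
    ) -> List[int]:
        tokens = list(prefix)
        while len(tokens) - len(prefix) < max_new_tokens:
            # 1. The cheap draft model proposes a short block of candidate tokens.
            draft: List[int] = []
            for _ in range(num_draft):
                draft.append(draft_next(tokens + draft))

            # 2. The target model checks the block left to right; under greedy
            #    decoding a draft token is accepted iff the target would have
            #    produced the same token at that position.
            accepted = 0
            for i, tok in enumerate(draft):
                if target_next(tokens + draft[:i]) == tok:
                    accepted += 1
                else:
                    break
            tokens.extend(draft[:accepted])

            # 3. The target model then contributes one guaranteed token (the
            #    correction on a mismatch, or the next token after a fully
            #    accepted block), so the output matches plain greedy decoding
            #    with the target model alone, only faster when drafts hit.
            tokens.append(target_next(tokens))
        return tokens[: len(prefix) + max_new_tokens]

In a real serving stack the verification in step 2 is a single batched forward pass over the whole draft block, which is where the latency win comes from; EAGLE-style approaches replace the separate draft model with a lightweight head that drafts from the target model's own hidden states.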

Preferred Qualifications

  • Contributions to open-source inference frameworks such as vLLM, SGLang, or TensorRT-LLM

  • Experience with custom CUDA kernels

  • Track record of deploying inference systems in production environments

  • Deep understanding of systems-level performance optimization

Why Join Us?

Our team is pushing the boundaries of what's possible with LLM deployment. If you're passionate about making state-of-the-art language models more efficient and accessible, we'd love to hear from you!

  • Innovative Mission: We are developing a safe, healthcare-focused large language model (LLM) designed to revolutionize health outcomes on a global scale.

  • Visionary Leadership: Hippocratic AI was co-founded by CEO Munjal Shah, alongside a group of physicians, hospital administrators, healthcare professionals, and artificial intelligence researchers from leading institutions, including El Camino Health, Johns Hopkins, Stanford, Microsoft, Google, and NVIDIA.

  • Strategic Investors: We have raised a total of $278 million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA’s NVentures, Premji Invest, SV Angel, and six health systems.

  • World-Class Team: Our team is composed of leading experts in healthcare and artificial intelligence, ensuring our technology is safe, effective, and capable of delivering meaningful improvements to healthcare delivery and outcomes.

For more information, visit www.HippocraticAI.com.

We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA, unless explicitly noted otherwise in the job description.

