Member of Technical Staff, Research Engineer / Research Scientist (Inference)

Palo Alto, CA

Inflection

It’s simple. We train and tune it. You own it. Let's do enterprise AI right.

Inflection AI is a public benefit corporation leveraging our world-class large language model to build the first AI platform focused on the needs of the enterprise.

Who we are:

Inflection AI was re-founded in March 2024, and our leadership team has assembled a group of kind, innovative, and collaborative individuals focused on building enterprise AI solutions. We are passionate about what we are building, enjoy working together, and strive to hire people with diverse backgrounds and experience.

Our first product, Pi, is an empathetic, conversational chatbot. Pi is a public example of what we can build from our 350B+ frontier model with our sophisticated fine-tuning (10M+ examples), inference, and orchestration platform. We are now focused on building new systems that directly support the needs of enterprise customers using this same approach.

Want to work with us? Have questions? Learn more below.

About the Role

As a Member of Technical Staff, Research Engineer on our Inference team, you will be essential to the real-time performance and reliability of our AI systems. Your role is pivotal in optimizing inference pipelines, reducing latency, and translating cutting-edge research into enterprise-ready applications.

This is a good role for you if you:

  • Have extensive experience deploying and optimizing large-scale language models for real-time inference.
  • Are skilled with inference optimization tools and frameworks such as ONNX, TensorRT, or TVM (see the sketch after this list).
  • Thrive in fast-paced environments where real-world application performance is paramount.
  • Understand the intricate trade-offs between model accuracy, latency, and scalability.
  • Are passionate about delivering robust, efficient, and scalable inference solutions that drive our enterprise success.
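
For a flavor of the optimization work this list describes, here is a minimal sketch (in Python, not Inflection's production stack) that exports a small stand-in PyTorch model to ONNX and compares its eager-mode latency with ONNX Runtime. The model, file name, and timing loop are illustrative placeholders only.

```python
import time

import torch
import onnxruntime as ort

# Small stand-in model; a production LLM would be orders of magnitude larger.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 2048),
    torch.nn.ReLU(),
    torch.nn.Linear(2048, 512),
).eval()
example = torch.randn(1, 512)

# Export to ONNX so an optimized runtime (ONNX Runtime, TensorRT, etc.) can serve it.
torch.onnx.export(
    model, example, "toy_model.onnx",
    input_names=["x"], output_names=["y"],
    dynamic_axes={"x": {0: "batch"}, "y": {0: "batch"}},
)
session = ort.InferenceSession("toy_model.onnx", providers=["CPUExecutionProvider"])

def mean_latency_ms(fn, iters=100):
    """Average wall-clock latency per call, in milliseconds."""
    fn()  # warm-up
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1e3

with torch.no_grad():
    eager_ms = mean_latency_ms(lambda: model(example))
ort_ms = mean_latency_ms(lambda: session.run(["y"], {"x": example.numpy()}))
print(f"eager PyTorch: {eager_ms:.3f} ms/call  |  ONNX Runtime: {ort_ms:.3f} ms/call")
```

In practice the trade-offs named above apply: an exported, optimized graph is typically faster and cheaper to serve, but every conversion step must be validated against the original model's accuracy.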

Responsibilities include:

  • Optimizing inference pipelines to maximize model performance and minimize latency in production environments.
  • Collaborating with ML researchers and engineers to deploy inference solutions that meet rigorous enterprise standards.
  • Integrating and refining tools to streamline the transition from research prototypes to production-ready systems.
  • Continuously monitoring and tuning system performance with real-world data to drive improvements (see the latency-measurement sketch after this list).
  • Pioneering innovations in model inference that are critical to the success of our AI platform.
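
As a small, hedged illustration of the monitoring-and-tuning loop described above, the sketch below records per-request latency for a stand-in inference function and reports p50/p95/p99. `fake_infer` is a hypothetical placeholder for a real model call; production monitoring would use real traffic and a metrics pipeline rather than an in-process loop.

```python
import random
import time
from typing import Callable, List

def measure_latencies_ms(fn: Callable[[], None], iters: int = 200) -> List[float]:
    """Record wall-clock latency (ms) for repeated calls to an inference function."""
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e3)
    return samples

def percentile(samples: List[float], q: float) -> float:
    """Nearest-rank percentile (q in 0..100) of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, min(len(ordered), round(q / 100 * len(ordered))))
    return ordered[rank - 1]

if __name__ == "__main__":
    # Hypothetical stand-in for a model call with variable latency (5-20 ms).
    fake_infer = lambda: time.sleep(random.uniform(0.005, 0.020))
    latencies = measure_latencies_ms(fake_infer)
    for q in (50, 95, 99):
        print(f"p{q}: {percentile(latencies, q):.2f} ms")
```

Tail percentiles (p95/p99), not just averages, are usually what matter for real-time serving, since they capture the slow requests users actually notice.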

Employee Pay Disclosures

At Inflection AI, we aim to attract and retain the best employees and to compensate them in a way that appropriately and fairly values their individual contributions to the company. For this role, Inflection AI estimates that the starting annual base salary will fall in the range of approximately $175,000 - $350,000, depending on experience. Because the estimate varies with factors such as experience, the actual starting annual base salary may be above or below this range.

Interview Process

Apply: Please apply on LinkedIn or our website for a specific role.

After speaking with one of our recruiters, you’ll enter our structured interview process, which includes the following stages:

  1. Hiring Manager Conversation – An initial discussion with the hiring manager to assess fit and alignment.
  2. Technical Interview – A deep dive with an Inflection Engineer to evaluate your technical expertise.
  3. Onsite Interview – A comprehensive assessment, including:
    • A domain-specific interview
    • A system design interview
    • A final conversation with the hiring manager

Depending on the role, we may also ask you to complete a take-home exercise or deliver a presentation.

For non-technical roles, be prepared for a role-specific interview, such as a portfolio review.

Decision Timeline

We aim to provide feedback within one week of your final interview.

 
