Senior Software Engineer, TensorRT Inference

Santa Clara, CA, United States

NVIDIA

NVIDIA is the inventor of the GPU, whose advancements drive progress in AI and high-performance computing.


We are now looking for a Senior Software Engineer for our TensorRT Inference team!

At NVIDIA, we're at the forefront of innovation, driving advancements in AI and machine learning to solve some of the world’s most challenging problems. We're seeking talented and motivated engineers to join our TensorRT team in developing the industry-leading deep learning inference software for NVIDIA AI accelerators. 

What you’ll be doing:

As a Senior Software Engineer in the TensorRT team, you will be responsible for designing and implementing inference software optimizations to power AI applications on NVIDIA GPUs. If you're ready to take on challenging projects and make a significant impact in a company that values creativity, excellence, and collaboration, we want to hear from you! Key responsibilities include:

  • Design, develop, and optimize NVIDIA TensorRT to deliver tightly coordinated, responsive inference applications for datacenters, workstations, and PCs.

  • Develop software in C++, Python, and CUDA to enable seamless and efficient deployment of state-of-the-art LLM and Generative AI models.

  • Collaborate with deep learning experts and GPU architects throughout the company to influence hardware and software strategy for inference.

What we need to see:

  • BS, MS, PhD, or equivalent experience in Computer Science, Computer Engineering, or a related field.

  • 8+ years of software development experience on a large codebase or project.

  • Strong proficiency in C++ and Python programming languages.

  • Experience developing deep learning frameworks, compilers, or system software.

  • Foundational knowledge of machine learning techniques or GPU optimization.

  • Excellent problem-solving skills and the ability to learn and work effectively in a fast-paced, collaborative environment.

  • Strong communication skills and the ability to articulate complex technical concepts.

Ways to stand out from the crowd:

  • Background in developing inference backends and compilers for GPUs.

  • Knowledge of GPU programming and optimizations using CUDA or OpenCL.

  • Experience working with LLM inference frameworks such as TRT-LLM, vLLM, or SGLang.

  • Experience working with deep learning frameworks such as TensorRT, PyTorch, or JAX.

  • Knowledge of CUDA performance analysis, optimization techniques, and tools.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative, autonomous, and love a challenge, we want to hear from you. Come join our team and help build the real-time, cost-effective computing platform driving our success in this exciting and quickly growing field.

#LI-Hybrid

The base salary range is 184,000 USD - 356,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
