System Software Engineer, LLM Inference and Performance Optimization
US, CA, Santa Clara
NVIDIA
NVIDIA invented the GPU and drives advances in AI, HPC, gaming, creative design, autonomous vehicles, and robotics. As a System Software Engineer (LLM Inference & Performance Optimization), you will be at the heart of our AI advancements. Our team is dedicated to pushing the boundaries of machine learning and optimizing large language models (LLMs) for flawless, real-time performance across diverse hardware platforms. This is your chance to contribute to world-class solutions that shape the future of technology.
What you'll be doing:
Design, implement, and optimize inference logic for fine-tuned LLMs, working closely with Machine Learning Engineers.
Develop efficient, low-latency glue logic and inference pipelines scalable across various hardware platforms, ensuring outstanding performance and minimal resource usage.
Apply hardware accelerators such as GPUs and other specialized hardware to improve inference speed and enable effective real-world applications.
Collaborate with cross-functional teams to integrate models seamlessly into diverse environments, meeting strict functional and performance requirements.
Conduct detailed performance analysis and optimization for specific hardware platforms, focusing on efficiency, latency, and power consumption.
What we need to see:
8+ years of expert proficiency in C++ with a deep understanding of memory management, concurrency, and low-level optimizations.
M.S. or higher degree (or equivalent experience) in Computer Science, Engineering, or a related field.
Strong experience in system-level software engineering, including multi-threading, data parallelism, and performance tuning.
Proven expertise in LLM inference, with experience in model serving frameworks such as ONNX Runtime or TensorRT.
Familiarity with real-time systems and performance-tuning techniques, especially for machine learning inference pipelines.
Ability to work collaboratively with Machine Learning Engineers and cross-functional teams to align system-level optimizations with model goals.
Extensive understanding of hardware architectures and the ability to leverage specialized hardware for optimized ML model inference.
Ways to stand out from the crowd:
Experience with deep learning hardware accelerators, such as NVIDIA GPUs.
Familiarity with ONNX, TensorRT, or cuDNN for LLM inference on GPU.
Experience with low-latency optimizations and real-time system constraints for ML inference.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Perks/benefits: career development, equity / stock options.