System Software Engineer - AI

India, Pune

NVIDIA

NVIDIA invents the GPU and drives advances in AI, HPC, gaming, creative design, autonomous vehicles, and robotics.



NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI, the new era of computing, positioning GPUs as the driving force behind intelligent applications in productivity, gaming, and creative fields and solidifying NVIDIA's position as the leading "AI computing company." There is a growing emphasis on processing AI computations at the edge, closer to the source of data. This approach reduces latency, enables real-time processing, and addresses privacy concerns by minimizing the need to send data to centralized servers. As technology continues to advance, we expect client-side AI (local execution) to play a pivotal role in shaping the digital landscape.

The Windows AI team (WinAI) is seeking a Senior Systems Software Engineer who is passionate about solving the challenges of client-side AI on Windows PCs, navigating complexities such as limited compute and memory resources, eager to perform in-depth analysis of AI models and contribute to open source, and committed to ensuring optimal execution of training and inference workloads across locally available devices (such as GPUs and NPUs).

What You’ll Be Doing:

  • Partner with NVIDIA software, research, architecture, and product teams to align strategies and technical needs and foster the AI ecosystem on Windows RTX PCs. 

  • Perform in-depth analysis and optimization of AI models, data processing pipelines, and inference backends to ensure the best performance on current and next-generation GPU architectures. 

  • Identify, research, and implement compute and memory optimization techniques, perform competitive analysis, and work with training and inference framework teams to incorporate these optimizations into their backends. 

  • Collaborate with open-source and ISV developers working on generative AI (such as large language models and Stable Diffusion), and develop reference projects and libraries using backends like TensorRT-LLM that enable developers to run these products natively on Windows GPUs with optimal performance. 

  • Fine-tune AI models and apply compression techniques such as quantization, distillation, and pruning to fit models onto users' Windows edge devices and enhance the performance of inference engines. 

  • Collaborate with Microsoft to drive advancements in APIs, AI frameworks, and platforms for developing and deploying AI inference applications. 

  • Collaborate with the automation team to deploy directed tests effectively and keep automated testing robust. 

 

What We Need To See:

  • Bachelor's, Master's, or PhD in Computer Science, Software Engineering, Mathematics, or a related field (or equivalent experience). 

  • Excellent C++ programming and debugging skills with a strong understanding of data structures and algorithms. 

  • 4+ years of demonstrated experience building AI inference pipelines and applications using ML/DL frameworks such as ONNX Runtime, DirectML, PyTorch, and TensorRT. 

  • Strong analytical and problem-solving abilities, with the capacity to multitask effectively in a dynamic environment. 

  • Outstanding written and oral communication skills, enabling effective collaboration with management and engineering teams. 

 

Ways To Stand Out from The Crowd:

  • Understanding of modern techniques in machine learning, deep neural networks, and generative AI, with relevant contributions to major open-source projects, is a plus. 

  • Consistent track record of delivering end-to-end products with geographically distributed teams in multinational product companies. 

  • Proficiency in low-level system/GPU programming, CUDA, and developing high-performance systems. 

  • Hands-on experience building applications using APIs such as ONNX Runtime, DirectML, DirectX, PyTorch, TensorRT, and Vulkan. 

 

We are an equal-opportunity employer and value diversity at our company. With competitive salaries and a generous benefits package, we are widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to unprecedented growth, our exclusive engineering teams are rapidly growing. If you're a creative and autonomous engineer with a real passion for technology, we want to hear from you. 




Tags: APIs Architecture Computer Science CUDA Deep Learning Engineering Generative AI GPU LLMs Machine Learning Mathematics ONNX Open Source PhD Pipelines Privacy PyTorch Research Stable Diffusion TensorRT Testing Vulkan

Perks/benefits: Career development Startup environment

Region: Asia/Pacific
Country: India
