Senior MLOps Engineer, GenAI Framework

US, CA, Santa Clara

NVIDIA

NVIDIA invented the GPU and drives advances in AI, HPC, gaming, creative design, autonomous vehicles, and robotics.



NVIDIA is looking for a dedicated and motivated senior build and continuous integration (CI/CD) engineer for its GenAI Frameworks (NeMo, Megatron Core) team. NVIDIA NeMo is an open-source, scalable, and cloud-native framework built for researchers and developers working on Large Language Models (LLM), Multimodal (MM), and Speech AI. NeMo provides end-to-end model training, including data curation, alignment, customization, evaluation, deployment, and tooling to optimize performance and user experience. Building on modern DevOps tools, your work will enable GenAI framework software engineers and deep learning algorithm engineers to work efficiently across a wide variety of deep learning algorithms and software stacks as they seek out performance optimization opportunities and continuously deliver high-quality software.

Does the idea of pushing the boundaries of state-of-the-art research and development excite you? Are you interested in getting exposure to the entire DL SW stack? Then come join our technically diverse team of DL algorithm engineers and performance optimization specialists to unlock unprecedented deep learning performance in every domain.

What you’ll be doing:

  • Architect and lead the build and release continuous integration processes for our generative AI frameworks and libraries, including the NeMo framework and Megatron Core.

  • Propose, implement, and deploy efficient and scalable DevOps solutions to allow our fast-growing team to release software more frequently while maintaining high quality and top performance.

  • Work with industry-standard tools (Kubernetes, Docker, Slurm, Ansible, GitLab, GitHub Actions, Jenkins, Artifactory, Jira).

  • Assist with cluster operations and system administration (managing servers, team accounts, and clusters).

  • Automate recurring tasks (e.g., DL algorithm accuracy and performance regression detection) and design and develop new quality-control measures such as code analysis, while employing and advancing best practices.

  • Work closely with the DL frameworks and libraries (CUDA, cuDNN, cuBLAS) team and with other relevant teams within NVIDIA that provide software build, testing, and release infrastructure.

What we need to see:

  • BS or MS degree in Computer Science, Computer Architecture, or a related technical field, or equivalent experience.

  • 5+ years of industry experience in infrastructure engineering or DevOps.

  • Strong system-level programming skills in languages such as Python and shell scripting.

  • Strong understanding of build/release systems and CI/CD, with experience in solutions such as GitLab, GitHub, and Jenkins.

  • Experience with Linux system administration.

  • Proficient with containerization and cluster management technologies like Docker and Kubernetes.

  • Experience with build tools, including Make and CMake.

  • Experience using or deploying source code management (SCM) solutions such as GitLab, GitHub, Perforce, etc.

  • Excellent problem-solving and debugging skills.

  • A great teammate who can collaborate and influence in a dynamic environment, with excellent interpersonal and written communication skills.

Ways to stand out from the crowd:

  • Previous experience with GPU accelerated systems.

  • Hands-on experience with DL frameworks (PyTorch, JAX, TensorFlow).

  • Experience with cluster/cloud technologies, e.g., Slurm, Lustre, Kubernetes (k8s).

  • Experience with HPC hardware systems such as compute clusters, and with HPC software performance benchmarking on such systems.

The base salary range is 180,000 USD - 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Tags: Ansible Architecture CI/CD CMake Computer Science CUDA cuDNN Deep Learning DevOps Docker Engineering Generative AI GitHub GitLab GPU HPC JAX Jenkins Jira Kubernetes Linux LLMs MLOps Model training Open Source Python PyTorch Research Shell scripting TensorFlow Testing

Perks/benefits: Career development Equity / stock options

Region: North America
Country: United States
