Senior MLOps Engineer, GenAI Framework
Santa Clara, CA, United States
NVIDIA
NVIDIA is the inventor of the GPU, whose advances drive progress in artificial intelligence and high-performance computing.
NVIDIA is looking for a dedicated and motivated senior build and continuous integration (CI/CD) engineer for its GenAI Frameworks (Megatron-LM and NeMo Framework) team. Megatron-LM and NeMo Framework are open-source, scalable, cloud-native frameworks built for researchers and developers working on Large Language Models (LLM), Multimodal (MM) models, and Video Generation. Megatron-LM and NeMo Framework provide end-to-end model training, including data curation, alignment, customization, evaluation, deployment, and tooling to optimize performance and user experience. Building upon the latest DevOps tools, your work will enable GenAI framework software engineers, deep learning algorithm engineers, and research scientists to work efficiently with a wide variety of deep learning algorithms and software stacks as they vigilantly seek out opportunities for performance optimization and continuously deliver high-quality software.
Does the idea of pushing the boundaries of innovative research and development excite you? Are you interested in getting exposure to the entire DL SW stack? Then join our technically diverse team of DL algorithm engineers and performance optimization specialists to unlock unprecedented deep learning performance in every domain.
What you’ll be doing:
Architect and manage the continuous integration pipelines and release processes of our Generative AI framework and libraries related to Megatron-LM and NeMo Framework.
Design and implement efficient and scalable DevOps solutions to allow our fast growing team to release software more frequently while maintaining high-quality and maximum performance.
Work with industry standard tools (Kubernetes, Docker, Slurm, Ansible, GitLab, GitHub Actions, Jenkins, Artifactory, Jira) in hybrid on-premise and cloud environments.
Assist with cluster operations and system administration (managing servers, team accounts, and clusters).
Accelerate research and development cycles by automating recurring tasks such as accuracy and performance regression detection (see the sketch following this list).
Develop new quality-control measures, e.g. code analysis, backwards-compatibility checks, and regression testing, while employing and advancing best practices.
Work closely with DL frameworks and libraries (CUDA, cuDNN, cuBLAS, and PyTorch) teams and with other engineering teams within NVIDIA that provide software, testing, and release related infrastructure.
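For illustration only, here is a minimal Python sketch of the kind of regression-detection automation referenced above; the baseline/result file layout, metric names, and 5% tolerance are assumptions for the example, not part of any NVIDIA tooling.

"""Minimal sketch of a performance-regression gate for a CI job.

Compares a fresh benchmark result against a stored baseline and fails
the job if any throughput metric drops beyond a tolerance. File names,
metric keys, and the 5% threshold are illustrative assumptions.
"""
import json
import sys

TOLERANCE = 0.05  # allow up to a 5% throughput regression


def check_regression(baseline_path: str, result_path: str) -> int:
    # Both files are assumed to be JSON dicts mapping metric name -> throughput.
    with open(baseline_path) as f:
        baseline = json.load(f)
    with open(result_path) as f:
        result = json.load(f)

    failures = []
    for metric, base_value in baseline.items():
        new_value = result.get(metric)
        if new_value is None:
            failures.append(f"{metric}: missing from current run")
        elif new_value < base_value * (1.0 - TOLERANCE):
            drop = 1.0 - new_value / base_value
            failures.append(f"{metric}: {new_value:.2f} < {base_value:.2f} (-{drop:.1%})")

    for line in failures:
        print(f"REGRESSION {line}")
    # Non-zero exit code fails the CI job.
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(check_regression(sys.argv[1], sys.argv[2]))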
What we need to see:
BS or MS degree in Computer Science, Computer Architecture or related technical field (or equivalent experience) and 6+ years of industry experience in DevOps and infrastructure engineering.
Strong system-level programming skills in languages such as Python and shell scripting.
Extensive understanding of build/release systems and CI/CD, and experience with solutions such as GitLab, GitHub, Jenkins, etc.
Experience with Linux system administration.
Proficient with containerization and cluster management technologies like Docker and Kubernetes.
Experience with build tools, including Make and CMake.
A strong background in source code management (SCM) solutions such as GitLab, GitHub, Perforce, etc.
Strong problem-solving and debugging skills.
Great teammate who can collaborate and influence others in a dynamic environment.
Excellent interpersonal and written communication skills.
Ways to stand out from the crowd:
Proven track record with GPU-accelerated systems at scale.
Well-versed in DL frameworks such as PyTorch, Jax, or TensorFlow.
Expertise in cluster and cloud compute technologies, e.g. Slurm, Lustre, Kubernetes.
Experience with software and hardware benchmarking on high-performance computing systems.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.