Senior Site Reliability Engineer, HPC and LSF

US, NC, Durham, United States

NVIDIA

NVIDIA invented the GPU and drives advances in AI, HPC, gaming, creative design, autonomous vehicles, and robotics.



NVIDIA is the leader in AI, machine learning, and datacenter acceleration, and is expanding that leadership into datacenter networking with Ethernet switches, NICs, and DPUs. NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI, the next era of computing. NVIDIA is a "learning machine" that constantly evolves by adapting to new opportunities that are hard to solve, that only we can tackle, and that matter to the world. This is our life's work: to amplify human imagination and intelligence. Make the choice and join our diverse team today!
 

As a member of the Hardware Infrastructure Farm team, you will provide leadership in the design and implementation of groundbreaking compute clusters that power all silicon development across NVIDIA. We seek an expert to build and operate these clusters at high reliability, efficiency, and performance, and to drive foundational improvements and automation that raise engineers' productivity. As a Site Reliability Engineer, you are responsible for the big picture of how our systems relate to each other, and we use a breadth of tools and approaches to tackle a broad spectrum of problems. Practices such as limiting time spent on reactive operational work, blameless postmortems, and proactive identification of potential outages feed the iterative improvement that is key to both product quality and interesting, dynamic day-to-day work.

SRE's culture of diversity, intellectual curiosity, problem solving, and openness is important to our success. Our organization brings together people with a wide variety of backgrounds, experiences, and perspectives. We encourage them to collaborate, think big, and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, and we also strive to build an environment that provides the support and mentorship needed to learn and grow.


What you’ll be doing:

  • Manage and support workload and resource schedulers in a large-scale HPC environment.

  • Automate Everything: Develop scripts for deployment, configuration management, and operational monitoring.

  • Develop solutions for complex computing resource management requirements.

  • Extract and leverage grid performance metrics for troubleshooting and performance optimization.

  • Troubleshoot Complex Issues: Perform comprehensive troubleshooting from bare metal to application level, ensuring system reliability and efficiency.

  • Develop, define and document standard methodologies to share with internal teams.

  • Collaborate with domain experts to improve how our chip development process utilizes our infrastructure.

  • Directly contribute to the overall quality and improve time to market for our next generation chips.


What we need to see:

  • Extensive knowledge of job scheduler administration (e.g., IBM Spectrum LSF or Slurm).

  • Proficiency in administering CentOS/RHEL Linux distributions.

  • In-depth understanding of container technologies such as Docker.

  • Proficiency in UNIX scripting languages and Python.

  • Excellent problem-solving skills, with the ability to analyze complex systems, identify bottlenecks, and implement scalable solutions.

  • Excellent communication and teamwork skills, with the ability to work effectively with diverse teams and individuals.

  • 10+ years of experience in a large, distributed Linux environment.

  • BS in Computer Science, a similar degree, or equivalent experience.


Ways to stand out from the crowd:

  • Experience analyzing and tuning performance for a variety of HPC or EDA workloads.

  • Solid understanding of cluster configuration management tools such as Ansible.

  • Proficiency in Perl for maintaining legacy automation scripts.

  • Deep understanding of distributed system principles.

The base salary range is 184,000 USD - 287,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.



