Software Engineer, SystemML - Scaling / Performance
Menlo Park, CA
Meta
Giving people the power to build community and bring the world closer together
In this role, you will be a member of the Network.AI Software team and part of the bigger DC networking organization. The team develops and owns the software stack around NCCL (NVIDIA Collective Communications Library), which enables multi-GPU and multi-node data communication through HPC-style collectives. NCCL has been integrated into PyTorch and is on the critical path of multi-GPU distributed training. In other words, nearly every distributed GPU-based ML workload in Meta Production goes through the SW stack the team owns.
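NCCL's core collective for gradient synchronization is ring all-reduce, which PyTorch invokes under the hood during distributed training. As a rough illustration only (a pure-Python toy simulating the ring algorithm, not NCCL's actual implementation), it proceeds in a reduce-scatter phase followed by an all-gather phase:

```python
def ring_allreduce(buffers):
    """Simulate ring all-reduce: every rank ends up with the elementwise sum.

    buffers: one equal-length list per rank; length must divide evenly
    into world_size chunks. Mutates and returns buffers.
    """
    n = len(buffers)              # world size
    c = len(buffers[0]) // n      # chunk length
    assert len(buffers[0]) == c * n, "vector length must divide by world size"

    def chunk(r, i):
        return buffers[r][i * c:(i + 1) * c]

    def set_chunk(r, i, vals):
        buffers[r][i * c:(i + 1) * c] = vals

    # Phase 1: reduce-scatter. After n-1 steps, rank r owns the fully
    # reduced (summed) chunk with index (r + 1) % n.
    for step in range(n - 1):
        # Snapshot outgoing chunks first, since all ranks "send" simultaneously.
        sends = [(r, (r - step) % n, list(chunk(r, (r - step) % n)))
                 for r in range(n)]
        for r, i, vals in sends:
            dst = (r + 1) % n  # ring neighbor
            set_chunk(dst, i, [a + b for a, b in zip(chunk(dst, i), vals)])

    # Phase 2: all-gather. Fully reduced chunks circulate around the ring
    # until every rank holds all of them.
    for step in range(n - 1):
        sends = [(r, (r + 1 - step) % n, list(chunk(r, (r + 1 - step) % n)))
                 for r in range(n)]
        for r, i, vals in sends:
            set_chunk((r + 1) % n, i, vals)
    return buffers
```

Each rank transmits roughly 2(n-1)/n times its buffer size regardless of ring length, which is why ring all-reduce is bandwidth-efficient and a workhorse for multi-GPU gradient synchronization.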
At a high level, the team aims to enable Meta-wide ML products and innovations to leverage our large-scale GPU training and inference fleet through an observable, reliable, and high-performance distributed AI/GPU communication stack. One of the team's current focuses is building customized features, software benchmarks, performance tuners, and software stacks around NCCL and PyTorch to improve full-stack distributed ML reliability and performance (e.g. large-scale GenAI/LLM training), from the trainer down to the inter-GPU and network communication layer. We are seeking engineers to work on GenAI/LLM scaling reliability and performance.
Software Engineer, SystemML - Scaling / Performance Responsibilities
- Enabling reliable and highly scalable distributed ML training on Meta's large-scale GPU training infra with a focus on GenAI/LLM scaling
Minimum Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.
- Specialized experience in one or more of the following machine learning/deep learning domains: Distributed ML Training, GPU architecture, ML systems, AI infrastructure, high performance computing, performance optimizations, or Machine Learning frameworks (e.g. PyTorch).
Preferred Qualifications
- PhD in Computer Science, Computer Engineering, or relevant technical field
- Experience with NCCL and distributed GPU reliability/performance improvement on RoCE/InfiniBand
- Experience working with DL frameworks like PyTorch, Caffe2 or TensorFlow
- Experience with both data parallel and model parallel training, such as Distributed Data Parallel, Fully Sharded Data Parallel (FSDP), Tensor Parallel, and Pipeline Parallel
- Experience in AI framework and trainer development for accelerating large-scale distributed deep learning models
- Experience in HPC and parallel computing
- Knowledge of GPU architectures and CUDA programming
- Knowledge of ML, deep learning, and LLMs
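As context for the data-parallel techniques listed above (DDP/FSDP), the core invariant is that averaging per-worker gradients over equal batch shards reproduces the full-batch gradient. A minimal sketch of that idea (a toy scalar model with illustrative helper names, not Meta's actual training stack):

```python
def grad_mse(w, xs, ys):
    # Gradient of mean((w*x - y)^2) with respect to the scalar weight w.
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def data_parallel_grad(w, xs, ys, world_size):
    # DDP-style step: shard the batch across workers, compute local
    # gradients, then "all-reduce" them (here, a plain average).
    shard = len(xs) // world_size
    local_grads = [
        grad_mse(w, xs[r * shard:(r + 1) * shard], ys[r * shard:(r + 1) * shard])
        for r in range(world_size)
    ]
    return sum(local_grads) / world_size
```

Because the average of per-shard mean gradients equals the full-batch mean gradient (for equal shard sizes), data-parallel training matches single-worker training step for step; FSDP additionally shards the parameters and optimizer state themselves to fit larger models.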
Individual compensation is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base hourly rate, monthly rate, or annual salary only, and do not include bonus, equity or sales incentives, if applicable. In addition to base compensation, Meta offers benefits. Learn more about benefits at Meta.