DL Communications Collectives SW Engineer
Locations: Santa Clara CA, Austin TX, Portland OR, or Fort Collins CO (US); Cambridge (UK)
We are working on software to improve the Deep Learning ecosystem and help hardware engineers build great Deep Learning parallel systems. We are looking for a strong candidate with a background in writing systems software for networking devices (and optionally the Linux kernel networking stack or network drivers), ideally someone who has implemented network protocols or worked on OpenMPI.

This role involves designing and implementing highly optimized communication collectives libraries similar to UCC (Unified Collective Communication) and NCCL (NVIDIA Collective Communications Library). The ideal candidate will work closely with hardware and software teams to ensure efficient data communication and synchronization across multiple AI accelerators in a distributed system, enabling scalable deep learning and high-performance computing applications.

You will learn technical and organizational skills from industry veterans: how to write performant and readable code; how to structure and communicate projects, ideas, and progress; and how to work effectively with the Open Source community. We are big proponents of Open Source and Free Software, and we contribute our improvements back to all the great projects we use.
Responsibilities
- Build up the communication components of an AI Software Stack
- Port AI software to run on a new hardware platform
- Profile and tune communications within AI applications
- Design, develop, and optimize communication collectives (e.g., AllReduce, AllGather, Broadcast, ReduceScatter) for large-scale distributed computing and machine learning frameworks.
- Implement and optimize communication algorithms (ring, tree, butterfly, etc.) tailored for our architectures and multi-node clusters.
- Ensure low-latency, high-bandwidth communication across multi-GPU setups, supporting interconnects such as PCIe and InfiniBand.
- Collaborate with hardware engineers and other software teams to optimize performance.
- Implement fault tolerance and scalability mechanisms in distributed systems to handle large-scale workloads.
- Write unit tests and benchmark tools to validate the performance and correctness of collective operations.
- Stay current with advancements in hardware and networking technologies to continuously improve the library's performance.
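To make the ring algorithm mentioned above concrete, here is a minimal pure-Python simulation of a ring AllReduce (reduce-scatter followed by all-gather). This is an illustrative sketch only, not how NCCL or UCC are implemented: real libraries pipeline concurrent sends and receives over GPU interconnects, while this version simulates each ring step sequentially over plain Python lists.

```python
def ring_allreduce(buffers):
    """In-place sum AllReduce over `buffers`, one equal-length list per rank."""
    p = len(buffers)
    n = len(buffers[0])
    assert n % p == 0, "sketch assumes the buffer splits evenly into p chunks"
    chunk = n // p

    def sl(c):
        # Index range of chunk c.
        return slice(c * chunk, (c + 1) * chunk)

    # Phase 1: reduce-scatter. At step t, rank r sends chunk (r - t) mod p
    # to its right neighbour, which accumulates it. After p - 1 steps,
    # rank r holds the fully reduced chunk (r + 1) mod p.
    for t in range(p - 1):
        for r in range(p):
            dst = (r + 1) % p
            s = sl((r - t) % p)
            for k in range(s.start, s.stop):
                buffers[dst][k] += buffers[r][k]

    # Phase 2: all-gather. At step t, rank r forwards chunk (r + 1 - t) mod p
    # (the chunk it most recently completed) to its right neighbour.
    for t in range(p - 1):
        for r in range(p):
            dst = (r + 1) % p
            s = sl((r + 1 - t) % p)
            buffers[dst][s] = buffers[r][s]
    return buffers
```

Each rank sends only N/p elements per step, which is why the ring is the classic bandwidth-optimal choice for large messages; tree and butterfly variants trade bandwidth for fewer latency-bound steps on small messages.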
Requirements
- Strong understanding of GPU architectures (CUDA, AMD ROCm) and experience in GPU programming (CUDA, HIP, or similar).
- Proficiency in designing and implementing parallel and distributed algorithms, particularly communication collectives.
- Experience with network interconnects (NVLink, PCIe, InfiniBand, RDMA) and understanding of their performance implications.
- Hands-on experience with communication collectives libraries like UCC, NCCL, or MPI.
- Strong knowledge of concurrency, synchronization, and memory consistency models in multi-threaded and distributed environments.
- Experience with profiling and optimizing low-level performance (memory bandwidth, latency, throughput) on GPU architectures.
- Familiarity with deep learning frameworks (TensorFlow, PyTorch, etc.) and their use of communication collectives.
- Strong problem-solving skills and ability to work in a fast-paced, collaborative environment.
- Network driver experience is a plus.
- Excellent written and verbal communication skills.
- Strong organizational skills; highly self-motivated.
- Ability to work well in a team and be productive under aggressive schedules.
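The latency/bandwidth reasoning behind the profiling and algorithm-selection requirements above is often captured with a simple alpha-beta cost model. The sketch below is a back-of-the-envelope estimate for a ring AllReduce, not a real tuning model: actual NCCL/UCC performance depends on topology, protocol, and chunking, and the parameter names here are illustrative.

```python
def ring_allreduce_time(n_bytes, p, alpha, beta):
    """Estimated ring AllReduce time under the alpha-beta model.

    The ring performs 2(p - 1) steps (reduce-scatter + all-gather),
    each sending n_bytes / p bytes to a neighbour.

    alpha: per-message latency in seconds
    beta:  seconds per byte (i.e. 1 / bandwidth)
    """
    steps = 2 * (p - 1)
    per_step_bytes = n_bytes / p
    return steps * (alpha + per_step_bytes * beta)


def ring_bytes_per_rank(n_bytes, p):
    """Total bytes each rank sends: 2(p - 1)/p * N, the bandwidth-optimal bound."""
    return 2 * (p - 1) / p * n_bytes
```

The model makes the trade-off explicit: the latency term grows linearly with p (2(p - 1) * alpha), so for small messages a tree algorithm with O(log p) steps wins, while for large messages the ring's near-optimal 2(p - 1)/p * N bytes per rank dominates.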
Preferred Qualifications
- Experience with NumPy, PyTorch, TensorFlow or JAX
- Experience with Rust
- Experience with CUDA, OpenCL, OpenGL, or SYCL
- Coursework or experience with Machine Learning algorithms
Education and Experience
- Bachelor’s, Master’s, or PhD in Computer Engineering, Software Engineering, or Computer Science