Inference Software Engineer - Collectives
San Jose
Full Time | Senior-level / Expert | USD 175K - 275K
Etched
Transformers etched into silicon. By burning the transformer architecture into our chips, we're creating the world's most powerful servers for transformer inference.
About Etched
Etched is building AI chips that are hard-coded for individual model architectures. Our first product (Sohu) only supports transformers, but has an order of magnitude more throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep & parallel chain-of-thought reasoning agents.
Job Summary
Etched’s Inference SW team enables optimal mapping of models to Sohu’s dataflow architecture and serving of requests across multiple chips, hosts, and racks. We are seeking a highly skilled and motivated engineer to formalize and optimize our collectives (e.g., Send/Receive, AllReduce, Broadcast). You’ll build software that enables frontier inference performance to satisfy exponentially growing serving demand.
In this role, your core focus will be working across systems and research to realize Mixture-of-Experts (MoE) architectures on Sohu’s system. You will play a key role in scaling out Sohu’s nascent runtime, with a focus on collectives.
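To make “collectives” concrete, here is a minimal sketch in Rust of what such an interface can look like. Everything in it (the Communicator trait, the ReduceOp enum, and the SingleRank type) is a hypothetical illustration for this posting, not Etched’s actual runtime API.

```rust
// Illustrative only: a hypothetical collectives interface and a trivial
// single-rank implementation. None of these names come from Etched's codebase.

/// Element-wise reduction applied during an all-reduce.
#[derive(Clone, Copy, Debug)]
pub enum ReduceOp {
    Sum,
    Max,
}

/// A communicator spanning a group of devices (hypothetical).
pub trait Communicator {
    /// Rank of this device within the group.
    fn rank(&self) -> usize;
    /// Number of devices in the group.
    fn world_size(&self) -> usize;
    /// Point-to-point transfer to a peer rank.
    fn send(&self, buf: &[f32], peer: usize);
    /// Point-to-point transfer from a peer rank.
    fn recv(&self, buf: &mut [f32], peer: usize);
    /// Every rank ends up with the element-wise reduction of all ranks' buffers.
    fn all_reduce(&self, buf: &mut [f32], op: ReduceOp);
    /// Copy `root`'s buffer to every other rank.
    fn broadcast(&self, buf: &mut [f32], root: usize);
}

/// Degenerate single-rank communicator: every collective is a no-op.
/// Useful as a correctness baseline when running model code on one device.
pub struct SingleRank;

impl Communicator for SingleRank {
    fn rank(&self) -> usize { 0 }
    fn world_size(&self) -> usize { 1 }
    fn send(&self, _buf: &[f32], _peer: usize) {}
    fn recv(&self, _buf: &mut [f32], _peer: usize) {}
    fn all_reduce(&self, _buf: &mut [f32], _op: ReduceOp) {}
    fn broadcast(&self, _buf: &mut [f32], _root: usize) {}
}

fn main() {
    let comm = SingleRank;
    let mut partial = vec![1.0_f32, 2.0, 3.0];
    // With one rank, the "sum across ranks" is just the local buffer.
    comm.all_reduce(&mut partial, ReduceOp::Sum);
    println!("rank {} of {}: {:?}", comm.rank(), comm.world_size(), partial);
}
```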
Key responsibilities
Formalize and optimize our collectives (e.g., Send/Receive, AllReduce, Broadcast)
Collaborate across systems and research teams to bring MoE architectures to Sohu’s runtime
Optimize expert routing and communication layers using Sohu’s collectives (see the illustrative routing sketch after this list)
Contribute to scaling and enhancing Sohu’s runtime, including multi-node inference, intra-node execution, state management, and robust error handling
Develop tools for performance profiling and debugging, identifying bottlenecks and correctness issues
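As a toy illustration of the expert-routing responsibility above (an assumption-laden sketch, not Sohu’s runtime), the snippet below picks the top-k experts for a single token from its gate logits and normalizes their combination weights; top_k_experts and the sample values are hypothetical.

```rust
// Toy top-k expert routing for one token: not Sohu's runtime, just the concept.

/// Return the k highest-scoring experts and their softmax-normalized weights.
fn top_k_experts(gate_logits: &[f32], k: usize) -> Vec<(usize, f32)> {
    // Pair each expert index with its logit and sort descending by logit.
    let mut indexed: Vec<(usize, f32)> = gate_logits.iter().copied().enumerate().collect();
    indexed.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    indexed.truncate(k);

    // Softmax over the selected logits gives the combination weights.
    let max_logit = indexed.iter().map(|&(_, l)| l).fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = indexed.iter().map(|&(_, l)| (l - max_logit).exp()).collect();
    let sum: f32 = exps.iter().sum();

    indexed
        .iter()
        .zip(exps.iter())
        .map(|(&(idx, _), &e)| (idx, e / sum))
        .collect()
}

fn main() {
    // One token's gate logits over four experts (made-up numbers).
    let logits = [0.1_f32, 2.0, -1.0, 0.7];
    for (expert, weight) in top_k_experts(&logits, 2) {
        println!("route token to expert {expert} with weight {weight:.3}");
    }
}
```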
You may be a good fit if you have
Strong proficiency in Rust and/or C++; familiarity with PyTorch and/or JAX
Experience designing or optimizing collectives (e.g., NCCL, MPI, or XLA collectives)
Strong systems knowledge, including Linux internals, accelerator architectures (e.g., GPUs, TPUs), high-speed interconnects (e.g., NVLink, InfiniBand), and RDMA
Solid understanding of distributed systems concepts, algorithms, and challenges, including consensus protocols, consistency models, and communication patterns
Experience analyzing performance traces and logs from distributed systems and ML workloads
A knack for designing user-facing interfaces and libraries, and an eye for the elusive optimum between performance and usability
Strong candidates may also have experience with
Large language model architectures, particularly Mixture-of-Experts (MoE)
Network simulation techniques
Developing low-latency, high-performance applications using both kernel-level and user-space networking stacks
Porting applications to non-standard or accelerator hardware platforms
Contributing to runtime systems with complex, well-documented interfaces, such as distributed storage systems or machine learning runtimes
Building applications with extensive SIMD (Single Instruction, Multiple Data) optimizations for performance-critical paths
Benefits
Full medical, dental, and vision packages, with generous premium coverage
Housing subsidy of $2,000/month for those living within walking distance of the office
Daily lunch and dinner in our office
Relocation support for those moving to West San Jose
How we’re different
Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.
We are a fully in-person team in West San Jose, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.
Tags: Architecture Dataflow Distributed Systems Engineering InfiniBand JAX Linux LLMs Machine Learning NVLink PyTorch Research Rust SIMD Transformers
Perks/benefits: Career development Health care Relocation support