Pod Software Engineer

Cupertino

Etched

Transformers etched into silicon. By burning the transformer architecture into our chips, we're creating the world's most powerful servers for transformer inference.


Job Summary:

We are seeking highly motivated and skilled Pod Software Engineers to join our System Software team. This team plays a critical role in developing, qualifying, and optimizing high-performance networking solutions for large-scale inference workloads. As a Pod Software Engineer, you will focus on developing and qualifying software that drives communication amongst Sohu inference nodes in multi-rack inference clusters. You will collaborate closely with kernel, platform, and telemetry teams to push the boundaries of peer-to-peer RDMA efficiency.

Key Responsibilities:

  • High-Performance Peer-to-Peer Networking: Design, develop, and implement RDMA-based network peering that supports high-bandwidth, low-latency communication across PCIe nodes within and across racks. This work spans the operating system, kernel drivers, embedded software, and system software.

  • Test Development: Develop tests that qualify host processors (x86), NICs, ToR switches, and device network interfaces for high performance.

  • Burn-in Integration: Furnish burn-in teams with tests that cover both real-world device-to-device networking workloads and extreme-load stress testing.

  • Performance/Health Telemetry Design: Define the key metrics that system software must collect to maintain high availability and performance under extreme communication workloads (a counter-sampling sketch follows this list).
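
As a rough illustration of the telemetry item above, here is a minimal sketch that samples per-port hardware counters exposed by Linux RDMA drivers under sysfs. The device name (mlx5_0), port number, and counter names are assumptions; layouts and units vary by NIC, so treat this as a sampling primitive, not a production collector.

    #include <stdio.h>

    /* Read one per-port counter from sysfs. On typical IB HCAs,
     * port_xmit_data / port_rcv_data count 4-byte words, so scale by 4
     * when converting samples to bytes or bandwidth. */
    static long long read_counter(const char *dev, int port, const char *name)
    {
        char path[256];
        long long val = -1;
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/class/infiniband/%s/ports/%d/counters/%s",
                 dev, port, name);
        f = fopen(path, "r");
        if (!f)
            return -1;
        if (fscanf(f, "%lld", &val) != 1)
            val = -1;
        fclose(f);
        return val;
    }

    int main(void)
    {
        const char *dev = "mlx5_0";  /* hypothetical device name */

        printf("tx 4B words: %lld\n", read_counter(dev, 1, "port_xmit_data"));
        printf("rx 4B words: %lld\n", read_counter(dev, 1, "port_rcv_data"));
        return 0;
    }

Differencing two samples taken an interval apart yields bandwidth; many NICs expose additional congestion-related counters in a sibling directory under the same port path.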

Representative Projects:

  • Analyze performance deviations, optimize network stack configurations, and propose kernel tuning parameters for low-latency, high-bandwidth inference workloads.

  • Design and execute automated qualification tests for RDMA NICs and interconnects across various server configurations.

  • Identify and root-cause firmware, driver, and hardware issues that impact RDMA performance and reliability.

  • Collaborate with ODMs and silicon vendors to validate new RDMA features and enhancements.

  • Implement and validate peer RDMA support for GPU-to-GPU and accelerator-to-accelerator communication.

  • Modify kernel drivers and user-space libraries to optimize direct memory access between inference pods.

  • Profile and benchmark inter-node RDMA latency and bandwidth to improve inference job scaling (see the latency sketch after this list).

  • Optimize NIC and switch configurations to balance throughput, congestion control, and reliability.
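
For the profiling item above, a minimal latency micro-benchmark times a signaled RDMA write from post to completion. The sketch below assumes a queue pair already connected to its peer; the out-of-band exchange of QP number, GID, rkey, and remote address is omitted (a harness such as perftest's ib_write_lat handles this), so every parameter is a stand-in from that omitted setup.

    #include <stdint.h>
    #include <time.h>
    #include <infiniband/verbs.h>

    /* Mean post-to-completion latency (microseconds) over `iters` signaled
     * RDMA writes. qp/cq/mr and the remote address/rkey are assumed to come
     * from connection setup that is not shown here. */
    double rdma_write_mean_latency_us(struct ibv_qp *qp, struct ibv_cq *cq,
                                      struct ibv_mr *mr, void *local_buf,
                                      uint32_t len, uint64_t remote_addr,
                                      uint32_t rkey, int iters)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        for (int i = 0; i < iters; i++) {
            struct ibv_sge sge = {
                .addr = (uintptr_t)local_buf, .length = len, .lkey = mr->lkey,
            };
            struct ibv_send_wr wr = {
                .wr_id = (uint64_t)i,
                .sg_list = &sge,
                .num_sge = 1,
                .opcode = IBV_WR_RDMA_WRITE,
                .send_flags = IBV_SEND_SIGNALED,  /* one completion per write */
            };
            struct ibv_send_wr *bad = NULL;
            struct ibv_wc wc;
            int n;

            wr.wr.rdma.remote_addr = remote_addr;
            wr.wr.rdma.rkey = rkey;
            if (ibv_post_send(qp, &wr, &bad))
                return -1.0;

            /* Busy-poll the completion queue for this write. */
            do { n = ibv_poll_cq(cq, 1, &wc); } while (n == 0);
            if (n < 0 || wc.status != IBV_WC_SUCCESS)
                return -1.0;
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        return ((t1.tv_sec - t0.tv_sec) * 1e6 +
                (t1.tv_nsec - t0.tv_nsec) / 1e3) / iters;
    }

Signaling every write keeps the measurement simple; production paths usually signal only every Nth work request and batch completions to cut CQ overhead.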

Must-Have Skills and Experience:

  • Proficiency in C/C++.

  • Proficiency in at least one scripting language (e.g., Python, Bash, Go).

  • Strong experience with device-to-device networking technologies (RDMA, GPUDirect, etc.), including RoCE.

  • Experience with zero-copy networking, RDMA verbs, and memory registration.

  • Familiarity with queue pairs, completion queues, and transport types (see the verbs sketch after this list).

  • Strong understanding of operating systems (Linux preferred) and server hardware architectures.

  • Ability to analyze complex technical problems and provide effective solutions.

  • Excellent communication and collaboration skills.   

  • Ability to work independently and as part of a team.

  • Experience with version control systems (e.g., Git).   

  • Experience with reading and interpreting hardware logs.
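
To make the verbs items above concrete, the sketch below walks the basic libibverbs resource chain: open a device, allocate a protection domain, register a buffer for zero-copy DMA, and create a completion queue plus a reliable-connected queue pair. Queue depths and buffer size are illustrative choices, and most error handling and the handshake that connects the QP to a remote peer are omitted.

    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0)
            return 1;

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a buffer so the NIC can DMA directly into and out of it
         * (zero-copy); REMOTE_WRITE also permits one-sided writes from the
         * peer. */
        size_t len = 1 << 20;  /* 1 MiB, illustrative */
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* One completion queue shared by send and receive completions. */
        struct ibv_cq *cq = ibv_create_cq(ctx, 256, NULL, NULL, 0);

        /* Reliable-connected QP: the usual transport for one-sided RDMA. */
        struct ibv_qp_init_attr attr = {
            .send_cq = cq,
            .recv_cq = cq,
            .qp_type = IBV_QPT_RC,
            .cap = { .max_send_wr = 128, .max_recv_wr = 128,
                     .max_send_sge = 1,  .max_recv_sge = 1 },
        };
        struct ibv_qp *qp = ibv_create_qp(pd, &attr);

        printf("qp 0x%x awaiting handshake (lkey=0x%x rkey=0x%x)\n",
               qp->qp_num, mr->lkey, mr->rkey);

        /* Teardown in reverse order of creation. */
        ibv_destroy_qp(qp);
        ibv_destroy_cq(cq);
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

A real peering layer would then drive the QP through the INIT, RTR, and RTS states with ibv_modify_qp once the peer's attributes arrive.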

Nice-to-Have Skills and Experience:

  • Experience with networking technologies such as NVLink, InfiniBand, and ML pod interconnects.

  • Experience with widely deployed top-of-rack switches (Cisco, Juniper, Arista, etc.).

  • Knowledge of server virtualization.

  • Experience with tracing tools like perf, eBPF, ftrace, etc.

  • Experience with performance testing and benchmarking tools (gprof, VTune, Wireshark, etc.).

  • Familiarity with hardware diagnostic tools and techniques.

  • Experience with containerization technologies (e.g., Docker, Kubernetes).

  • Experience with CI/CD pipelines.

  • Experience with Rust.

Ideal Background:

  • Candidates who have worked on GPU or TPU pods, specifically in the networking domain.

  • Candidates who understand the uptime challenges of very large ML deployments.

  • Candidates who have actively debugged complex network topologies, specifically cases of node dropouts/failures, route-arounds, and pod-level resiliency.

  • Candidates who understand the performance implications of pod networking software.

Benefits:

  • Full medical, dental, and vision packages, with 100% of premiums covered

  • Housing subsidy of $2,000/month for those living within walking distance of the office

  • Daily lunch and dinner in our office

  • Relocation support for those moving to West San Jose

How We're Different:

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in West San Jose, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.
