Head of Systems & Performance Verification
San Jose
Etched
Transformers etched into silicon. By burning the transformer architecture into our chips, we're creating the world's most powerful servers for transformer inference.
About Etched
Etched is building the world’s first AI inference chip purpose-built for transformers, delivering over 10x the performance of NVIDIA GPUs. But that’s just the beginning. Our broader vision is to completely rethink the chip development lifecycle for a post-Moore world—enabling faster, more efficient custom silicon development than ever before. Backed by hundreds of millions from top investors, our team includes industry legends like Brian Loiler (who built products driving 80% of NVIDIA’s revenue), David Munday (who built Google’s TPU v1–v5 software and firmware stack), Mark Ross (former Cypress CTO), and Ajat Hukkoo (renowned Broadcom and Intel design exec). Etched is redefining the infrastructure layer for the fastest growing industry in history.
About the Role
We’re looking for a Head of Systems & Performance Verification to lead the front-end digital verification of one of the most complex AI SoCs ever built. You’ll manage a team focused on pre-silicon functional verification using UVM and other constrained-random methodologies, ensuring complete coverage of system-level interactions, performance features, and inter-chiplet behavior. This is a leadership role focused on building infrastructure, methodology, and execution to catch complex bugs at the RTL level—before they reach silicon. You’ll work cross-functionally with RTL, architecture, firmware, and silicon validation teams to ensure robust first-pass silicon.
Key responsibilities
Lead the development and execution of the SoC-level design verification strategy, with a focus on high-performance AI architectures
Build and manage a team of DV engineers responsible for UVM-based testbenches, stimulus generation, checkers, and coverage closure
Develop detailed test plans covering functional correctness, system-level scenarios, and performance-critical paths
Partner with RTL and architecture teams to review specs, identify corner cases, and define verification targets
Debug and root-cause complex failures across RTL, firmware, and runtime environments
Drive the development of reusable, scalable verification infrastructure, including simulation, emulation, and hybrid environments
Track verification progress against coverage, bug discovery, and milestone criteria
Establish and refine best practices for IP- and SoC-level verification, simulation bring-up, and testbench reuse
Contribute to post-silicon debug and performance correlation as needed
You may be a good fit if you have
12+ years of experience in ASIC or SoC design verification, including 5+ years in leadership roles
Deep expertise in UVM and SystemVerilog, with a strong understanding of coverage-driven verification and testbench architecture
Proven track record of first-pass silicon success on complex SoCs
Familiarity with modern compute architectures and interconnect protocols (e.g., PCIe, CXL, AXI, NoC)
Experience debugging system-level bugs that span hardware and firmware
Strong understanding of performance-critical validation, including interactions between compute, memory, and interconnect subsystems
Excellent communication and cross-functional collaboration skills
Strong candidates may also have
Experience with AI/ML accelerators or GPU-like architectures
Exposure to emulation platforms, waveform analysis, and hybrid verification environments
Background in architectural modeling or performance simulation
Scripting experience in Python, Tcl, or shell to support automation and test generation
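To give a flavor of the scripting mentioned above, here is a minimal, hypothetical sketch of constrained-random stimulus generation in Python. It is illustrative only (the transaction fields, alignment, and legal burst lengths are invented for the example, not Etched's actual flow); in practice this kind of constraint solving lives in UVM/SystemVerilog testbenches, with Python typically driving automation around them.

```python
import random

def generate_stimulus(num_txns, seed=0):
    """Generate constrained-random bus transactions (illustrative sketch).

    Hypothetical constraints: 32-bit address space, 64-byte-aligned
    addresses, and burst lengths restricted to a small legal set.
    """
    rng = random.Random(seed)  # seeded for reproducible regressions
    txns = []
    for _ in range(num_txns):
        txns.append({
            "addr": rng.randrange(0, 1 << 32, 64),   # 64-byte aligned
            "burst_len": rng.choice([1, 2, 4, 8]),   # legal bursts only
            "write": rng.random() < 0.5,             # ~50/50 read/write mix
        })
    return txns
```

A seeded generator like this makes failures reproducible: rerunning with the same seed replays the exact stimulus that exposed a bug.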
Benefits
Full medical, dental, and vision packages, with 100% of premium covered
Housing subsidy of $2,000/month for those living within walking distance of the office
Daily lunch and dinner in our office
Relocation support for those moving to Cupertino
How we’re different
Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.
We are a fully in-person team in Cupertino, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.