Head of Inference Kernels

Etched
San Jose
$200k – $300k
Posted 27 March 2026

Job Description

About Etched

Etched is building the world's first AI inference system purpose-built for transformers, delivering over 10x higher performance and dramatically lower cost and latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep and parallel chain-of-thought reasoning agents. Backed by hundreds of millions from top-tier investors and staffed by leading engineers, Etched is redefining the infrastructure layer for the fastest-growing industry in history.

Job Summary

As a core member of the team, you will lead a high-performing team that builds a suite of optimized kernels and implements highly optimized inference stacks for a variety of state-of-the-art transformer models (e.g., Llama-3, Llama-4, DeepSeek-R1, Qwen-3, Stable Diffusion 3). You will be responsible for managing and scaling a high-performance team to pioneer novel model mapping strategies, while co-designing inference-time algorithms (e.g., speculative and parallel decoding, prefill-decode disaggregation).

Key responsibilities

- Architect best-in-class inference performance on Sohu: deliver continuous batching throughput exceeding a B200 by ≥10x on priority workloads.
- Develop best-in-class inference mega kernels: build complex, fused kernels (including basics like reordering and fusing, but also more advanced work such as simultaneously computing and transmitting intermediate values for sequential matmuls) that increase chip utilization and reduce inference latency, and validate these optimizations through benchmarking and regression testing in production pipelines.
- Architect model mapping strategies: develop system-level optimizations using a mix of techniques such as tensor parallelism and expert parallelism for optimal performance.
- Hardware-software co-design of inference-time algorithmic innovation: develop and deploy production-ready inference-time algorithmic improvements (e.g., speculative decoding, prefill-decode disaggregation, KV cache offloading).
- Build a scalable team and roadmap: grow and retain a team of high-performing inference optimization engineers.
- Cross-functional performance alignment: ensure the inference stack and performance goals are aligned with the software infrastructure teams (e.g., runtime and scheduling support), GTM (e.g., latency SLAs, workload targets), and hardware teams (e.g., instruction design, memory bandwidth) for future generations of our hardware.

Representative projects

- Develop optimized kernels for multi-head latent attention on Sohu.
- Develop strategies to optimally overlap compute and communication in mixture-of-experts layers.
- Organize the team to deliver production-ready forward-pass implementations of new state-of-the-art models within two weeks of their release, and build infrastructure to bring this under one week in the future.

You may be a good fit if you have

- Experience designing and optimizing GPU kernels for deep learning using CUDA and assembly (ASM), with low-level programming experience maximizing performance for AI operations, leveraging tools like Composable Kernel (CK), CUTLASS, and Triton for multi-GPU and multi-platform performance.
- Deep fluency with transformer inference architecture, optimization levers, and full-stack systems (e.g., vLLM, custom runtimes), and a history of delivering tangible performance wins on GPU hardware or custom AI accelerators.
- A solid understanding of roofline models of compute throughput, memory bandwidth, and interconnect performance.
- Experience running large-scale workloads on heterogeneous compute clusters, optimizing for the efficiency and scalability of AI workloads.
- Scopes projects crisply, sets aggressive but realistic milestones, and drives technical decision-making across the team. Anticipates blockers and shifts resources proactively. Strong can ... (truncated, view full listing at source)
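The roofline fluency the listing asks for can be illustrated with a minimal sketch: given a kernel's arithmetic intensity, a roofline model predicts whether it is compute-bound or memory-bound. The hardware numbers below are illustrative placeholders, not Sohu or B200 specifications.

```python
def roofline_attainable_flops(peak_flops, mem_bw, arithmetic_intensity):
    """Attainable FLOP/s = min(peak compute, memory bandwidth x intensity)."""
    return min(peak_flops, mem_bw * arithmetic_intensity)

# Example: a square fp16 GEMM of shape (M, N, K), assuming ideal reuse
# (each operand and the output cross the memory bus exactly once).
M, N, K = 4096, 4096, 4096
flops = 2 * M * N * K                      # one multiply + one add per MAC
bytes_moved = 2 * (M * K + K * N + M * N)  # 2 bytes/element in fp16
intensity = flops / bytes_moved            # FLOPs per byte of traffic

PEAK_FLOPS = 1e15  # 1 PFLOP/s, placeholder accelerator peak
MEM_BW = 3e12      # 3 TB/s, placeholder memory bandwidth

attainable = roofline_attainable_flops(PEAK_FLOPS, MEM_BW, intensity)
bound = "compute-bound" if attainable == PEAK_FLOPS else "memory-bound"
print(f"intensity = {intensity:.1f} FLOP/byte -> {bound}")
```

At this size the GEMM lands well to the right of the machine-balance point (peak / bandwidth ≈ 333 FLOP/byte here), so it is compute-bound; a decode-phase GEMV with batch size 1 would have an intensity near 1 FLOP/byte and sit firmly on the memory roof, which is why kernel fusion and batching matter so much for inference.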