Kernel Engineer

Cerebras Systems
Sunnyvale, CA or Toronto, Canada
Posted 1 March 2026

Job Description

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.

Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras (https://openai.com/index/cerebras-partnership/) to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.

Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

About The Role

As a Kernel Engineer on our team, you will work at the intersection of hardware and software, developing high-performance software for cutting-edge AI and HPC workloads. Your focus will be on implementing, optimizing, and scaling deep learning operations to fully leverage our custom, massively parallel processor architecture.

You will be part of a world-class team responsible for the design, performance tuning, and validation of foundational ML and HPC kernels. This includes building a library of parallel and distributed algorithms that maximize compute utilization and push the boundaries of training efficiency for state-of-the-art AI models. Your work will be critical to unlocking the full potential of our hardware and accelerating the pace of AI innovation.
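As a loose illustration of the kind of kernel decomposition this role involves, the sketch below models, in plain NumPy rather than Cerebras CSL or the SDK, how a matrix multiply might be partitioned into output tiles across a 2D grid of processing elements. The grid dimensions, problem sizes, and function names are hypothetical and for illustration only.

```python
# Illustrative sketch only: a NumPy toy model of tiling C = A @ B across a
# 2D grid of processing elements (PEs). This is NOT Cerebras CSL or SDK code;
# the grid size, tile shapes, and names are hypothetical.
import numpy as np

GRID_ROWS, GRID_COLS = 4, 4      # hypothetical PE grid
M, K, N = 64, 32, 64             # problem sizes chosen to divide evenly

def tiled_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Compute C = A @ B by assigning one output tile to each 'PE'."""
    tm, tn = M // GRID_ROWS, N // GRID_COLS
    C = np.zeros((M, N), dtype=A.dtype)
    for pr in range(GRID_ROWS):          # each (pr, pc) pair stands in for one PE
        for pc in range(GRID_COLS):
            rows = slice(pr * tm, (pr + 1) * tm)
            cols = slice(pc * tn, (pc + 1) * tn)
            # On real hardware each PE would hold its row panel of A and
            # column panel of B locally and stream partial sums over the
            # fabric; here we simply compute the tile directly.
            C[rows, cols] = A[rows, :] @ B[:, cols]
    return C

A = np.random.rand(M, K).astype(np.float32)
B = np.random.rand(K, N).astype(np.float32)
assert np.allclose(tiled_matmul(A, B), A @ B, atol=1e-4)
```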
Responsibilities

- Develop design specifications for new machine learning and linear algebra kernels and map them to the Cerebras WSE system using a variety of parallel programming algorithms.
- Develop and debug high-performance kernel routines in low-level assembly and a custom C-like domain-specific language (CSL), implementing algorithms optimized for the Cerebras hardware system.
- Use mathematical models and analysis to measure software performance and inform design decisions.
- Develop and integrate unit and system testing methodologies to verify the correct functionality and performance of kernel libraries.
- Study emerging trends in machine learning applications and help evolve the kernel library architecture to address the computational challenges of state-of-the-art neural networks.
- Interact with chip and system architects to optimize the instruction sets, microarchitecture, and I/O of next-generation systems.

... (truncated, view full listing at source)