Principal Engineer, AI Inference Reliability
Cerebras Systems · Remote; Sunnyvale, CA or Toronto, Canada · Posted 1 March 2026
Job Description
<div class="content-intro"><p>Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.</p>
<p>Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. <a href="https://openai.com/index/cerebras-partnership/">OpenAI recently announced a multi-year partnership with Cerebras</a> to deploy 750 megawatts of capacity, transforming key workloads with ultra-high-speed inference.</p>
<p>Thanks to this groundbreaking wafer-scale architecture, Cerebras Inference is the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.</p></div><p>We launched Cerebras Inference in late 2024, and since launch we have scaled to meet surging demand from AI labs, enterprises, and a thriving developer community.</p>
<p>In October 2025, we announced our Series G funding, raising $1.1 billion USD to accelerate the expansion of our products and services to meet global AI demand.</p>
<p><strong>About the team</strong></p>
<p>The Cerebras Inference team’s mission is to deliver the world’s most performant, secure, and reliable enterprise-grade AI service. We build and operate large-scale distributed systems that power AI inference at unprecedented speed and efficiency. Join us to help scale inference and accelerate AI.</p>
<p><strong>About the role</strong></p>
<p>We’re looking for a hands-on Reliability Tech Lead (IC) to own the mission of making Cerebras Inference the most reliable AI service in the world. You will drive reliability strategy and execution across our inference stack, from client SDKs and public-cloud multi-region deployments to wafer-scale systems in specialized data centers.</p>
<p>In this role, you will define SLOs and incident-response frameworks, design and implement reliability mechanisms at scale, and partner with hundreds of engineers to ensure our service meets world-class reliability standards.</p>
<p>If you are passionate about building and operating massive-scale, low-latency, high-reliability distributed systems, we want to hear from you.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{"335552541":1,"335559683":0,"335559684":-2,"335559685":720,"335559991":360,"469769226":"Symbol","469769242":[8226],"469777803":"left","469777804":"","469777815":"hybridMultilevel"}" data-aria-posinset="1" data-aria-level="1"><span data-contr ... (truncated, view full listing at source)