Deployment Engineer, AI Inference

Cerebras Systems
Sunnyvale, CA or Toronto, Canada
Posted 1 March 2026

Job Description

<div class="content-intro"><p><span data-contrast="none">Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs. </span><span data-ccp-props="{"134233117":false,"134233118":false,"201341983":0,"335559685":0,"335559737":240,"335559738":240,"335559739":240,"335559740":279}"> </span></p> <p>Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. <a href="https://openai.com/index/cerebras-partnership/">OpenAI recently announced a multi-year partnership with Cerebras</a>, to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference. </p> <p>Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.</p></div><p><strong><span data-contrast="none">About The Role</span></strong><span data-ccp-props="{"134233117":false,"134233118":false,"335551550":0,"335551620":0,"335557856":16777215,"335559738":240,"335559739":240}"> </span></p> <p><span data-contrast="none">We are seeking a highly skilled Deployment Engineer to build and operate our cutting-edge inference clusters. These clusters would provide the candidate an opportunity to work with the world's largest computer chip, the Wafer-Scale Engine (WSE), and the systems that harness its unparalleled power. </span><span data-ccp-props="{"134233117":false,"134233118":false,"335551550":0,"335551620":0,"335557856":16777215,"335559738":240,"335559739":240}"> </span></p> <p><span data-contrast="none">You will play a critical role in ensuring reliable, efficient, and scalable deployment of AI inference workloads across our global infrastructure. On the operational side, you’ll own the rollout of the new software versions and AI replica updates, along the capacity reallocations across our custom-built, high-capacity datacenters.</span> <br> <br><span data-contrast="none">Beyond operations, you’ll drive improvements to our telemetry, observability and the fully automated pipeline. This role involves working with advanced allocation strategies to maximize utilization of large-scale computer fleets. 
</span><span data-ccp-props="{"134233117":false,"134233118":false,"335551550":0,"335551620":0,"335557856":16777215,"335559738":240,"335559739":240}"> </span></p> <p><span data-contrast="none">The ideal candidate combines hands-on operation rigor with strong systems engineering skills and thrives on building resilient pipelines that keep pace with cutting-edge AI models.</span><span data-ccp-props="{"134233117":false,"134233118":false,"335551550":0,"335551620":0,"335557856":16777215,"335559738":240,"335559739":240}"> </span></p> <p><strong><span data-contrast="none">This role does not require 24/7 hour on-call rotations.</span></strong><span data-ccp-props="{"134233117":false,"134233118":false,"335551550":0,"335551620":0,"335557856":16777215,"335559738":240,"335559739":240}"> </span></p> <p> <br> <strong><span data-contrast="none"><span data-ccp-parastyle="heading 4">Responsibilities</span></span></strong><span data-ccp-props="{"134233117":false,"134233118":false,"134245418":true,"134245529":true,"335551550":0,"335551620":0,"335557856":16777215,"335559738":319,"335559739":319}"> </span></p> <ul> <li data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{"33555254 ... (truncated, view full listing at source)