Sr. Deployment Engineer, AI Inference
Cerebras Systems · Remote Office; Sunnyvale, CA; Toronto, Ontario, Canada · Posted 1 March 2026
Job Description
<div class="content-intro"><p><span data-contrast="none">Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.</span></p>
<p>Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. <a href="https://openai.com/index/cerebras-partnership/">OpenAI recently announced a multi-year partnership with Cerebras</a> to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.</p>
<p>Thanks to its groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.</p></div><p><strong><span data-contrast="none">About Us</span></strong></p>
<p><span data-contrast="none">Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In 2024, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.</span></p>
<p><strong><span data-contrast="none">About The Role</span></strong></p>
<p><span data-contrast="none">We are seeking a highly skilled and experienced Sr. Deployment Engineer to build and operate our cutting-edge inference clusters. In this role, you will work with the world's largest computer chip, the Wafer-Scale Engine (WSE), and the systems that harness its unparalleled power.</span></p>
<p><span data-contrast="none">You will play a critical role in ensuring the reliable, efficient, and scalable deployment of AI inference workloads across our global infrastructure. On the operational side, you'll own the rollout of new software versions and AI replica updates, along with capacity reallocations across our custom-built, high-capacity datacenters.</span> <br> <br><span data-contrast="none">Beyond operations, yo ... (truncated, view full listing at source)