AI Models, Product Manager

Cerebras Systems
Sunnyvale, CA
Posted 1 March 2026

Job Description

<div class="content-intro"><p><span data-contrast="none">Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to run large-scale ML applications effortlessly, without the hassle of managing hundreds of GPUs or TPUs.</span></p> <p>Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. <a href="https://openai.com/index/cerebras-partnership/">OpenAI recently announced a multi-year partnership with Cerebras</a> to deploy 750 megawatts of compute, transforming key workloads with ultra-high-speed inference.</p> <p>Thanks to its groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, more than 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.</p></div><h4 id="Own-the-Future-of-AI-Inference" data-local-id="27e2ad7d-fed5-4399-8636-9d7785e16b0a" data-renderer-start-pos="568">Own the Future of AI Inference</h4> <p data-renderer-start-pos="600" data-local-id="d08d13f3-579c-4c92-a359-e166de311ab9">Cerebras powers the world's fastest AI inference.
As the Product Manager for AI Models, you'll lead the strategic model portfolio that defines our product: deciding which models ship, how they perform, and how the world discovers them.</p> <p data-renderer-start-pos="838" data-local-id="4c18318d-cea4-42d1-83d7-1ecaf41c2c13">You'll partner directly with leading AI labs, drive launches that shape the industry, and ensure every model on our platform delivers exceptional quality at unprecedented speed.</p> <h4 id="What-You'll-Own" data-local-id="eeeca7a0-9050-41be-8319-446a655c863f" data-renderer-start-pos="1017">What You'll Own</h4> <h5 data-renderer-start-pos="1034" data-local-id="17edc850-f094-4f73-bac4-dfeb01e88274"><strong data-renderer-mark="true">Strategic Model Portfolio</strong></h5> <ul> <li data-renderer-start-pos="1063" data-local-id="d0add9b4-eed3-4ebd-b8ed-a5ec55d2771d">Own the model roadmap: decide which frontier and open-source models we support based on market demand, research trends, and strategic fit</li> <li>Establish partnerships with top model labs for day-0 launches</li> <li data-renderer-start-pos="1337" data-local-id="95183413-f570-4c16-aa98-01fe08d20b3e">Build relationships with open-source maintainers to accelerate community model adoption</li> </ul> <h5 data-renderer-start-pos="1428" data-local-id="e826256c-0d1e-4dca-856e-c215a14811f1"><strong data-renderer-mark="true">Product Quality &amp; Customer Success</strong></h5> <ul> <li data-renderer-start-pos="1466" data-local-id="588f966a-b91b-47e1-ad23-d6dd1ccc7aff">Define and enforce quality standards across our model catalog through systematic evaluation frameworks</li> <li data-renderer-start-pos="1572" data-local-id="4eac9d2c-357c-4600-b3a5-f2010e31a38b">Design benchmarks and evaluations that prove our models deliver production-grade performance</li> <li data-renderer-start-pos="1668" data-local-id="8499755a-f89b-4bdd-837c-b4cfe9529509">Own the feedback loop: gather customer insights, identify model weaknesses, and drive improvements with
engineering</li> <li data-renderer-start-pos="1787" data-local-id="6d6c027f-6fa9-4566-bcba-d211a4c7e48c">Enable strategic customers to integrate our inference into their products—removing blockers and optimizing for their specific use cases</li> </ul> <h5 data-re ... (truncated, view full listing at source)