Network Architect

Cerebras Systems
Sunnyvale, CA
Posted 1 March 2026

Job Description

<div class="content-intro"><p>Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.</p> <p>Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. <a href="https://openai.com/index/cerebras-partnership/">OpenAI recently announced a multi-year partnership with Cerebras</a> to deploy 750 megawatts of scale, transforming key workloads with ultra-high-speed inference.</p> <p>Thanks to this groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.</p></div><h4>About The Role</h4> <p>As a Network Architect on the Cluster Architecture Team, you will work closely with vendors, internal networking teams, and industry peers to develop best-in-class interconnect architectures for current and future generations of Cerebras AI clusters. You will be responsible for developing proofs of concept for new network designs and features that enable resilient, reliable networks for AI workloads.
The role will require cross-functional collaboration and interaction with diverse hardware components (e.g., network devices and the Wafer-Scale Engine) as well as software at several layers of the stack, from host-side networking to cluster-level coordination. The role also requires an understanding of network monitoring systems and network debugging methodologies.</p> <h4>Responsibilities</h4> <ul> <li>Design AI/ML and HPC clusters.</li> <li>Identify and address performance and efficiency bottlenecks, ensuring high resource utilization, low latency, and high-throughput communication.</li> <li>Drive technical projects that bring together multiple teams and diverse software and hardware components to realize advanced networking technologies.</li> <li>Communicate effectively with technical stakeholders across teams.</li> <li>Collaborate with vendors and industry peers to drive the network hardware and feature roadmap.</li> <li>Represent Cerebras in industry forums.</li> <li>Serve as the central point of contact for network reliability issues.</li> </ul> <h4>Skills &amp; Qualifications</h4> <ul> <li>Ph.D. in Computer Science or Electrical Engineering plus 10 years of industry experience, or a Master's in CS or EE plus 15 years of industry experience.</li> <li>8+ years of experience in large-scale network design for WAN or datacenter environments.</li> <li>Extensive experience debugging networking issues in large distributed-systems environments with multiple networking platforms and protocols.</li> <li>Experience managing and leading multi-phase, multi-team projects.</li> <li>Experience with networking platforms such as Juniper, Arista, Cisco, and open-box architectures (SONiC, FBOSS).</li> <li>Experience with networking protocols and technologies such as RoCE, BGP, DCQCN, PFC, and streaming telemetry.</li> <li>Familiarity with automation languages such as Python or Go.</li> <li>Familiarity with network visibility and management systems.</li> </ul><div class="content-conclusion"><h4><strong>Why Join Cerebras</strong></h4> <p>People who are serious about software make their own hardware.
At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of mod ... (truncated, view full listing at source)