Kernel Optimization Engineer – Dubai
Cerebras Systems · UAE · Posted 1 March 2026
Job Description
<div class="content-intro"><p>Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.</p>
<p>Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. <a href="https://openai.com/index/cerebras-partnership/">OpenAI recently announced a multi-year partnership with Cerebras</a> to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.</p>
<p>Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.</p></div>
<div><strong>About The Role</strong><br>As a Kernel Engineer on our team, you will work at the intersection of hardware and software, developing high-performance software for cutting-edge AI and HPC workloads. Your focus will be on implementing, optimizing, and scaling deep learning operations to fully leverage our custom, massively parallel processor architecture.<br>You will be part of a world-class team responsible for the design, performance tuning, and validation of foundational ML and HPC kernels. This includes building a library of parallel and distributed algorithms that maximize compute utilization and push the boundaries of training efficiency for state-of-the-art AI models. Your work will be critical to unlocking the full potential of our hardware and accelerating the pace of AI innovation.<br><strong>Responsibilities</strong></div>
<ul data-list-tree="true" data-indent="0" data-border="0">
<li>Develop design specifications for new machine learning and linear algebra kernels and map them to the Cerebras WSE system using various parallel programming algorithms.</li>
<li>Develop and debug high-performance kernel routines in low-level assembly and a custom C-like domain-specific language (CSL), implementing algorithms optimized for the Cerebras hardware system.</li>
<li>Use mathematical models and analysis to measure software performance and inform design decisions.</li>
<li>Develop and integrate unit and system testing methodologies to verify correct functionality and performance of kernel libraries.</li>
<li>Study emerging trends in machine learning applications and help evolve the kernel library architecture to address the computational challenges of state-of-the-art neural networks.</li>
<li>Interact with chip and system architects to optimize instruction sets, microarchitecture, and I/O of next-generation systems.</li>
</ul>
<div><strong>Skills And Qualifications</strong></div>
<ul data-list-tree="true" data-indent="0" data-border="0">
<li>Bachelor’s, Master’s, PhD or foreign equivalents in Computer Science, Computer Engineering, Mathematics, or related fields.</li>
<li>Understanding of hardware architecture concepts — must be comfortable learning the details of a new hardware architecture.</li>
<li>Skilled in C++ and Python programming languages.</li>
<li>Good know ... (truncated, view full listing at source)