Job Description
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.
About the Role
CoreWeave runs some of the largest GPU clusters in the world. The AI infrastructure behind those clusters determines how workloads are placed, how resources are shared, and how reliably systems perform under constant pressure.
As a Principal Engineer in AI Infrastructure, you will lead the design and evolution of the cluster orchestration systems that make this possible. This includes Slurm, Kubernetes, SUNK, and the control planes that support AI training, inference, and model onboarding at scale.
You will define long-term architecture, solve hard scaling problems, and set technical direction across teams. Your work will directly affect how quickly customers can run models, how efficiently we use GPUs, and how reliably the platform behaves at scale.
What You’ll Do
Architecture and Technical Direction
Define the long-term architecture for CoreWeave’s orchestration platforms across Kubernetes, Slurm, SUNK, Kueue, and related systems.
Act as a technical authority on scheduling, quota enforcement, fairness, pre-emption, and multi-tenant GPU isolation.
Make design decisions that balance performance, reliability, cost, and operational complexity.
Orchestration Platform Development
Lead the evolution of Kubernetes-native control planes, including SUNK and custom operators.
Design systems that support workload admission, validation, and rollout, including model onboarding flows.
Identify and remove scaling limits across schedulers, control planes, registries, networking, and storage.
Reliability and Operations
Set standards for reliability, observability, and operational readiness across orchestration services.
Define SLOs, alerting, and incident response practices for platform-critical systems.
Ensure systems behave predictably during failures, peak load, and rapid growth.
Hands-on Engineering
Write and review production code for Kubernetes controllers, schedulers, admission logic, and internal tooling.
Measure and improve scheduling latency, container startup time, image distribution, and cold-start performance.
Lead architecture and design reviews across infrastructure teams.
Leadership and Influence
Mentor senior and staff engineers and help grow technical leaders.
Influence platform, infrastructure, security, and product teams through clear technical judgment.
Engage with customers and open-source communities on deep technical topics when needed.
Who You Are
15+ years of experience building and operating large-scale distributed systems.
Deep, practical knowledge of Kubernetes and Slurm internals.
Experience running GPU-heavy platforms for AI training, inference, or HPC workloads.
Strong background in Go and cloud-native systems development.
Proven ability to set technical direction across teams without direct authority.
Comfortable making high-impact technical decisions in complex systems.
Bachelor’s or Master’s degree in a relevant field, or equivalent experience.
Preferred Qualifications
Experience with systems such as Kueue, Kubeflow, Argo Workflows, Ray, Istio, or Knative.
Background in ML platform engineering, model onboarding, or lifecycle management.
Strong understanding of scheduling strategies, pre-emption, quota enforcement, and elastic scaling.
Track record of operating highly reliable systems with clear SLOs and incident processes.
Contributions to Kubernetes, ML infrastructure, or related open-source projects.
Experience mentoring senior engineers.