Technical Product Manager - Cluster Experience

Nebius
Amsterdam, Netherlands; Berlin, Germany; France; Netherlands; Prague, Czech Republic; Remote - Europe
Posted 12 March 2026

Job Description

Why work at Nebius

Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work

Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

The role

We're building the leading platform for large-scale Machine Learning (ML) training and inference, powering workloads from a few nodes to thousands of GPUs. We're looking for a Product Manager who will define how customers experience GPU clusters: their reliability, performance, and overall usability at scale.

As a Product Manager on the Cluster Experience team, you will own foundational tracks that shape how ML teams train and serve models on multi-node distributed systems. Your initial focus will be reliability, performance, and observability for large-scale training and distributed inference. Over time, the role expands into UX, operational tooling, and advanced cluster workflows.

This is a deeply technical PM role, but it does not require prior product management experience: strong candidates with backgrounds in ML infrastructure, distributed systems, SRE, or cloud engineering who want to grow into product are welcome. If you want to influence how state-of-the-art models are trained and deployed at scale, this role is for you.

Your responsibilities will include:

- Own key tracks in Cluster Experience: reliability, performance, and user experience for distributed ML workloads.
- Define product direction from problem discovery → design → delivery → adoption, working closely with engineering and research teams.
- Drive cross-functional execution across compute, networking, storage, observability, and platform teams.
- Perform deep customer research: interviews, analytics, and workload studies to identify bottlenecks across hardware, network, scheduler, and runtime.
- Translate ideas from state-of-the-art ML papers into practical, scalable product features for large GPU clusters.
- Shape how users interact with clusters, from dashboards and notifications to partitioning, node management, and training observability.

We expect you to have:

- 3–5+ years of experience in product management, ML infrastructure/MLOps, distributed systems engineering, or cloud architecture.
- Strong technical foundation in computer science, distributed systems, or ML infrastructure.
- Hands-on familiarity with ML training, ideally using orchestrators such as Slurm, Kubernetes, Ray, or similar systems.
- Proven ability to ship technically complex features with multiple engineering teams.
- Excellent communication skills, with the ability to influence engineering, research, and customer stakeholders.
- Experience with product analytics, data-driven prioritization, and experiment design.
- Strong willingness and ability to learn quickly in a fast-evolving ML and infrastructure environment.

It will be an added bonus if you have:

- Experience working with GPU platforms, InfiniBand/RDMA networking, or HPC systems.
- Understanding of modern ML frameworks (PyTorch, DeepSpeed, FSDP, NCCL, etc.).
- Knowledge of ML training efficiency: goodput, MFU, scheduling, health checks.
- Exposure to LLM training, distributed data/ZeRO/FSDP strategies, or transformer inference.
- Experience in observability, performance tuning, or reliability engineering.
- Customer-facing technical experience (supporting ML or infrastructure workloads).

About Nebius

Nebius AI is an AI cloud platform with one of the largest GPU capacities in Europe. La ...
(truncated, view full listing at source)
Apply Now
