Inference Infrastructure Engineer

Rhoda AI
Palo Alto
Posted 12 May 2026

Job Description

At Rhoda AI, we're building the full-stack foundation for the next generation of humanoid robots, from high-performance, software-defined hardware to the foundation models and video world models that control it. Our robots are designed to be generalists capable of operating in complex, real-world environments and handling scenarios unseen in training. We work at the intersection of large-scale learning, robotics, and systems, with a research team that includes researchers from Stanford, Berkeley, Harvard, and beyond. We're not building a feature; we're building a new computing platform for physical work. With over $400M raised, we're investing aggressively in the R&D, hardware development, and manufacturing scale-up to make that a reality.

We're looking for an Inference Infrastructure Engineer to help build and operate the systems that power our model deployment stack. You'll be responsible for running large foundation models efficiently and reliably across cloud and on-prem environments, with a focus on resource management, scheduling, and infrastructure scalability.

What You'll Do

- Design and operate large-scale infrastructure to run model workloads across cloud and on-prem environments
- Build and maintain Kubernetes-based deployment pipelines for managing distributed ML workloads
- Own resource scheduling and orchestration across GPU clusters, optimizing utilization, workload balancing, and cost-performance tradeoffs (a toy sketch of this placement problem appears after this posting)
- Integrate and manage ML frameworks and model serving systems (e.g., Triton, Ray Serve, TorchServe) across research and production use cases
- Build tooling for model deployment, versioning, and observability to support fast iteration cycles
- Contribute to the reliability and scalability of the infrastructure stack as model complexity and deployment footprint grow

What We're Looking For

- 3+ years of experience in ML infrastructure, MLOps, or distributed systems
- Strong proficiency with Kubernetes and containerized deployment pipelines
- Experience with GPU orchestration and resource scheduling across large distributed jobs
- Experience with cloud providers (e.g., AWS, GCP) and hybrid cloud/on-prem infrastructure
- Familiarity with ML frameworks (e.g., PyTorch, JAX) and model serving tools (e.g., Triton, Ray Serve, TorchServe)
- Strong debugging instincts and an ownership mentality; comfortable driving issues to resolution across the stack

Nice to Have (But Not Required)

- Experience with streaming systems or high-throughput data transport (e.g., Kafka, gRPC, NATS)
- Background in networking, low-latency systems, or network-aware scheduling
- Experience with edge/cloud hybrid deployment patterns and the latency constraints that come with them
- Familiarity with on-robot or embedded inference environments
- Experience with large-scale cluster topology and scheduling systems (e.g., SLURM, Ray, Volcano)

Why This Role

- Own the infrastructure layer that connects our foundation models to real robot behavior: a direct line between your work and what the robot does in the world
- Be part of building the infrastructure stack for one of the most technically ambitious robotics companies in the world
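To make the GPU scheduling responsibility above concrete, here is a purely illustrative sketch of the underlying placement problem. This is not Rhoda AI's scheduler; the Node, Workload, and schedule names are hypothetical, and real systems would delegate this to Kubernetes scheduler plugins, Volcano, SLURM, or Ray, all named in the posting. The sketch shows a minimal best-fit heuristic that packs GPU requests tightly so whole nodes stay idle and can be released for cost savings.

```python
"""Toy best-fit GPU placement heuristic.

Illustrative sketch only; all names here are hypothetical and chosen
to show the utilization vs. cost tradeoff the role description mentions.
"""
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    total_gpus: int
    free_gpus: int = field(init=False)

    def __post_init__(self) -> None:
        self.free_gpus = self.total_gpus


@dataclass
class Workload:
    name: str
    gpus: int  # number of GPUs the job requests


def schedule(workloads: list[Workload], nodes: list[Node]) -> dict[str, str]:
    """Best-fit placement: put each workload on the node whose free
    capacity it fills most tightly, packing jobs together so whole
    nodes stay empty. Returns a mapping of workload name -> node name."""
    placements: dict[str, str] = {}
    # Place the largest requests first; big jobs are the hardest to fit.
    for wl in sorted(workloads, key=lambda w: w.gpus, reverse=True):
        candidates = [n for n in nodes if n.free_gpus >= wl.gpus]
        if not candidates:
            raise RuntimeError(f"no node can fit {wl.name} ({wl.gpus} GPUs)")
        # Tightest fit = smallest leftover capacity after placement.
        best = min(candidates, key=lambda n: n.free_gpus - wl.gpus)
        best.free_gpus -= wl.gpus
        placements[wl.name] = best.name
    return placements


if __name__ == "__main__":
    nodes = [Node("node-a", 8), Node("node-b", 8), Node("node-c", 4)]
    jobs = [Workload("llm-serve", 4), Workload("world-model", 4),
            Workload("canary", 2), Workload("batch-eval", 2)]
    for job, node in schedule(jobs, nodes).items():
        print(f"{job} -> {node}")
```

Running the example places all four jobs on node-a and node-c, leaving node-b entirely free. Production schedulers layer many more constraints on top of this (topology, preemption, fairness, latency SLOs), which is exactly the space this role works in.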