AI Infra Engineer

Perplexity
San Francisco; Palo Alto
Posted 24 February 2026

Job Description

We are looking for an AI Infra Engineer to join our growing team. We work with Kubernetes, Slurm, Python, C++, and PyTorch, primarily on AWS. As an AI Infrastructure Engineer, you will partner closely with our Inference and Research teams to build, deploy, and optimize our large-scale AI training and inference clusters.

Responsibilities

- Design, deploy, and maintain scalable Kubernetes clusters for AI model inference and training workloads
- Manage and optimize Slurm-based HPC environments for distributed training of large language models
- Develop robust APIs and orchestration systems for both training pipelines and inference services
- Implement resource scheduling and job management systems across heterogeneous compute environments
- Benchmark system performance, diagnose bottlenecks, and implement improvements across both training and inference infrastructure
- Build monitoring, alerting, and observability solutions tailored to ML workloads running on Kubernetes and Slurm
- Respond swiftly to system outages and collaborate across teams to maintain high uptime for critical training runs and inference services
- Optimize cluster utilization and implement autoscaling strategies for dynamic workload demands

Qualifications

- Strong expertise in Kubernetes administration, including custom resource definitions, operators, and cluster management
- Hands-on experience with Slurm workload management, including job scheduling, resource allocation, and cluster optimization
- Experience deploying and managing distributed training systems at scale
- Deep understanding of container orchestration and distributed systems architecture
- High-level familiarity with LLM architecture and training processes (multi-head attention, multi-/grouped-query attention, distributed training strategies)
- Experience managing GPU clusters and optimizing compute resource utilization

Required Skills

- Expert-level Kubernetes administration and YAML configuration management
- Proficiency with Slurm job scheduling, resource management, and cluster configuration
- Python and C++ programming with a focus on systems and infrastructure automation
- Hands-on experience with ML frameworks such as PyTorch in distributed training contexts
- Strong understanding of networking, storage, and compute resource management for ML workloads
- Experience developing APIs and managing distributed systems for both batch and real-time workloads
- Solid debugging and monitoring skills with expertise in observability tools for containerized environments

Preferred Skills

- Experience with Kubernetes operators and custom controllers for ML workloads
- Advanced Slurm administration, including multi-cluster federation and advanced scheduling policies
- Familiarity with GPU cluster management and CUDA optimization
- Experience with other ML frameworks such as TensorFlow, or with distributed training libraries
- Background in HPC environments, parallel computing, and high-performance networking
- Knowledge of infrastructure as code (Terraform, Ansible) and GitOps practices
- Experience with container registries, image optimization, and multi-stage builds for ML workloads

Required Experience

- Demonstrated experience managing large-scale Kubernetes deployments in production environments
- Proven track record with Slurm cluster administration and HPC workload management
- Previous roles in SRE, DevOps, or Platform Engineering with a focus on ML infrastructure
- Experience supporting both long-running training jobs and high-availability inference services
- Ideally, 3-5 years of relevant experience in ML systems deployment, with a specific focus on cluster orchestration and resource management
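To make the Slurm and distributed-training requirements concrete, the day-to-day work often looks like the following minimal sketch of a multi-node PyTorch training launch via Slurm. All specifics here (node counts, GPU counts, port, and the `train.py` entry point) are hypothetical placeholders, not details from this posting.

```shell
#!/bin/bash
# Hypothetical Slurm batch script: launch a distributed PyTorch
# training job across 4 nodes with 8 GPUs each via torchrun.
#SBATCH --job-name=llm-train
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=8
#SBATCH --time=72:00:00

# Rendezvous endpoint: the first node in this job's allocation.
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=29500

# One torchrun launcher per node; torchrun spawns one worker per GPU.
srun torchrun \
  --nnodes="$SLURM_NNODES" \
  --nproc_per_node=8 \
  --rdzv_backend=c10d \
  --rdzv_endpoint="$MASTER_ADDR:$MASTER_PORT" \
  train.py
```

The role described above would own the cluster side of scripts like this: partitions, GPU scheduling, job priorities, and the monitoring around long-running jobs of this shape.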