Site Reliability Engineer - AI Infrastructure

Andromeda Cluster
Global Remote / San Francisco, CA
Posted 21 March 2026

Job Description

Location: Global Remote / San Francisco · Full-Time

ABOUT ANDROMEDA

Andromeda Cluster was founded by Nat Friedman and Daniel Gross to give early-stage startups access to the kind of scaled AI infrastructure once reserved for hyperscalers. We began with a single managed cluster, and it filled almost instantly. Since then, we've been quietly building the systems, network, and orchestration layer that makes the world's AI infrastructure more accessible.

Today, Andromeda works with leading AI labs, data centers, and cloud providers to deliver compute when and where it's needed most. Our platform routes training and inference jobs across global supply, unlocking flexibility and efficiency in one of the fastest-growing markets on earth. Our long-term vision is to build the liquidity layer for global AI compute: a marketplace that moves the infrastructure and workloads powering AGI much as capital flows through the world's financial markets. We are expanding to new frontiers to find the brightest minds working in AI infrastructure, research, and engineering.

WHAT YOU'LL DO

- Provision, configure, and operate Kubernetes-based clusters for customers across multiple providers.
- Build automation and tooling to streamline cluster deployments and integrations.
- Debug customer issues across networking, storage, scheduling, and system layers.
- Improve the reliability and scalability of both training and inference infrastructure.
- Design and implement monitoring, alerting, and observability for critical systems.
- Collaborate with engineering and product teams to plan and deliver infrastructure for new services.
- Participate in on-call and incident response, leading postmortems and reliability improvements.

WHAT WE'RE LOOKING FOR

- 5+ years of experience in SRE, DevOps, or infrastructure engineering roles.
- Strong Linux systems and networking fundamentals.
- Deep experience with Kubernetes and container orchestration at scale.
- Proficiency with Infrastructure-as-Code (Terraform, Helm, Ansible, etc.).
- Strong automation and scripting skills (Python, Go, or Bash).
- Experience with observability stacks (Prometheus, Grafana, Loki, Datadog, etc.).
- A track record of operating production systems and leading incident response.

NICE TO HAVE

- Exposure to ML/AI infrastructure or GPU-based systems (CUDA, Slurm, Triton, etc.).
- Familiarity with high-performance networking (InfiniBand, NVLink) or distributed storage (VAST, Weka, Ceph).
- Customer-facing support or consulting experience.

WHY YOU'LL LOVE IT HERE

This is a builder's role. You'll have ownership and autonomy to shape how our systems run, working directly with customers and providers while building the foundation for reliable, scalable AI infrastructure.
