Staff Enterprise AI Engineer

Peloton
New York, New York
Posted 24 February 2026

Job Description

<p><strong>ABOUT THE ROLE</strong></p> <p>Peloton is transforming our enterprise tech strategy through AI adoption. We are looking for a <strong>Staff Enterprise AI Engineer</strong> to serve as the "Founding Engineer" of our Enterprise AI Platform. This is not a traditional Data Science role: you will not spend your days tweaking hyperparameters. Instead, you will architect and build the <strong>Operating System</strong> that enables our Product, People, and Operations teams to deploy AI Agents safely and at scale. You will act as a "Player/Coach," laying the technical foundation (infrastructure, security, orchestration) while guiding a team of engineers to execute the vision. You will build the "Golden Path" that enables everyone at Peloton to leverage AI securely as a competitive advantage.</p> <p><strong>YOUR DAILY IMPACT AT PELOTON</strong></p> <ul> <li><strong>Architect the "Intelligence Integration" Layers</strong> <ul> <li>Design and build a scalable Agentic Orchestration Platform (using LangChain, LangGraph, or custom frameworks) that allows internal developers to spin up autonomous agents.</li> <li>Implement the "Integration Layer," ensuring all AI agents connect to internal APIs (Workday, Snowflake, SAP) via secure, standardized protocols such as the Model Context Protocol (MCP).</li> <li>Solve the "State Problem" for AI, architecting memory stores (vector databases such as Pinecone or Weaviate) that persist context across user sessions.</li> </ul> </li> <li><strong>Enforce "Security by Design"</strong> <ul> <li>Partner with Security leadership to implement Identity Propagation: ensure agents execute tasks using the <em>user's</em> specific OAuth scopes, preventing privilege escalation.</li> <li>Build "Data Clean Rooms" and PII-masking pipelines to ensure sensitive member or employee data is never leaked to model providers.</li> <li>Deploy EvalOps pipelines to automatically test models for hallucination and regression before they reach production.</li> </ul> </li> <li><strong>Define the Engineering Standards</strong> <ul> <li>Define the "Guide vs. Control" standards for the organization. Create the templates and libraries that allow analysts to "Vibe Code" (low-code/assisted coding) safely within our guardrails.</li> <li>Perform rigorous code reviews for partner teams and vendors, ensuring high performance, low latency (&lt;200ms), and cost efficiency.</li> </ul> </li> <li><strong>Drive Capital-Efficient Scale</strong> <ul> <li>Optimize inference costs by implementing Semantic Caching and routing logic (e.g., routing simple queries to smaller, cheaper models).</li> <li>Leverage Kubernetes (EKS) to manage ephemeral compute resources for AI workloads.</li> </ul> </li> <li>A Systems Builder: You view AI as a distributed-systems problem. You care about latency, rate limiting, and eventual consistency just as much as you care about prompt engineering.</li> <li>A Pragmatist: You don't build "Science Projects." You build tools that solve specific business frictions (e.g., automating Content PR approvals or speeding up Supply Chain queries).</li> <li>A Force Multiplier: You enjoy mentoring senior engineers and demystifying AI for non-technical stakeholders (from HR to Product).</li> </ul> <p><strong>YOU BRING TO PELOTON</strong></p> <ul> <li>Experience: 10+ years of software engineering experience, with 3+ years focused specifically on MLOps, LLM Orchestration, or large-scale distributed systems.</li> <li>The Stack: Deep fluency in Python (production grade) and Go (preferred for platform services).</li> <li>AI Engineering: Proven experience deploying RAG (Retrieval Augmented Generation) and Agentic Workflows in production.
Experience with frameworks like LangChain, Semantic Kernel, or similar.</li> <li>Platform Engineering: Strong background in Kubernetes (EKS), Docker, and Infrastructure-as-Code (Terraform).</li> <li>Security: Solid understanding of OAuth 2.0 (OBO flow), RBAC, and zero-trust networking principles.</li> <li>Communication: Ability to explain complex technical trade-offs (e.g., "Latency vs. Accuracy") to executive stakeholders.</li> </ ... (truncated, view full listing at source)