Senior MLOps Engineer

Prolific
Remote, UK
Posted 21 February 2026

Job Description

Prolific is not just another player in the AI space – we are the architects of the human data infrastructure that's reshaping the landscape of AI development. In a world where foundational AI technologies are increasingly commoditized, it's the quality and diversity of human-generated data that truly differentiates products and models.

The Role

The future of AI development relies on one critical component: high-quality human data. Prolific provides the world's largest and most trusted source of this data to the teams pushing the boundaries of AI technology. As a Senior ML/LLMOps Engineer, you will be the backbone of our AI production lifecycle. You will bridge the gap between research and real-world application, ensuring our Data Scientists and AI Researchers have the high-performance infrastructure, automated pipelines, and deployment strategies needed to ship state-of-the-art models at scale. We deploy models and infrastructure for a host of AI tasks, ranging from fraud detection to RAG-based search.

Who We're Looking For

- 5+ years of experience with cloud infrastructure and infrastructure as code.
- Previous experience with the ML and LLM lifecycle: training, hosting, optimisation, and observability.
- Used to working closely with researchers and data scientists, taking experiments from worksheets into production.
- A strong grasp of ML fundamentals and the modern GenAI stack.

What You'll Be Doing

Infrastructure & Platform Engineering

- Infrastructure as Code (IaC): design and maintain scalable cloud environments (GCP/AWS) using Terraform.
- Resource provisioning: manage GPU/TPU resource allocation for training, fine-tuning, and interactive notebooks.
- Custom tooling: build internal services and CLI tools to streamline the developer experience for the AI team.

ML & LLM Orchestration

- Automated pipelines: design CI/CD/CT (continuous training) pipelines using tools such as GitHub Actions, MLflow, and Vertex AI Pipelines; ensure high-quality training data (e.g. by introducing a feature store).
- Deployment methodology: develop reusable patterns for model serving and manage service deployments to Kubernetes.
- Vector infrastructure: manage and optimize vector databases and embedding pipelines for RAG-based systems.

Performance Optimization

- Inference optimization: implement techniques to reduce latency and increase throughput.
- Cold start mitigation: solve scaling bottlenecks for serverless or containerized model deployments.
- Cost management: optimize GPU utilization and cloud spend without compromising performance.

Observability & Reliability

- Traditional MLOps: monitor for model drift, data skew, and resource utilization.
- LLM-specific ops: implement LLM tracing to monitor prompts, agent actions, and general service health.

(truncated, view full listing at source)