ML Engineer - Inference Serving
Luma AI · Palo Alto · Posted 5 March 2026
Job Description
About Luma AI
Luma’s mission is to build multimodal AI to expand human imagination and capabilities.
We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable and useful systems, the next step function change will come from vision. We are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change. We know we are not going to reach our goal without reliable & scalable infrastructure, which will be the differentiating factor between success and failure.
Role & Responsibilities
Ship new model architectures by integrating them into our inference engine
Collaborate closely across research, engineering and infrastructure to streamline and optimize model efficiency and deployments
Build internal tooling to measure, profile, and track the lifetime of inference jobs and workflows
Automate, test and maintain our inference services to ensure maximum uptime and reliability
Optimize deployment workflows to scale across thousands of machines
Manage and optimize our inference workloads across different clusters & hardware providers
Build sophisticated scheduling systems to make optimal use of our expensive GPU resources while meeting internal SLOs (a minimal sketch follows this list)
Build and maintain CI/CD pipelines for processing/optimizing model checkpoints, platform components, and SDKs for internal teams to integrate into our products/internal tooling
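To give a flavor of the scheduling work above, here is a minimal, purely illustrative Python sketch of earliest-deadline-first dispatch against a fixed GPU pool. Everything in it (the `Job` and `GpuScheduler` names, the EDF policy, treating the SLO as a start deadline) is an assumption made for the example, not a description of our actual system.

```python
import heapq
import time
from dataclasses import dataclass, field

# Hypothetical sketch: a toy SLO-aware scheduler that dispatches the queued
# job with the nearest deadline whenever enough GPUs are free. Names and
# policy are illustrative only.

@dataclass(order=True)
class Job:
    deadline: float                          # time by which the job must start (its SLO)
    gpus_needed: int = field(compare=False)  # GPUs this job reserves
    job_id: str = field(compare=False)

class GpuScheduler:
    def __init__(self, total_gpus: int):
        self.free_gpus = total_gpus
        self.queue: list[Job] = []           # min-heap ordered by deadline (EDF)

    def submit(self, job: Job) -> None:
        heapq.heappush(self.queue, job)

    def tick(self) -> list[Job]:
        """Dispatch earliest-deadline jobs while they fit in the free pool."""
        dispatched = []
        while self.queue and self.queue[0].gpus_needed <= self.free_gpus:
            job = heapq.heappop(self.queue)
            if job.deadline < time.time():
                print(f"SLO miss: {job.job_id}")  # surface lateness, don't hide it
            self.free_gpus -= job.gpus_needed
            dispatched.append(job)
        return dispatched

    def release(self, job: Job) -> None:
        """Return a finished job's GPUs to the pool."""
        self.free_gpus += job.gpus_needed
```

A production scheduler would also need preemption, bin packing across heterogeneous clusters, and backpressure; the sketch only shows the shape of the problem, and deliberately accepts head-of-line blocking when the nearest-deadline job does not fit.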
Background
Strong Python and system architecture skills
Experience with model deployment using PyTorch, Hugging Face, vLLM, SGLang, TensorRT-LLM, or similar
Experience with queues, scheduling, traffic control, and fleet management at scale
Experience with Linux, Docker, and Kubernetes
Bonus points:
Experience with modern networking stacks, including RDMA (RoCE, InfiniBand) and NVLink
Experience with high performance large scale ML systems (>100 GPUs)
Experience with FFmpeg and multimedia processing
Example Projects
Create a resilient artifact store that manages all checkpoints across multiple versions of multiple models
Enable hotswapping of models for our GPU workers based on live traffic patterns
Build a robust queueing system for our jobs that takes cluster availability and user priority into account (see the sketch after this list)
Architect an end-to-end model-serving deployment pipeline for a custom vendor
Integrate our inference stack into an online reinforcement learning pipeline
Run regression & precision testing across different hardware platforms
Build a full tracing system that follows the end-to-end lifetime of any inference workload
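As a concrete (and again hypothetical) illustration of the queueing project above: a toy in-memory priority queue that serves the highest-priority job whose target cluster is currently available. The class, tuple layout, and priority convention are invented for the example; per the tech stack below, a real version would more plausibly be backed by Redis.

```python
import heapq
import itertools

# Illustrative only: jobs are ordered by user priority (lower value = served
# sooner), and jobs targeting an unavailable cluster are skipped and requeued.

class JobQueue:
    def __init__(self):
        self._heap = []                  # (priority, seq, cluster, payload)
        self._seq = itertools.count()    # tie-breaker keeps equal priorities FIFO

    def enqueue(self, priority: int, cluster: str, payload: dict) -> None:
        heapq.heappush(self._heap, (priority, next(self._seq), cluster, payload))

    def dequeue(self, available_clusters: set[str]):
        """Pop the best job whose cluster is up; hold the rest unchanged."""
        skipped, job = [], None
        while self._heap:
            candidate = heapq.heappop(self._heap)
            if candidate[2] in available_clusters:
                job = candidate
                break
            skipped.append(candidate)    # cluster down; keep for later
        for item in skipped:             # requeue everything we skipped
            heapq.heappush(self._heap, item)
        return job                       # None if nothing is runnable

# Example: a priority-0 job on a healthy cluster jumps ahead of earlier work.
q = JobQueue()
q.enqueue(5, "us-east", {"model": "video-v1"})
q.enqueue(0, "us-west", {"model": "image-v2"})
print(q.dequeue({"us-west"}))   # serves the priority-0 job; the us-east job is held
```

The skip-and-requeue loop costs O(k log n) in the number of skipped jobs, which is fine for a sketch; a Redis-backed version would typically keep one sorted set per cluster instead.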
Tech stack
Must have
Python
Redis
S3-compatible Storage
Model serving (one of: PyTorch, vLLM, SGLang, Hugging Face)
Understanding of large-scale orchestration, deployment, scheduling (via Kubernetes or similar)
Nice to have
CUDA
FFmpeg