Member of Technical Staff - Distributed Training Engineer
Liquid AI · Research & Engineering · Posted 24 February 2026
Job Description
About Liquid AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.

The Opportunity
Our Training Infrastructure team is building the distributed systems that power our next-generation Liquid Foundation Models. As we scale, we need to design, implement, and optimize the infrastructure that enables large-scale training.

This is a high-ownership training systems role focused on runtime, performance, and reliability (not a general platform/SRE role). You'll work on a small team with fast feedback loops, building critical systems from the ground up rather than inheriting mature infrastructure.

While San Francisco and Boston are preferred, we are open to other locations.

What We're Looking For
We need someone who:
- Loves distributed systems complexity: Our team builds systems that keep long training runs stable, debugs training failures across GPU clusters, and improves performance.
- Wants to build: We need builders who find satisfaction in robust, fast, reliable infrastructure.
- Thrives in ambiguity: Our systems support model architectures that are still evolving. We make decisions with incomplete information and iterate quickly.
- Aligns with team priorities and delivers: Our best engineers align with team priorities while pushing back with data when they see problems.

The Work
- Design and build core systems that make large training runs fast and reliable
- Build scalable distributed training infrastructure for GPU clusters
- Implement and tune parallelism/sharding strategies for evolving architectures
- Optimize distributed efficiency (topology-aware collectives, communication/compute overlap, straggler mitigation)
- Build data loading systems that eliminate I/O bottlenecks for multimodal datasets
- Develop checkpointing mechanisms that balance memory constraints with recovery needs
- Create monitoring, profiling, and debugging tools for training stability and performance

Desired Experience
Must-have:
- Hands-on experience building distributed training infrastructure (PyTorch Distributed DDP/FSDP, DeepSpeed ZeRO, Megatron-LM TP/PP; an illustrative FSDP sketch follows at the end of this posting)
- Experience diagnosing performance bottlenecks and failure modes (profiling, NCCL/collectives issues, hangs, OOMs, stragglers)
- Understanding of hardware accelerators and networking topologies
- Experience optimizing data pipelines for ML workloads

Nice-to-have:
- MoE (Mixture of Experts) training experience
- Large-scale distributed training (100+ GPUs)
- Open-source contributions to training infrastructure projects

What Success Looks Like (Year One)
- Training throughput has increased
- Overall training efficiency and cost have improved
- Training stability has improved (fewer failures, faster recovery)
- Data loading bottlenecks are eliminated for multimodal workloads

What We Offer
- Greenfield challenges: Build systems from scratch for novel architectures. High ownership from day one.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year
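For candidates unfamiliar with the tooling named under Must-have, here is a purely illustrative, minimal sketch of a sharded training step using PyTorch FSDP. It is not Liquid AI's stack; the model, hyperparameters, and file name are assumptions made only to keep the example self-contained.

```python
# Minimal, illustrative FSDP setup (PyTorch 2.x); all model/config choices
# here are hypothetical and stand in for a real training stack.
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy


def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model standing in for a real transformer block stack.
    model = nn.Sequential(
        nn.Linear(4096, 4096), nn.GELU(), nn.Linear(4096, 4096)
    ).cuda()

    # Fully shard parameters, gradients, and optimizer state across ranks
    # (ZeRO-3-style), with bf16 compute/reduction to cut memory and traffic.
    model = FSDP(
        model,
        sharding_strategy=ShardingStrategy.FULL_SHARD,
        mixed_precision=MixedPrecision(
            param_dtype=torch.bfloat16,
            reduce_dtype=torch.bfloat16,
        ),
        device_id=local_rank,
    )

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # One synthetic step; a real pipeline would stream sharded training data.
    x = torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16)
    loss = model(x).float().pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with torchrun, e.g. `torchrun --nproc_per_node=8 train_sketch.py` (the file name is hypothetical); each GPU holds only its shard of parameters and optimizer state.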