Member of Technical Staff - ML Research Engineer, Data

Liquid AI
Research & Engineering
Posted 24 February 2026

Job Description

About Liquid AI

Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.

The Opportunity

Our Data team powers Liquid Foundation Models across pre-training, vision, audio, and emerging modalities. Public data sources are plateauing, and model performance increasingly depends on purpose-built datasets. We need ML-minded engineers who can collect, filter, and synthesize high-quality data at scale.

We treat data as a research problem, not an infrastructure problem. Our engineers run experiments, design ablations, and measure how data decisions move model quality. We will match you to the team where you can grow the fastest and have the most impact: pre-training, post-training RL, vision-language, audio, or multimodal.

While San Francisco and Boston are preferred, we are open to other locations.

What We're Looking For

We need someone who:

- Thinks like a researcher, ships like an engineer: We need people who form hypotheses, run experiments, and measure results. Our engineers write research-quality code, and our researchers ship production systems.
- Learns fast and adapts: We work across modalities that evolve weekly. We need people who pick up new domains quickly and thrive with ambiguity.
- Obsesses over data quality: We believe data quality is non-negotiable. Filtering, deduplication, augmentation, and evaluation are first-class concerns for our team, not afterthoughts.
- Solves problems independently: Our data engineers sit within training groups (pre-training and multimodal). We collaborate closely, but we expect ownership and self-direction.

The Work

- Build and maintain data processing, filtering, and selection pipelines at scale
- Create pipelines for pre-training, mid-training, SFT, and preference optimization datasets
- Design synthetic data generation systems using LLMs, structured prompting, and domain-specific generators
- Design and run evaluations and ablations to measure a dataset's impact on model performance
- Monitor public datasets across text, vision, and audio domains
- Collaborate with pre-training, vision, and audio teams on modality-specific data needs

Desired Experience

Must-have:

- Strong Python skills with the ability to quickly comprehend problems and translate them into clean, working code
- Solid ML fundamentals: experience training, evaluating, and iterating on models (PyTorch preferred)
- Track record of learning new technical domains quickly
- 3+ years of relevant experience with an M.S., 1+ year with a Ph.D., or 5+ years with a B.S.

Nice-to-have:

- Experience with synthetic data generation, data curation, or ML evaluation (designing evals, benchmarking, measuring data and model quality)
- Experience with LLMs, VLMs, computer vision, or audio data pipelines
- Open-source contributions or publications at NeurIPS, ICML, ICLR, or CVPR

What Success Looks Like (Year One)

- You own a critical data pipeline end-to-end for one of our modalities
- You have built or improved data systems that measurably moved model performance
- You have identified and integrated at least one external dataset that moved the needle

What We Offer

- Impact at scale: Your pipelines directly determine model quality across all of Liquid's foundation models.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year