Job Description
Member of Technical Staff - Post Training, Applied (Text)
ABOUT LIQUID AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us do it.
THE OPPORTUNITY
This is a rare chance to own applied post-training work end-to-end for text workloads, adapting Liquid Foundation Models for some of the world’s largest enterprise customers.
You will act as the technical bridge between customer requirements and model delivery. You will lead engagements from scoping through evaluation, with full ownership over how text models are adapted and shipped. Between engagements, you will build reusable applied workflows and tooling that accelerate future delivery.
If you care about data quality, evaluation design, and making language models actually work in production for real customers, this is the role.
WHAT WE'RE LOOKING FOR
We need someone who:
- Takes ownership: Owns customer post-training projects end-to-end, from requirements through delivery and evaluation.
- Thinks end-to-end: Can reason across data generation, instruction tuning, alignment, and evaluation as a single system.
- Is pragmatic: Optimizes for model quality and customer outcomes over publications or theory.
- Communicates clearly: Can translate between customer needs and internal technical teams, and push back when needed.
THE WORK
- Act as the technical owner for enterprise customer post-training engagements involving text workloads
- Translate customer requirements into concrete post-training specifications and workflows
- Design and execute data generation, filtering, and quality assessment processes for text corpora
- Run supervised fine-tuning, instruction tuning, and preference alignment workflows such as RLHF and DPO
- Design task-specific evaluations for text model performance and interpret results
- Build reusable applied tooling and workflows that accelerate future customer engagements
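For candidates less familiar with the preference alignment workflows named above, here is a minimal sketch of the DPO objective in plain Python. This is purely illustrative (not Liquid AI's implementation); the log-probability values are made up, and in practice you would compute them per sequence from a policy model and a frozen reference model:

```python
import math

def dpo_loss(pol_chosen, pol_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are total log-probabilities of the chosen and rejected
    responses under the policy and the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response over the rejected one, relative to the reference.
    margin = beta * ((pol_chosen - ref_chosen) - (pol_rejected - ref_rejected))
    # -log sigmoid(margin): near zero once the policy clearly prefers
    # the chosen response; large when it prefers the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Dummy log-probabilities for a single preference pair
loss = dpo_loss(pol_chosen=-10.0, pol_rejected=-14.0,
                ref_chosen=-11.0, ref_rejected=-13.0)
```

The loss decreases as the policy's preference margin over the reference grows, which is the mechanism DPO uses to align a model to preference data without training a separate reward model.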
DESIRED EXPERIENCE
Must-have:
- Hands-on experience with data generation and evaluation for LLM post-training
- Experience training or fine-tuning models using SFT, instruction tuning, or preference alignment methods such as RLHF and DPO
- Strong intuition for text data quality and evaluation design
- Experience with text-specific post-training workflows: chat model alignment, instruction tuning, or text data curation at scale
- Proficiency with the open-source ML ecosystem (Hugging Face, PyTorch) and modern model architectures
Nice-to-have:
- Experience delivering applied ML work to external customers with measurable outcomes
- Familiarity with inference optimization frameworks (vLLM, SGLang, TensorRT)
- Experience building reusable ML tooling or evaluation infrastructure
WHAT SUCCESS LOOKS LIKE (YEAR ONE)
- Independently owns and delivers enterprise post-training projects for text workloads
- Is trusted by customers as the technical owner, demonstrating strong judgment and delivery quality
- Has built reusable applied workflows or tooling that accelerate future customer engagements
WHAT WE OFFER
- Real ML work: You will fine-tune models, generate data, and ship solutions, not configure API calls. Your work feeds directly back into our core model development.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year