Research, ML
Exa · San Francisco, California · Posted 21 February 2026
Job Description
Exa is building a search engine from scratch to serve every AI application. We build massive-scale infrastructure to crawl the web, train state-of-the-art embedding models to index it, and develop high-performance vector databases in Rust to search over it. We also own a $5M H200 GPU cluster that regularly lights up tens of thousands of machines.

On the ML team, we train foundational models for search. Our goal is to build systems that can instantly filter the world's knowledge down to exactly what you want, no matter how complex your query. In essence: put the web into an extremely powerful database.

We're looking for an ML Research Engineer to train embedding models for perfect search over the web. The role involves dreaming up novel transformer-based search architectures, creating datasets, creating evals, beating our internal SoTA, and repeating.

Desired Experience
- You have graduate-level ML experience (or are an exceptionally strong undergrad)
- You can code up a transformer from scratch in PyTorch
- You like creating large-scale datasets and diving deeply into the data
- You care about the problem of finding high-quality knowledge and recognize how important this is for the world

Example Projects
- Pre-training: train a hundred-billion-parameter model
- Fine-tuning: build an RLAIF pipeline for search
- Dream up a novel architecture for search in the shower, then code it up and beat our best model's top score
- Build an eval system that answers "how do we know we're advancing our search quality?" (an incredibly difficult question to answer)

This is an in-person opportunity in San Francisco. We're happy to sponsor international candidates (e.g., STEM OPT, OPT, H-1B, O-1, E-3).
Apply Now