Research Engineer, Evals

Variance
San Francisco
Posted 31 March 2026

Job Description

ROLE

At Variance, we are teaching machines to make the hardest judgment calls at scale. That means building AI agents for the high-stakes gray area of risk investigations, fraud, and identity reviews. We’re a small, talent-dense team in San Francisco working on a problem at the edge of what AI systems can reliably do: making good decisions in messy, adversarial, real-world environments. We focus on high-consequence systems problems where the edge cases matter most.

We’re looking for a Research Engineer to help define how we measure and improve model quality. You’ll build the benchmarks, datasets, tooling, and evaluation loops that tell us whether our systems are actually getting better on the tasks that matter. This role sits at the center of research, product, and engineering. It is about creating rigorous, domain-specific evaluations that reflect real customer workflows, expose meaningful failure modes, and drive the next generation of model and agent improvements.
YOU’RE A FIT IF YOU

- Care deeply about craftsmanship and have strong opinions about model quality, measurement, and experimental rigor
- Want to work on core model and agent behavior, not just surface-level product metrics
- Are excited by the challenge of defining what “good” looks like in messy, high-stakes environments
- Think in tight loops: hypothesis, benchmark design, evaluation, failure analysis, iteration
- Have strong engineering fundamentals and like building robust systems around ambiguous research problems
- Thrive in environments where success criteria are initially underspecified and need to be sharpened through work
- Are willing to do the work in the trenches: reviewing outputs, grading edge cases, curating datasets, and refining tasks until the evaluation actually measures what matters
- Care deeply about building systems that protect people from fraud, scams, and abuse

WHAT YOU’LL DO

- Build proprietary benchmarks and datasets to evaluate models and model systems on fraud, identity, and risk workflows
- Design and run offline and online evals that measure model performance on real customer tasks, not just abstract benchmarks
- Define quality metrics for judgment systems, including precision, calibration, consistency, abstention, and failure handling
- Study where models and agents break, and turn those failures into better evals, better datasets, and better training loops
- Build reusable evaluation tools and quality building blocks that can be used across different product surfaces and workflows
- Partner closely with research, engineering, product, and design to improve system quality through rigorous experimentation
- Help create a strong culture of scientific experimentation, clear measurement, and continuous iteration
- Push the boundary of how AI systems are evaluated in regulated, adversarial, and high-consequence environments

WHAT SUCCESS LOOKS LIKE

- We have a clear, trusted view of how our systems perform across the workflows that matter most
- Our evals predict real-world quality better than generic benchmarks
- We identify meaningful failure modes earlier and improve system behavior faster
- We develop differentiated datasets, benchmarks, and quality loops that compound over time
- Research and engineering teams use your work to make better decisions about what to train, ship, and improve next
- Variance becomes known for rigorous, domain-specific evaluation of judgment systems

PREFERRED BACKGROUND

- Experience training, evaluating, or improving modern ML systems
- Strong programming skills and comfort working in research-heavy codebases
- Experience building benchmarks, datasets, evaluation pipelines, or quality systems
- Familiarity with LLMs, agent systems, retrieval, post-training, or adjacent areas
- Ability to design clean experiments and draw reliable conclusions from noisy results
- Strong engineering judgment and a bias toward building
- Interest in fr ... (truncated, view full listing at source)