Machine Learning Engineer

Vannevar Labs
Remote · $150k – $215k · Posted 8 March 2026

Job Description

<div class="content-intro"><p><strong>Vannevar is a defense technology company building AI to deter our adversaries.</strong> In the 21st century, conflict moves at algorithmic speed and foresight equals firepower. Our agentic AI is purpose-built to compete with China—from cross-Strait conflict to gray zone coercion. Trained on the most mission-relevant datasets in defense, our technology models adversary behavior, simulates campaigns, and recommends the best course of action to decision makers. Our AI systems are some of the most trusted in the industry and actively used on the front lines of the Indo-Pacific to keep the peace and save lives.</p> <p>Exceptional technology starts with exceptional people. Vannevar is a small, agile team combining world-class engineers with veteran strategists who bring deep expertise in defense and tradecraft. We’re building a company defined by mission impact, user empathy, and disciplined growth. In just three years, we grew from $3M to $80M in ARR, achieved early profitability, and reached unicorn status—proving that disruption doesn’t require an ego, and staying power doesn’t mean standing still.</p></div><h2>About the Role</h2> <p>Machine learning is core to Vannevar's enrichment capabilities, powering intelligent data extraction, classification, and augmentation at scale. Our ML team builds the services and infrastructure that enable products across Vannevar to leverage state-of-the-art models for mission-critical enrichment workflows. We own the end-to-end ML platform, from training and fine-tuning models to deploying high-performance inference services, and we operate these capabilities in demanding production environments.</p> <p>You will be a technical leader driving the development of scalable ML services for enrichment. 
You'll work across the full ML lifecycle, from experimenting with and training models using frameworks like PyTorch, TensorFlow, and Hugging Face, to deploying optimized inference services using ONNX, vLLM, and other deployment libraries. You'll partner with product teams to understand enrichment requirements, architect robust ML pipelines that handle large-scale data processing, and ensure our services meet strict performance and reliability standards in production.</p> <h2>What you'll do</h2> <ul> <li>Design and build scalable ML services for enrichment workflows, including model training pipelines and high-performance inference APIs</li> <li>Deploy and optimize models using modern inference libraries and frameworks (ONNX, vLLM, TensorRT, etc.) to achieve low-latency, high-throughput performance</li> <li>Collaborate with software engineers and product teams to define data requirements, feature engineering strategies, and model evaluation metrics</li> <li>Build robust monitoring, observability, and evaluation systems to ensure model quality and service reliability in production</li> <li>Stay current with emerging ML techniques, tools, and best practices, particularly in areas like model optimization, efficient inference, and large-scale data processing</li> </ul> <h2><strong>What we look for</strong></h2> <ul> <li>5+ years of experience building and deploying machine learning systems in production environments</li> <li>Strong proficiency with model deployment technologies (Kubernetes, Ray, etc.) and inference libraries (ONNX, vLLM, TensorRT, or similar). 
Proficiency with model training frameworks (PyTorch, TensorFlow, JAX)</li> <li>You've successfully designed and scaled ML services that process large volumes of data and serve predictions with strict latency and throughput requirements</li> <li>Experience with the full ML lifecycle, including data preprocessing, feature engineering, model training, evaluation, deployment, and monitoring</li> <li>Solid software engineering skills, including experience with distributed systems, APIs, and cloud infrastructure</li> <li>You have a passion for building reliable, performant ML systems and understa ... (truncated, view full listing at source)