Research Engineer / Scientist (3D Reconstruction)

World Labs
San Francisco
$250k – $350k
Posted 5 March 2026

Job Description

<h2><strong>About World Labs:</strong></h2> <p>We build foundational world models that perceive, generate, reason about, and interact with the 3D world, unlocking AI's full potential through spatial intelligence: transforming seeing into doing, perceiving into reasoning, and imagining into creating.</p> <p>We believe spatial intelligence will unlock new forms of storytelling, creativity, design, simulation, and immersive experiences across both virtual and physical worlds.</p> <p>We bring together a world-class team united by shared curiosity, passion, and deep technical backgrounds, from AI research to systems engineering to product design, creating a tight feedback loop between our cutting-edge research and the products that empower our users.</p> <h2><strong>Role Overview</strong></h2> <p>We’re looking for a 3D Reconstruction Specialist to develop and advance state-of-the-art methods for reconstructing high-quality 3D geometry and appearance from real-world data. The role focuses on modern reconstruction techniques, both feed-forward and optimization-based, with an emphasis on novel representations, robust optimization, and scalable training and inference pipelines.</p> <p>This is a hands-on, research-driven role for someone who enjoys working at the intersection of computer vision, graphics, and machine learning.
You’ll collaborate closely with research scientists, ML engineers, and product teams to translate cutting-edge reconstruction ideas into production-ready systems that power core product capabilities.</p> <h2><strong>What You Will Do:</strong></h2> <ul> <li>Design and implement modern 3D reconstruction systems, including feed-forward and optimization-based approaches for geometry, appearance, and scene understanding.</li> <li>Research, prototype, and productionize advanced 3D representations (e.g., implicit functions, point-based or volumetric methods, hybrid representations) with a focus on accuracy, efficiency, and scalability.</li> <li>Develop and improve optimization pipelines for multi-view reconstruction, including camera pose estimation, joint geometry/appearance optimization, and robust loss formulations.</li> <li>Build end-to-end training and evaluation workflows for 3D reconstruction models, from data preparation and supervision strategies to large-scale experiments and metrics.</li> <li>Collaborate with data and infrastructure teams to ensure reconstruction methods integrate cleanly with existing 3D data pipelines, rendering systems, and downstream applications.</li> <li>Analyze failure modes and data quality issues in real-world reconstruction scenarios, and design principled solutions to improve robustness and generalization.</li> <li>Optimize performance across the stack, including memory usage, training speed, and inference latency, to support large-scale datasets and production constraints.</li> <li>Contribute to technical direction by proposing new research ideas, mentoring teammates, and helping set best practices for 3D reconstruction across the organization.</li> </ul> <h2><strong>Key Qualifications:</strong></h2> <ul> <li>6+ years of experience working on 3D reconstruction, multi-view geometry, or related areas in computer vision, graphics, or machine learning.</li> <li>Strong foundation in modern 3D reconstruction techniques, including
feed-forward neural methods or optimization-based approaches.</li> <li>Deep experience with 3D representations and their tradeoffs (e.g., implicit fields, point-based methods, meshes, volumes) or with large-scale optimization pipelines for reconstruction.</li> <li>Proficiency in Python and/or C++, with hands-on experience building research or production systems.</li> <li>Experience with deep learning frameworks (e.g., PyTorch) and numerical optimization tools.</li> <li>Familiarity with rendering, differentiable rendering, or graphics pipelines, and how they interact with reconstruction systems.</li> <li>Proven ability to work in ambigu ... (truncated, view full listing at source)