Research Scientist (3D Diffusion)
World Labs · San Francisco · Posted 5 March 2026
Job Description
About World Labs:
We build foundational world models that can perceive, generate, reason, and interact with the 3D world, unlocking AI's full potential through spatial intelligence: transforming seeing into doing, perceiving into reasoning, and imagining into creating.
We believe spatial intelligence will unlock new forms of storytelling, creativity, design, simulation, and immersive experiences across both virtual and physical worlds.
We bring together a world-class team united by shared curiosity, passion, and deep backgrounds in technology, from AI research to systems engineering to product design, creating a tight feedback loop between our cutting-edge research and the products that empower our users.
Role Overview
We’re looking for a Research Scientist focused on 3D Sparse Diffusion to develop next-generation generative models that operate natively in 3D or over sparse, structured representations. This role is for someone excited about pushing the frontier of diffusion-based generative modeling beyond dense grids and into point clouds, implicit representations, multi-view observations, and hybrid 2D/3D formulations.
This is a research-forward, hands-on role at the intersection of generative modeling, 3D representations, and scalable learning systems. You’ll work closely with other research scientists and engineers to invent, evaluate, and deploy diffusion models that power high-fidelity 3D generation, reconstruction, and editing in real-world product settings.
What You Will Do:
- Research and develop 3D-native and sparse diffusion models for generating and refining geometry, appearance, and scene structure.
- Design diffusion processes over sparse or structured domains (e.g., point clouds, implicit fields, multi-view features, hybrid representations) with an emphasis on efficiency and fidelity.
- Explore novel noise schedules, conditioning strategies, and sampling algorithms tailored to 3D and sparse data.
- Build end-to-end training pipelines for large-scale diffusion models, including data preparation, supervision strategies, and evaluation metrics.
- Collaborate with 3D reconstruction and modeling teams to integrate diffusion-based components into broader systems for generation, reconstruction, and editing.
- Analyze model behavior and failure modes specific to sparse and 3D settings, and propose principled improvements to robustness and controllability.
- Optimize training and inference performance, balancing sample quality, compute efficiency, and scalability.
- Contribute to the team’s research output through publications, technical reports, and internal knowledge sharing.
- Stay current with, and help shape, emerging research directions in generative modeling, diffusion, and 3D learning.
Key Qualifications:
- 5+ years of experience in generative modeling, 3D learning, or related areas within machine learning research.
- Hands-on experience designing or training diffusion models, with demonstrated work on 3D-native, sparse, or structured representations.
- Strong background in modern 3D representations (e.g., point-based, implicit, volumetric, or hybrid) and their interaction with learning-based models.
- Proficiency in Python and deep learning frameworks (e.g., PyTorch), with experience building research-grade training and evaluation code.
- Solid understanding of probabilistic modeling, optimization, and large-scale training dynamics.
- Experience publishing at top-tier venues or contributing to influential research or open-source projects in generative modeling or 3D.
- Ability to operate independently in ambiguous research spaces, from idea formulation through experimental validation.
- Strong scientific communication skills and a bias toward clarity ... (truncated; see the full listing at the source)
Apply Now