Research Engineer, Multimodal Reinforcement Learning

Google DeepMind
Zurich, Switzerland
Posted 12 March 2026

Job Description

Snapshot

Are you a Research Engineer with a passion for Reinforcement Learning and Multimodality? Join Google DeepMind’s Frontier AI Unit! We are seeking a researcher to help us make learning efficient through conversational environments. While text-based reasoning has shown immense promise, we are moving the frontier toward image-grounded, multimodal, and retrieval-augmented conversational setups. You will bridge the gap between conversational learning and the visual domain, applying the latest RL methods to create scalable, semi-verifiable environments that power the next generation of our models (e.g., Gemini).

About us

Google DeepMind: Artificial intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

Frontier AI Unit: The Frontier AI Unit is responsible for building and scaling the next generation of our core models. Within this group, our team focuses on "conversationality" as a mechanism for efficient learning. We believe that learning conversationally transfers between environments. We are moving beyond Chain-of-Thought (CoT) and text-only setups to build multimodal, multi-turn reasoning capabilities, leveraging an ecosystem of autoraters and autousers to scale environment creation.

The role

We have strong evidence that conversational environments lead to better learning in a transferable way. However, we need to go beyond text. As a Research Engineer, you will play a pivotal role in expanding Meta Reinforcement Learning to multimodal setups. You will help us leapfrog current industry benchmarks by extending our focus from verifiable domains to semi-verifiable, multimodal domains (e.g., Lens, image-grounded reasoning). This is an ecosystem play: you will leverage our advantages in autoraters and autousers to scale the creation of these conversational environments. You will be the bridge between the core conversational work and the specifics of grounding in the visual domain, moving our training infrastructure from static data towards dynamic, multi-turn environments.

Key responsibilities

Multimodal RL Research: Design and implement novel RL algorithms that enable multi-turn reasoning and learning in multimodal (text + vision) environments.
Environment Scaling: Contribute to the "ecosystem" of autoraters and autousers, building the infrastructure needed to generate high-quality, semi-verifiable training environments at scale.
Strategic Application: Apply state-of-the-art methods to solve strategic problems, specifically closing the gap between single-turn and multi-turn embeddings (retrieval-augmented reasoning).
Experimentation Analysis: Track, interpret, and analyze complex experiments, providing scientific rigor to our training pipelines.
Collaboration: Act as a connector between teams (Google Research, Core, GDM GenAI), helping to build shared pipelines for conversational infrastructure that serve product needs in Search, Lens, and YouTube.

What we can offer you

Scientific Contribution: The opportunity to publish and contribute to the scientific community, specifically at the high-impact intersection of RL, Multimodality, and Reasoning.
Scale and Resources: Access to world-class compute and the existing infrastructure of autoraters/autousers, allowing you to focus on innovation rather than building from scratch.
Direct Impact: Your work will directly influence the reasoning capabilities of Google’s flagship models (Gemini), moving the needle on how models learn and interact with the world.
Collaborative Culture: Work alongside world-leading experts in RL and Generative AI in a supportive, growth-oriented environment.

About you

We are looking for a Research Engineer who is not just technically proficient ... (truncated, view full listing at source)