Data Scientist, Responsible Development and Innovation

Google DeepMind
New York City, New York, US · $166k – $244k · Posted 26 March 2026

Job Description

Snapshot

As a data scientist in Responsible Development and Innovation (ReDI) at Google DeepMind, you will work with a diverse team to develop and deliver evaluations and analysis in established and emerging policy areas for Google DeepMind's most groundbreaking models. You will work with teams across Google DeepMind, along with internal and external partners, to ensure that our work is conducted in line with responsibility and safety best practices, helping Google DeepMind progress towards its mission.

About Us

Artificial intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

As a data scientist in ReDI, you'll be part of a team developing and implementing key safety evaluations in both established and emerging policy areas. You will develop and implement new evaluations and experiments, define new metrics and analytical processes to support internal and external safety reporting of both quantitative and qualitative data, and support the team by embodying data and analytics best practices. You'll support the team across the full range of development, from running early analysis to building higher-level frameworks and reports.

Note that this role works with sensitive content or situations and may be exposed to graphic, controversial, and/or upsetting topics or content.

Key responsibilities

- Developing new metrics and analytics approaches in key risk areas comprising both quantitative and qualitative data.
- Assessing the quality and coverage of evaluation datasets and methods.
- Influencing the design and development of future evaluations, and leading efforts to define novel testing and experimentation approaches.
- Converting high-level problems into detailed analytics plans, implementing those plans, and influencing others to support as necessary.
- Working with multidisciplinary specialists to measure and improve the quality of evaluation outputs.
- Contributing to and running evaluations and reporting pipelines.
- Communicating with wider stakeholders across Responsibility, Google DeepMind, Google, and third parties where appropriate.
- Providing an expert perspective on data usage, narrative, and interpretation in diverse projects and contexts.

In order to set you up for success in this role, we are looking for the following skills and experience:

- Strong analytical and statistical skills, with experience in metric design and development.
- Strong command of Python and SQL.
- Ability to work with both quantitative and qualitative data, understanding the strengths and weaknesses of each in specific contexts.
- Ability to present analysis and findings to both technical and non-technical teams, including senior stakeholders.
- A track record of transparency, with a demonstrated ability to identify limitations in datasets and analyses and communicate these effectively.
- Familiarity with AI evaluations and broader experimentation principles.
- Demonstrated ability to work within and lead cross-functional teams, fostering collaboration and influencing outcomes.
- Ability to thrive in a fast-paced environment, with a willingness to pivot to support emerging needs.

In addition, the following would be an advantage:

- Experience working with sensitive data, access control, and procedures for data worker wellbeing.
- Experience working in safety or security contexts (for example, content safety or cybersecurity).
- Experience with safety evaluations and mitigations of advanced AI systems.
- Experience with a range of experimentation and evaluation techniques, such as human study research, AI or product red-teaming, and content ... (truncated, view full listing at source)
