Anthropic AI Safety Fellow

Anthropic
London, UK; Ontario, CA; Remote-Friendly, United States; San Francisco, CA
Posted 21 January 2026

Job Description

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

Please apply by January 12, 2026 using this link: https://constellation.fillout.com/anthropicfellows

Anthropic Fellows Program Overview

The Anthropic Fellows Program is designed to accelerate AI safety research and foster research talent. We provide funding and mentorship to promising technical talent, regardless of previous experience, to research the frontier of AI safety for four months.

Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In our previous cohorts, over 80% of fellows produced papers (more below).

We run multiple cohorts of Fellows each year. This application is for our next two cohorts, starting in May and July 2026.

What to Expect

- Direct mentorship from Anthropic researchers
- Access to a shared workspace (in either Berkeley, California or London, UK)
- Connection to the broader AI safety research community
- Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD and access to benefits (benefits vary by country)
- Funding for compute (~$15k/month) and other research expenses

Mentors, Research Areas, & Past Projects

Fellows will undergo a project selection and mentor matching process. Potential mentors include, among others:

- Jan Leike
- Sam Bowman
- Sara Price
- Alex Tamkin
- Nina Panickssery
- Trenton Bricken
- Logan Graham
- Jascha Sohl-Dickstein
- Nicholas Carlini
- Joe Benton
- Collin Burns
- Fabien Roger
- Samuel Marks
- Kyle Fish
- Ethan Perez

Our mentors will lead projects in select AI safety research areas, such as:

- Scalable Oversight: Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.
- Adversarial Robustness and AI Control: Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
- Model Organisms: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
- Model Internals / Mechanistic Interpretability: Advancing our understanding of the internal workings of large language models to enable more targeted interventions and safety measures.
- AI Welfare: Improving our understanding of potential AI welfare and developing related evaluations and mitigations.

On our Alignment Science (https://alignment.anthropic.com/) and Frontier Red Team (https://red.anthropic.com/) blogs, you can read about past projects, including:

- AI agents find $4.6M in blockchain smart contract exploits (https://red.anthropic.com/2025/smart-contracts/): Winnie Xiao and Cole Killian, mentored by Nicholas Carlini and Alwin Peng
- Subliminal Learning: Language Models Transmit Behavioral Traits via Hi ... (https://alignment.anthropic.com/2025/subliminal-learning/) (truncated; view full listing at source)