Strategic Risk Analyst

OpenAI
San Francisco
Posted 28 February 2026

Job Description

ABOUT THE TEAM

The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. We are dedicated to identifying emerging abuse trends, analyzing risks, and working with our internal and external partners to implement effective mitigation strategies to protect against misuse. Our efforts contribute to OpenAI's overarching goal of developing AI that benefits humanity.

We are building a horizontal "radar" for AI abuse and strategic risk: correlating internal signals, external intelligence, and real-world events into clear, actionable priorities for OpenAI's safety and product decision-makers.

ABOUT THE ROLE

As a Strategic Risk Analyst, you will help develop and maintain our central view of strategic risk across OpenAI's products and platforms. You will synthesize internal abuse patterns, upstream and external intelligence, and product and conversational signals into decision-ready risk insights, recurring briefs, and practical prioritization inputs. You will partner closely with investigators, engineers, and policy and trust and safety counterparts, as well as measurement and forecasting teammates, to translate messy signals into structured judgments (including assumptions and confidence), ranked priorities, and actionable recommendations.

This is an opportunity to do high-leverage analysis in a fast-moving environment, where crisp thinking and communication directly shape safety decisions, mitigations, and product readiness.

IN THIS ROLE, YOU WILL

- Monitor and analyze internal risk signals (abuse telemetry, investigations outputs, model and product signals) to identify trends, shifts in tactics, and new abuse patterns.
- Conduct upstream and external scanning (OSINT, ecosystem developments, real-world events) and distill implications for OpenAI's products and threat landscape.
- Identify and dig deep into harms and misuse across products and channels, turning messy signals into clear analytic findings.
- Connect individual incidents into system-level narratives about actors, incentives, product design weaknesses, and cross-product spillover, pressure-testing hypotheses early.
- Produce concise, decision-ready risk briefs and intelligence estimates with explicit assumptions, confidence levels, and what would change the assessment.
- Convert analysis into clear, ranked priorities and actionable recommendations that product, safety, and policy teams can execute on.
- Define and track key risk indicators and outcome metrics to evaluate whether mitigations are working, and drive course corrections when needed.
- Build early-warning and monitoring capabilities with data, engineering, and visualization partners, including dashboards that highlight leading indicators and unusual changes.
- Contribute to product readiness and launch reviews; develop reusable playbooks, FAQs, and briefing materials that help teams respond consistently.
- Drive cross-functional alignment by tailoring readouts to investigations, engineering, policy, trust and safety, and product stakeholders, and by ensuring decisions and follow-ups are crisp.

YOU MIGHT THRIVE IN THIS ROLE IF YOU HAVE

- Significant experience (typically 5+ years) in trust and safety, integrity, security, policy analysis, or intelligence work.
- Demonstrated ability to analyze complex online harms and AI-enabled misuse (e.g., harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues) and to convert analysis into concrete, prioritized recommendations.
- Strong analytical craft: you can identify weak signals, form hypotheses, test them quickly, state assumptions explicitly, and communicate confidence and uncertainty clearly.
- Comfort working across qualitative and quantitative inputs, including (1) casework, incident reports, OSINT, product context, and policy frameworks, and (2) basic metrics and trends in partnership with data science (e.g., harm ...

(truncated; view full listing at source)