Software Engineer, Account Abuse

Anthropic
San Francisco, CA
Posted 24 February 2026

Job Description

<div class="content-intro"><h2><strong>About Anthropic</strong></h2> <p>Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p></div><h2 class="heading"><strong>About the role</strong></h2> <p>The Account Abuse team is tasked with ensuring Anthropic’s computing capacity is allocated fairly, minimizing the resources available to bad actors and preventing them from coming back. As a software engineer on this team, you will build systems that gather and analyze signals at scale, balancing tradeoffs and coordinating closely with stakeholder teams throughout the company. The ideal candidate can see things from opponents’ perspectives, understand their means and motives, and anticipate their responses to countermeasures.</p> <h2 class="heading"><strong>Responsibilities:</strong></h2> <ul> <li>Thinking and responding quickly in a rapidly changing greenfield environment</li> <li>Jumping into other teams’ code to identify key points to gather signals or introduce interventions with minimal impact on their systems’ stability, complexity, or overall architecture</li> <li>Integrating with third-party data-enrichment vendors</li> <li>Creating monitoring dashboards, alerts, and internal admin UX</li> <li>Working closely with our data scientists to maintain situational awareness of current usage patterns and trends, and with our Policy Enforcement team to maximize the impact of their human-review availability</li> <li>Building robust and reliable multi-layered defenses</li> <li>Leading root-cause analyses and deep-dive investigations into account activity to identify abuse patterns, uncover emerging attack vectors, and inform both immediate enforcement actions and longer-term systemic defenses</li> </ul> <h2 
class="heading"><strong>You may be a good fit if you have:</strong></h2> <ul> <li>A Bachelor’s degree in Computer Science or Software Engineering, or comparable experience</li> <li>3-10+ years of experience in a software engineering position, preferably with a focus on integrity, spam, fraud, or abuse detection</li> <li>Proficiency in Python, SQL, and data analysis tools</li> <li>Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders</li> </ul> <h2 class="heading"><strong>Strong candidates may also:</strong></h2> <ul> <li>Have experience building trust and safety mechanisms for and using AI/ML systems, such as fraud-detection models, security monitoring tools, or the infrastructure to support these systems at scale</li> <li>Have worked closely with operational teams to build custom internal tooling</li> </ul><div class="content-pay-transparency"><div class="pay-input"><div class="description"><p>The annual compensation range for this role is listed below.</p> <p>For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p></div><div class="title">Annual Salary:</div><div class="pay-range"><span>$320,000</span><span class="divider"></span><span>$405,000 USD</span></div></div></div><div class="content-conclusion"><h2><strong>Logistics</strong></h2> <p><strong>Education requirements: </strong>We require at least a Bachelor's degree in a related field or equivalent experience.<strong><br><br>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p> <p><strong data-stringify-type="bold">Visa sponsorship:</strong> We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. 
But if we make you an offer, we wil ... (truncated, view full listing at source)