Safeguards Account Abuse Lead
San Francisco, CA | New York City, NY | Seattle, WA
Full Time | Senior-level / Expert | USD 200K - 250K
Anthropic
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
As the Safeguards Lead for Account Abuse, you will drive our cross-functional efforts to protect platform integrity, prevent financial abuse, and ensure sustainable growth through effective abuse prevention strategies. This is a Policy Operations Manager role focused on strategic cross-functional leadership, working closely with Product, Engineering, and Data Science.
Important Context: In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility approximately once a quarter.
Responsibilities:
- Manage a team of AI Safeguards technical analysts
- Serve as the subject matter expert for account abuse prevention, providing guidance and expertise across teams
- Design and architect automated enforcement systems that scale effectively while maintaining high accuracy
- Partner with Engineering and Data Science teams to optimize detection models and automated enforcement systems
- Drive compliance with payment requirements and industry standards for fraud prevention
- Lead implementation of mitigations for abuse patterns and develop strategic response plans for future abuse vectors
- Design and optimize appeals workflows and user communication systems
- Define and track key metrics to measure effectiveness and drive data-informed decisions
You may be a good fit if you have:
- 7+ years of experience in Risk Management, Trust & Safety, Safeguards, or related fields
- Excellent people management skills
- Deep technical expertise in building and scaling abuse prevention programs
- Strong understanding of payment systems, fraud patterns, and compliance requirements
- Proven track record of driving complex technical projects across multiple teams
- Advanced skills in risk analysis, data analysis, and metric-driven decision making
- Strong technical background with understanding of machine learning and automation systems
- Expertise in SQL, Python, or other data analysis tools
- Excellence in technical communication and stakeholder collaboration
The expected salary range for this position is:
Annual Salary: $200,000 - $250,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, though, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.