Safeguards Analyst, User Well-being
San Francisco, CA | New York City, NY
Full Time | Entry-level / Junior | USD 170K - 200K
Anthropic
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
As a Safeguards Analyst, you will be responsible for building and executing enforcement workflows for our products and services, with a focus on detecting and mitigating potential harmful use. As a member of the user well-being team, your initial focus will be on expanding child safety enforcement workflows; however, this position may later expand to include broader areas of enforcement. Safety is core to our mission and you’ll help shape policy enforcement so that our users can safely interact with and build on top of our products in a harmless, helpful, and honest way.
Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.
Responsibilities:
- Design and architect automated enforcement systems and review workflows that scale effectively while maintaining high accuracy
- Partner with Engineering and Data Science teams to optimize detection models for policy violations and automated enforcement systems
- Review flagged content to drive enforcement and policy improvements
- Enforce usage policies with a focus on detecting and mitigating potential harmful use of AI systems
- Support the Safeguards policy design team by providing detailed feedback on policy gaps based on real enforcement scenarios
- Keep up to date with emerging AI policy enforcement best practices, and use these to inform our decision-making and workflows
You may be a good fit if you have experience:
- Standing up and scaling policy enforcement and review workflows
- Using SQL and/or other data analysis tools to draw insights from large datasets
- Identifying emerging risks and threat actors, and providing feedback to a diverse set of stakeholders, such as Product, Policy, Engineering, and Legal teams
- Working with generative AI products, including writing effective prompts for content review and enforcement
- Understanding the challenges that exist in implementing product policies at scale, including in the content moderation space
- Maintaining strong collaboration with team members while navigating rapidly evolving priorities and workstreams
- Navigating evolving regulatory landscapes and enforcement best practices with regard to age assurance, CSAM/CSEM, NCII, and digital well-being
- Working as a trust & safety professional or subject matter expert in one or more of the following focus areas: child safety, human exploitation and abuse, and/or content classification systems
The expected salary range for this position is:
Annual Salary: $170,000–$200,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.