Model Policy Lead, Video Policy - Trust and Safety

San Jose, California, United States

TikTok’s Trust & Safety team is seeking a Model Policy Lead for Short Video and Photo to govern how enforcement policies are implemented, maintained, and optimized across both large-scale ML classifiers and LLM-based moderation systems. You will lead a team at the center of AI-driven Trust and Safety enforcement - building Chain-of-Thought policy logic, root cause analysis (RCA) and quality pipelines, and labeling strategies that ensure our automated systems are both accurate at scale and aligned with platform standards.

This role combines technical judgment, operational rigor, and policy intuition. You'll work closely with Engineering, Product, and Ops teams to manage how policy is embedded in model behavior, measured through our platform quality metrics, and improved through model iterations and targeted interventions. You’ll also ensure that policy changes - often made to improve human reviewer precision - are consistently propagated across all machine enforcement pathways, maintaining unified and transparent enforcement standards.

You will lead policy governance across four model enforcement streams central to TikTok’s AI moderation systems:

1. At-Scale Moderation Models (ML Classifiers) - Own policy alignment and quality monitoring for high-throughput classifiers processing hundreds of millions of videos daily. These models rely on static training data and operate without prompt logic, requiring careful threshold setting, false positive/negative analysis, and drift tracking (a minimal sketch of this analysis follows the list).
2. At-Scale AI Moderation (LLM/CoT-Based) - Oversee CoT-based AI moderation systems handling millions of cases per day. Your team produces CoT logic, structured labeling guidelines, and dynamic prompts that interpret complex content and deliver a policy assessment, and will manage accuracy monitoring, labeling frameworks, and precision fine-tuning.
3. Model Change Management - Ensure consistent enforcement across human and machine systems as policies evolve. You will lead the synchronization of changes across ML classifiers, AI models, labeling logic, and escalation flows to maintain unified, up-to-date enforcement standards.
4. Next-Bound AI Projects (SOTA Models) - Drive development of high-accuracy, LLM-based models used to benchmark and audit at-scale enforcement. These projects are highly experimental and sit at the forefront of applying LLMs to real-world policy enforcement and quality validation.
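To make stream 1 concrete, here is a minimal sketch of threshold setting and false positive/negative analysis on a labeled validation set. The synthetic data, precision target, and function names are illustrative assumptions for this posting, not TikTok tooling:

```python
# Minimal sketch: pick an enforcement threshold for a moderation
# classifier and report the error counts that feed RCA and drift
# tracking. All data and numbers here are synthetic illustrations.
import numpy as np

def pick_threshold(scores, labels, target_precision=0.95):
    """Return the lowest score threshold whose precision on the
    validation set meets the target (i.e. maximize recall at an
    acceptable precision). Returns None if no threshold qualifies."""
    for t in np.sort(np.unique(scores)):        # candidate thresholds, ascending
        flagged = scores >= t
        precision = labels[flagged].mean()      # share of flagged items truly violating
        if precision >= target_precision:
            return float(t)
    return None

def error_counts(scores, labels, t):
    """False positives (benign content flagged) and false negatives
    (violations missed) at threshold t."""
    flagged = scores >= t
    fp = int((flagged & (labels == 0)).sum())
    fn = int((~flagged & (labels == 1)).sum())
    return fp, fn

# Synthetic validation set: violating items (label == 1) score higher.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 10_000)
scores = np.clip(0.6 * labels + rng.normal(0.2, 0.25, 10_000), 0.0, 1.0)

t = pick_threshold(scores, labels)
print(f"threshold={t:.3f}  (fp, fn)={error_counts(scores, labels, t)}")
```

Choosing the lowest threshold that meets a precision target is one common way to trade recall for precision; in production this would be done per policy area and re-checked as score distributions drift.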

Together, these streams define TikTok’s model-led enforcement infrastructure. Your role is to close the quality gap - ensuring that scale does not come at the cost of precision, and that every AI decision reflects a consistent, up-to-date, and defensible application of policy.

This is a high-impact leadership role that requires strong policy intuition, data fluency, and a deep curiosity about how AI technologies shape the future of Trust and Safety. You’ll work closely with stakeholders across Product, Engineering, Responsible AI, Ops, and Policy.

Responsibilities:
- Lead a team of Policy Analysts responsible for model governance across ML classifiers and LLM-based AI moderation systems.
- Translate human moderation policies into model-readable logic - including Chain-of-Thought decision trees, labeling frameworks, and prompt design standards (see the sketch after this list).
- Own model performance tracking through key enforcement metrics, and drive RCA cycles to identify and close quality gaps.
- Oversee policy alignment for large-scale classifiers and LLM moderation, ensuring enforcement consistency across hundreds of millions of daily content reviews.
- Build and maintain labeling systems for CoT-based AI models, including quality testing, iteration workflows, and resource planning.
- Lead cross-system change management, ensuring that policy iterations are reflected consistently across human reviewers, classifiers, and AI models.
- Guide the development of next-bound SOTA models, defining policy goals, labeling requirements, and use-case applications.
- Partner with Engineering, Product, Ops, and Policy to align on enforcement strategy, rollout coordination, and long-term model enforcement and detection priorities.
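As an illustration of the "model-readable logic" responsibility above, here is a hedged sketch of a Chain-of-Thought rubric rendered as a prompt template with a structured, machine-parseable verdict. The policy steps, categories, and output schema are invented for this example and are not TikTok's actual rubric:

```python
# Hypothetical example of translating a written policy into ordered
# Chain-of-Thought steps for an LLM moderator, plus a parser for the
# structured verdict. Steps and labels are invented for illustration.
from dataclasses import dataclass

COT_POLICY_PROMPT = """\
You are a content policy rater. Reason step by step:

1. Describe the video's depicted activity in one sentence.
2. Does the activity match a restricted category
   (e.g. dangerous acts, graphic violence)? Answer yes/no and name it.
3. If yes, check each exception in order: news value, education,
   fiction/clearly staged. Note the first exception that applies, if any.
4. Verdict: VIOLATING if step 2 is yes and no exception applies,
   otherwise NOT_VIOLATING.

Return your answer as: verdict=<VIOLATING|NOT_VIOLATING>; \
step=<which step decided it>; rationale=<one sentence>

Content description: {description}
"""

@dataclass
class Verdict:
    verdict: str
    deciding_step: str
    rationale: str

def parse_verdict(raw: str) -> Verdict:
    """Parse the model's structured reply; in practice, malformed
    output would be routed to human review, not auto-enforced."""
    fields = dict(part.split("=", 1) for part in raw.strip().split("; "))
    return Verdict(fields["verdict"], fields["step"], fields["rationale"])

print(parse_verdict(
    "verdict=NOT_VIOLATING; step=3; rationale=Clearly staged fiction."))
```

Encoding the rubric as explicit, ordered steps is what makes the model's reasoning auditable: an RCA cycle can attribute an error to the step that decided the verdict rather than to an opaque judgment.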