Model Policy Lead - Video Policy, Trust & Safety

Singapore, Singapore



TikTok’s Trust & Safety team is seeking a Model Policy Lead for Short Video and Photo to govern how enforcement policies are implemented, maintained, and optimized across both large-scale ML classifiers and LLM-based moderation systems. You will lead a team at the center of AI-driven Trust and Safety enforcement - building Chain-of-Thought policy logic, RCA and quality pipelines, and labeling strategies that ensure our automated systems are both accurate at scale and aligned with platform standards.

This role combines technical judgment, operational rigor, and policy intuition. You'll work closely with Engineering, Product, and Ops teams to manage how policy is embedded in model behavior, measured through our platform quality metrics, and improved through model iterations and targeted interventions. You’ll also ensure that policy changes - often made to improve human reviewer precision - are consistently propagated across all machine enforcement pathways, maintaining unified and transparent enforcement standards.

You will lead policy governance across four model enforcement streams central to TikTok’s AI moderation systems:

1. At-Scale Moderation Models (ML Classifiers) - Own policy alignment and quality monitoring for high-throughput classifiers processing hundreds of millions of videos daily. These models rely on static training data and operate without prompt logic, requiring careful threshold setting, false-positive/false-negative analysis, and drift tracking.
2. At-Scale AI Moderation (LLM/CoT-Based) - Oversee CoT-based AI moderation systems handling millions of cases per day. Your team produces CoT reasoning logic, structured labeling guidelines, and dynamic prompts that interpret complex content and deliver policy assessments, and manages accuracy monitoring, labeling frameworks, and precision fine-tuning.
3. Model Change Management - Ensure consistent enforcement across human and machine systems as policies evolve. You will lead the synchronization of changes across ML classifiers, AI models, labeling logic, and escalation flows to maintain unified, up-to-date enforcement standards.
4. Next-Bound AI Projects (SOTA Models) - Drive development of high-accuracy, LLM-based models used to benchmark and audit at-scale enforcement. These projects are highly experimental and sit at the forefront of applying LLMs to real-world policy enforcement and quality validation.
