Model Policy Lead - Video Policy, Trust & Safety
Singapore, Singapore
TikTok’s Trust & Safety team is seeking a Model Policy Lead for Short Video and Photo to govern how enforcement policies are implemented, maintained, and optimized across both large-scale ML classifiers and LLM-based moderation systems. You will lead a team at the center of AI-driven Trust & Safety enforcement - building Chain-of-Thought (CoT) policy logic, root cause analysis (RCA) and quality pipelines, and labeling strategies that ensure our automated systems are both accurate at scale and aligned with platform standards.
This role combines technical judgment, operational rigor, and policy intuition. You'll work closely with Engineering, Product, and Ops teams to manage how policy is embedded in model behavior, measured through our platform quality metrics, and improved through model iterations and targeted interventions. You'll also ensure that policy changes - often made to improve human reviewer precision - are consistently propagated across all machine enforcement pathways, maintaining unified and transparent enforcement standards.
You will lead policy governance across four model enforcement streams central to TikTok’s AI moderation systems:
1. At-Scale Moderation Models (ML Classifiers) - Own policy alignment and quality monitoring for high-throughput classifiers processing hundreds of millions of videos daily. These models rely on static training data and operate without prompt logic - requiring careful threshold setting, false positive/negative analysis, and drift tracking.
2. At-Scale AI Moderation (LLM/CoT-Based) - Oversee CoT-based AI moderation systems handling millions of cases per day. Your team produces CoT reasoning, structured labeling guidelines, and dynamic prompts that interpret complex content and deliver a policy assessment. Your team will also manage accuracy monitoring, labeling frameworks, and precision fine-tuning.
3. Model Change Management - Ensure consistent enforcement across human and machine systems as policies evolve. You will lead the synchronization of changes across ML classifiers, AI models, labeling logic, and escalation flows to maintain unified, up-to-date enforcement standards.
4. Next-Bound AI Projects (SOTA Models) - Drive development of high-accuracy, LLM-based models used to benchmark and audit at-scale enforcement. These projects are highly experimental and sit at the forefront of applying LLMs to real-world policy enforcement and quality validation.
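To make the threshold-setting and false positive/negative analysis in stream 1 concrete, here is a minimal sketch of the kind of trade-off involved. All data, function names, and the 1% false-positive cap are invented for illustration; they are not TikTok's actual tooling or policy targets.

```python
# Hypothetical sketch of threshold setting for an at-scale ML classifier.
# scores_labels: list of (classifier_score, is_violation) pairs from an
# audited evaluation set. All values here are illustrative.

def false_rates(scores_labels, threshold):
    """Return (false_positive_rate, false_negative_rate) at a threshold."""
    fp = fn = pos = neg = 0
    for score, is_violation in scores_labels:
        if is_violation:
            pos += 1
            if score < threshold:   # violating content the model would miss
                fn += 1
        else:
            neg += 1
            if score >= threshold:  # benign content the model would action
                fp += 1
    return fp / max(neg, 1), fn / max(pos, 1)

def pick_threshold(scores_labels, max_fpr=0.01):
    """Pick the lowest threshold whose false-positive rate stays under a
    policy cap (an invented 1% default), minimizing missed violations."""
    for t in sorted({score for score, _ in scores_labels}):
        fpr, _ = false_rates(scores_labels, t)
        if fpr <= max_fpr:
            return t
    return 1.0  # no candidate meets the cap; do not auto-action

sample = [(0.9, True), (0.8, True), (0.2, False), (0.3, False), (0.85, False)]
print(pick_threshold(sample, max_fpr=0.0))
```

Tightening the cap pushes the threshold up and trades missed violations (false negatives) for fewer wrongful removals (false positives) - the same tension the role manages at the scale of hundreds of millions of videos, where drift in score distributions means these curves must be re-measured continuously.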