Machine Learning Engineer Intern (Trust and Safety - CV/NLP/Multimodal LLM) - 2026 Summer (BS/MS)

San Jose, California, United States

The algorithm team develops state-of-the-art computer vision, NLP, and multimodal models and algorithms to protect our platform and users from content and behaviors that violate community guidelines and related regulations. Through our team's continuous efforts, TikTok is able to provide the best user experience and bring joy to everyone in the world.

On our team, you will have the opportunity to help develop cutting-edge content understanding models that improve the detection of violating content on TikTok, and you will also be responsible for continuously optimizing our distributed model training framework.
We are looking for talented individuals to join us for an internship in 2026. Internships at TikTok aim to offer students industry exposure and hands-on experience. Turn your ambitions into reality as your inspiration brings infinite opportunities at TikTok.

Internships at TikTok aim to provide students with hands-on experience in developing fundamental skills and exploring potential career paths. A vibrant blend of social events and enriching development workshops will be available for you to explore. Here, you will apply your knowledge in real-world scenarios while laying a strong foundation for personal and professional growth. The internship runs for 12 weeks.

Candidates can apply to a maximum of two positions and will be considered for jobs in the order in which they apply. The application limit applies to TikTok and its affiliates' jobs globally. Applications will be reviewed on a rolling basis, so we encourage you to apply as early as possible. Please state your availability clearly in your resume (start date and end date).
Summer Start Dates:
- May 11th, 2026
- May 18th, 2026
- May 26th, 2026
- June 8th, 2026
- June 22nd, 2026

Responsibilities:
1. Leverage multimodal large models to explore few-shot and zero-shot strategies for content safety scenarios, and build moderation models with strong generalization capabilities (see the sketch after this list).
2. Participate in reinforcement learning–based data mining, and help design Chain-of-Thought (CoT) annotation frameworks to improve the model’s understanding of complex risks.
3. Build risk ranking and recall systems to enhance coverage and accuracy in identifying high-risk content.
4. Collaborate with product and policy teams to drive real-world deployment and performance optimization of moderation algorithms.
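To make the zero-shot moderation idea in the first responsibility concrete, here is a minimal sketch, assuming the Hugging Face transformers library and an off-the-shelf NLI-based zero-shot classifier; the model choice and policy labels are illustrative assumptions, not the team's actual stack.

    # Minimal zero-shot text moderation sketch (assumes `transformers` is installed).
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    # Hypothetical policy labels; a production system would use policy-defined categories.
    candidate_labels = ["harassment", "violent content", "spam", "benign"]

    result = classifier(
        "Example user comment to be screened.",
        candidate_labels=candidate_labels,
        multi_label=True,  # one piece of content can violate several policies at once
    )
    print(list(zip(result["labels"], result["scores"])))

In practice, a multimodal model would score text, images, and video frames jointly, but the same pattern of scoring content against policy labels and thresholding the returned scores applies.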

Tags: Computer Vision Data Mining LLMs Machine Learning Model Training NLP Reinforcement Learning

Perks/benefits: Career development Team events

Region: North America
Country: United States
