Social media platform TikTok removed 450,000 short videos published in Kenya between January and March 2025.
The content was removed for violating various community guidelines set by the ByteDance-owned video platform.
Globally, TikTok removed 211,193,115 videos, of which 187,378,987 were taken down by automation, while 7,525,184 removed videos were later restored.
In the same period, TikTok removed 6,467,926 accounts that had violated community guidelines, were fake, or belonged to users under the age of 13.
As of March 2025, TikTok had also suspended 19,161,569 live sessions, of which 1,271,228 were later restored.
Notably, the platform removed 1,118,013,747 comments found to violate its community guidelines.
The Community Guidelines Enforcement report shows a higher removal rate in 2025, with more videos taken down proactively and within 24 hours.
The videos were removed for violating policies under Integrity and Authenticity, Safety and Civility, Privacy and Security, Mental and Behavioural Health, Regulated Goods and Commercial Activities, and Sensitive and Mature Themes.
Additionally, between January and March 2025, TikTok removed over 4 billion fake likes and prevented over 6 billion more, prevented over 146 million fake accounts, prevented over 8 billion fake follow requests, and prevented over 199 fake followers.
According to TikTok, over 99% of violating content was removed before someone reported it and over 90% was removed before gaining any views.
“The vast majority of violations (94%) were removed within 24 hours. This was also a quarter where automated moderation technologies removed more violative content than ever—over 87% of all video removals. In addition, TikTok’s moderation technologies helped identify violative livestreamed content faster and more consistently,” TikTok says.
The tech giant says it has begun testing large language models (LLMs) to support proactive moderation at scale, starting with a pilot that uses LLMs to help enforce its rules for comments.
“LLMs can comprehend human language and perform highly specific, complex tasks. This can make it possible to moderate content with high degrees of precision, consistency, and speed,” said TikTok.
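For illustration, the sketch below shows one common pattern for using an LLM as a comment-policy classifier. It is a hypothetical example, not TikTok's implementation: the policy labels, the prompt, and the call_llm() helper are all placeholders standing in for whatever categories and model endpoint a platform would actually use.

```python
# Hypothetical sketch of LLM-based comment moderation (not TikTok's system).
# An LLM is prompted to assign a comment to exactly one policy category,
# and anything it cannot classify is routed to a human moderator.

POLICY_LABELS = ["harassment", "hate_speech", "spam", "none"]

PROMPT_TEMPLATE = (
    "You are a content-moderation assistant. Classify the comment below into "
    "exactly one of these categories: {labels}.\n"
    "Reply with the category name only.\n\n"
    "Comment: {comment}"
)

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (hosted API or in-house model).
    # Returning "none" keeps the sketch runnable without any external service.
    return "none"

def moderate_comment(comment: str) -> str:
    prompt = PROMPT_TEMPLATE.format(
        labels=", ".join(POLICY_LABELS), comment=comment
    )
    label = call_llm(prompt).strip().lower()
    # Unexpected model output falls back to human review rather than
    # triggering an automatic action.
    return label if label in POLICY_LABELS else "needs_human_review"

if __name__ == "__main__":
    print(moderate_comment("Great video, loved the editing!"))
```

The fallback to human review in the sketch reflects the general design of such pipelines, where automation narrows the volume of content moderators see rather than replacing their judgment.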
The platform further argues that automating moderation and using AI models supports the well-being of content moderators by reducing the volume of content they have to review.