
Study Finds TikTok Failing To Remove AI Videos Sexualizing Minors

A new investigation has found that TikTok is hosting sexually suggestive videos featuring AI-generated images of children, some of which have amassed millions of likes, raising fresh concerns about the platform’s ability to enforce its own safety rules.

The report, released by the Spanish nonprofit Maldita.es, identified more than a dozen accounts posting AI-generated depictions of girls in lingerie, school uniforms, or otherwise sexualized poses. Some videos carried comments linking to Telegram chats advertising child pornography.

Researchers flagged 15 accounts and 60 videos to TikTok, but the platform determined that most did not violate its policies, despite rules that explicitly ban AI-generated sexualized depictions of minors.

Several accounts remained active even after appeals, though TikTok later removed a handful of videos without explanation.

The findings renew pressure on tech companies to police AI imagery as governments adopt tougher online safety laws. TikTok says it maintains zero tolerance for youth exploitation but did not address the specific cases.
