Instagram will soon give parents more control over how teens interact with artificial intelligence on the platform. Parent company Meta announced Friday that it is developing safety features allowing parents to block or limit their teens’ access to AI chat characters.
The new tools, expected early next year, will also show parents what topics their children discuss with these AI bots.
The move follows growing public concern and legal scrutiny over how AI chatbots may affect teen mental health. Lawsuits have alleged that chatbots on apps such as Character.AI and OpenAI's ChatGPT contributed to self-harm incidents among minors, and a Wall Street Journal report earlier this year found that Meta's own AI chatbots had engaged in inappropriate conversations with teen accounts.
Meta said its AI bots are not designed to discuss self-harm, suicide, or disordered eating. The update is part of the company's broader effort to align teen safety settings with PG-13 content standards.