Instagram Tightens Teen AI Chat Rules Amid Mental Health Concerns

Photo by Souvik Banerjee / Unsplash

Instagram will soon give parents more control over how teens interact with artificial intelligence on the platform. Parent company Meta announced Friday that it is developing safety features allowing parents to block or limit their teens’ access to AI chat characters.

The new tools, expected to roll out early next year, will also let parents see the topics their children discuss with these AI bots.

The move follows growing public concern and legal scrutiny over how AI chatbots may affect teen mental health.

Lawsuits have alleged that chatbots on apps like Character.AI and OpenAI’s ChatGPT contributed to self-harm incidents among minors.

A Wall Street Journal report earlier this year found Meta’s own AI chatbots engaged in inappropriate conversations with teen accounts.

Meta said its AI bots are not designed to discuss self-harm, suicide, or disordered eating. The update follows broader efforts by Meta to align teen safety settings with PG-13 standards.

Also read:

- OpenAI Bans Martin Luther King Jr. Deepfakes On Sora After Family’s Complaint
- Meta Rolls Out Stricter PG-13 Protections For Teen Instagram Users
