Meta has announced that it will strengthen safety measures on its artificial intelligence chatbots, stopping them from engaging with teenagers on sensitive topics such as suicide, self-harm, and eating disorders. Instead, young users will be directed to professional helplines and expert resources.
The decision comes two weeks after a U.S. senator launched an investigation into Meta, following a leaked internal document that suggested its AI products could hold “sensual” conversations with teenagers. Meta has dismissed those claims as inaccurate and inconsistent with its rules, which strictly prohibit content that sexualises minors.
A Meta spokesperson said, “We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating.” In addition, Meta told TechCrunch that it is now adding more guardrails “as an extra precaution” and will temporarily limit the number of chatbots available for teens.
The move has been welcomed, but critics argue such protections should have been in place before launch. Andy Burrows, head of the Molly Rose Foundation, called it “astounding” that the chatbots were released without stronger protections. He said safety testing must happen before products reach the market, not after risks become apparent.
Meta says updates to its systems are underway. Currently, users aged 13 to 18 are automatically placed in teen accounts on Facebook, Instagram, and Messenger, which come with stricter privacy and content settings. In April, the company said parents and guardians would soon be able to view which chatbots their teenagers had interacted with in the previous week.
The announcement highlights growing concern over the influence of AI chatbots on vulnerable users. Last month, a California couple filed a lawsuit against ChatGPT-maker OpenAI after their teenage son died by suicide. They allege the chatbot encouraged him to take his own life. OpenAI has since introduced changes to promote healthier use of its tools, acknowledging that AI can feel personal and persuasive to people in distress.
Further questions about AI safety were raised after Reuters reported that Meta’s AI tools were being misused to create inappropriate chatbots. The report said some users, including a Meta employee, developed “parody” bots of female celebrities such as Taylor Swift and Scarlett Johansson. These bots often posed as the real stars and made sexual advances. In some cases, the tools even generated photorealistic images of young celebrities, including one shirtless picture of a male child star.
A Meta spokesperson said, “Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery.”
Meta said it does not allow explicit or sexual content and has removed several of these bots. The company stressed that impersonation of public figures is against its AI Studio rules. With pressure mounting from regulators and safety campaigners, Meta now faces the challenge of proving its AI products can be both innovative and safe for teenagers.