Meta is halting teens’ access to artificial intelligence characters, at least temporarily, the company said in a blog post Friday.
Meta Platforms Inc., which owns Instagram and WhatsApp, said that starting in the “coming weeks,” teens will no longer be able to access AI characters “until the updated experience is ready.”
This applies to anyone who gave Meta a birthday that makes them a minor, as well as “people who claim to be adults but who we suspect are teens based on our age prediction technology.”
The move comes the week before Meta — along with TikTok and Google’s YouTube — is scheduled to stand trial in Los Angeles over its apps’ harms to children.
Teens will still be able to access Meta’s AI assistant, just not the characters.
Other companies have also banned teens from AI chatbots amid growing concerns about the effects of artificial intelligence conversations on children. Character.AI announced its ban last fall. That company is facing several lawsuits over child safety, including one brought by the mother of a teenager who says the company’s chatbots pushed her son to kill himself.