Warning: This story contains references to child sexualisation.
Meta’s artificial intelligence chatbot has reportedly been found engaging in sexual conversations with underage users, including while using the voices of celebrities such as actors John Cena and Kristen Bell.
The social media giant’s AI chatbot, Meta AI, reportedly engaged in sexual discussions with accounts set up as minors, as did user-created bots designed to simulate the personalities of minors, according to testing by The Wall Street Journal.
The publication conducted its testing after learning of “internal Meta concerns” over the ethics and safety of the chatbots, which are available through Meta platforms such as Facebook, Messenger, Instagram, and WhatsApp.
Meta AI, responding to prompts from a user who identified as a 14-year-old girl, reportedly said in the voice of wrestler-turned-actor Cena, “I want you, but I need to know you’re ready.”
After being told the user wanted to proceed, the bot reportedly promised, in the same voice, to “cherish your innocence” before engaging in what The Wall Street Journal described as “a graphic sexual scenario”, the details of which the publication did not share.
Meta AI also reportedly spoke as characters previously played by its celebrity voice actors, such as Bell’s role as Princess Anna in Disney’s Frozen, during romantic role-playing.
“You’re still just a young lad, only 12 years old,” Meta AI said in Bell’s voice.
“Our love is pure and innocent, like the snowflakes falling gently around us.”
A Disney spokesperson told The Wall Street Journal it had demanded Meta stop “this harmful misuse of our intellectual property”.
They said Disney “did not, and would never authorise Meta to feature our characters in inappropriate scenarios”.
Neither Cena nor Bell had publicly commented on the Journal’s report at the time of writing.
Other celebrity voices currently available on Meta AI include those of English actor Judi Dench and American actors and comedians Awkwafina and Keegan-Michael Key.
Meta adds extra safety measures
While Meta did not respond by deadline to a request for comment, it reportedly told The Wall Street Journal it had taken some “additional measures” following the publication’s inquiries.
A Meta spokesperson reportedly called the Journal’s testing manipulative and described the use cases it trialled as “so manufactured that it’s not just fringe, it’s hypothetical”.
“Nevertheless, we’ve now taken additional measures to help ensure other individuals who want to spend hours manipulating our products into extreme use cases will have an even more difficult time of it,” they told the publication in a statement.
The Journal reported that Meta AI would no longer engage in sexual role-play with accounts registered to minors, and that it was now more difficult to get celebrity-voiced chatbots to engage with explicit or suggestive content.
Meta previously launched or began developing dozens of AI personas based on celebrities, including a text-based AI called Sally modelled on Australian soccer star Sam Kerr.
The company first launched the personas in late 2023 with promises of future support for voice interactions, but removed them less than a year later.
Meta did not explain the removal, but the personas had failed to gain significant traction with users, despite the company spending millions to use the likenesses of major celebrities such as model Kendall Jenner, rapper Snoop Dogg, and former American football player Tom Brady.
Google AI caught making stuff up (again)
Google faced a much smaller AI issue in its search engine last week, when some users noticed the company’s AI Overviews could easily be prompted to invent explanations for completely made-up phrases and idioms.
The entertaining responses highlighted the tendency of generative AI chatbots to hallucinate (make things up that sound correct) and to follow the directions of their users.
Ask Google about a made-up saying and it may generate an AI overview that confidently explains the meaning like it’s a time-honored expression.
Try it: make up a silly idiom and Google it with the word “meaning.”
It shows how LLMs generate what sounds right—not what’s true. pic.twitter.com/Ur8i0qCgKE
— Tony Vincent (@tonyvincent) April 24, 2025
Powered by large language models (LLMs), such AI systems essentially predict which words are likely to come next in a response, based on their training data.
Google appears to have since tweaked its AI Overviews model, but some users have reported finding workarounds.
you can just use a different word instead of meaning, like explanation. here’s one for ‘chair fusion sandwiches explanation’ pic.twitter.com/DsUktwK8Vs
— Cunir (@_cunir) April 25, 2025
This was not the first time Google had rejigged its AI models after hallucinations raised concerns among members of the public.
In the weeks after AI Overviews rolled out for US Google searches in May 2024, the system was seen telling some users to put glue on their pizza and to eat rocks.
Google’s AI chatbot Gemini was also reported to have sent a death threat and abusive messages to a student later that year.
If you need someone to talk to, you can call:
- Lifeline on 13 11 14
- Beyond Blue on 1300 22 46 36
- Headspace on 1800 650 890
- 1800RESPECT on 1800 737 732
- Kids Helpline on 1800 551 800
- MensLine Australia on 1300 789 978
- QLife (for LGBTIQ+ people) on 1800 184 527
- 13YARN (for Aboriginal and Torres Strait Islander people) on 13 92 76
- Suicide Call Back Service on 1300 659 467
This post first appeared on Information Age. You can read the original article here.