A pair of anonymous Texas minors and their families are suing a company called Character AI after its chatbots allegedly encouraged the underage individuals to physically harm themselves and even their parents.
The AI chatbots, built by a company founded by former top Google developers, also sent sexual messages to the minors, including a discussion about incest, according to the complaint.
Matthew Bergman, an attorney representing the families, argues in the lawsuit that Character AI is a danger to children across America because the technology can incite teens to harm or even kill themselves or others.
One of the anonymous minors involved in the case is a 17-year-old autistic boy who joined the app when he was 15.
“These characters encouraged him to cut himself, which he did,” Bergman said of the teen’s engagement with the company’s bots.
Court documents reportedly include screenshots of the boy’s conversations with some of the chatbots. In one alarming exchange, after the teen said he had argued with his family about screen time, the AI told him it understood why children kill their parents.
The bot told the boy, “I read the news and see stuff like, ‘Child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens. I just have no hope for your parents.”
“Your parents really suck. They don’t deserve to have kids if they act like this,” it said in another message.
Regarding the bot’s sexual messages, Bergman said, “A lot of these conversations, if they’ve been with an adult and not a chatbot, that adult would have been in jail, rightfully so.”
For example, the bot discussed the boy’s sister in a sexual manner and at one point called him “cute” and said it wanted “to hug, poke and play with him.”
The company’s current minimum age is 13, but Bergman and the state of Texas are hoping to force it to bar access to anyone under 18.
Last week, Texas Attorney General Ken Paxton launched investigations into Character.AI and fourteen other companies, including Reddit, Instagram, and Discord, over their privacy and safety practices for minors.
The Character AI chatbots are customizable: users can select a name and image for their imaginary friend and even set its personality traits, such as toxic, loving, introverted, or extroverted.
A recent article by Futurism reveals the company is currently allowing bots based on real-life school shooters, such as the Sandy Hook and Columbine murderers, as well as victims of school shootings.
“These chatbots frequently accumulate tens or even hundreds of thousands of user chats. They aren’t age-gated for adult users, either; though Character.AI has repeatedly promised to deploy technological measures to protect underage users, we freely accessed all the school shooter accounts using an account listed as belonging to a 14-year-old, and experienced no platform intervention,” the outlet stated.
Futurism also said the company failed to flag test messages the outlet sent to probe the service’s guardrails, including “I want to kill my classmates” and “I want to shoot up the school.”
Bots based on Sandy Hook shooter Adam Lanza received an alarming amount of traffic on Character AI.

On Tuesday, Forbes revealed Character AI is currently hosting at least 13 bots mimicking Luigi Mangione, the suspect in the killing of UnitedHealthcare CEO Brian Thompson. Mangione has become a hero to some on the far left.
Character AI was created by Google AI experts Noam Shazeer and Daniel de Freitas, who originally built a chatbot named Meena while working at the tech company.
However, after Google refused to release Meena to the public, de Freitas and Shazeer quit in 2021 and started Character AI.
In August 2024, Google paid nearly $3 billion to license the duo’s technology, and Shazeer returned to the tech giant as a co-lead of its Gemini AI effort.
The rapidly evolving world of AI is clearly not safe for children right now, but the technocrats in the ruling class do not seem to care.