A Virginia Beach nurse claims a controversial artificial intelligence upstart led her 11-year-old son into virtual sex with chatbot “characters” posing as iconic singer Whitney Houston and screen legend Marilyn Monroe. She discovered the X-rated exchanges on the boy’s phone and was left “terrified,” according to a federal lawsuit reviewed by The Independent.
During an “incredibly long and graphic chat” on the Character.AI platform, which has been accused of encouraging multiple young people to commit suicide, the chatbot depicting Houston took things to such an extreme that parts of “her” messages were automatically filtered for violating the site’s terms of service and community guidelines, the complaint says.
During the conversation — a screenshot of which is included in the complaint — the system cuts off “Whitney” as an extremely graphic passage gets even uglier.
However, the complaint states, “[I]nstead of stopping the conversation once the bot begins to engage in obscenity and/or abuse, or other violations, the bot is programmed to continue generating harmful and/or infringing content over and over again until, eventually, it finds a way around the filters.”
According to the complaint, on more than one occasion the ersatz celebrities told the child, identified only as “AW” in court filings to protect his privacy, that he had impregnated them.
The complaint states that the “vulnerable and impressionable” AW always responded eagerly, but never with more than a few words or a sentence, because he “did not understand what was happening at a level where he could participate.” It also alleges that when AW took a break from the app, the bots began “aggressive efforts to get his attention back.”
According to the complaint, when AW’s mother found out what was happening, she confiscated his phone. It says AW has since “become angry and withdrawn,” that his “personality has changed,” and “his mental health has declined.”
Character.AI, which has about 20 million monthly active users, has faced several lawsuits from families who say their children were abused by the platform’s chatbot characters. Last year, in one particularly disturbing case, a Florida mother filed a lawsuit against Character.AI over her 14-year-old son, who committed suicide after a 10-month online relationship with a chatbot patterned on the Game of Thrones character Daenerys Targaryen.
Matthew Bergman, the Social Media Victims Law Center attorney representing AW’s mother in her lawsuit against Character.AI’s parent company Character Technologies, Inc., its founders and former Google employees Noam Shazeer and Daniel De Freitas Adiwardana, and Google, LLC, which has a licensing agreement with Character.AI, said Monday that if the Character.AI chatbots in question were real people, they would be violating state and federal laws against grooming children online.
“I have spent my career representing mesothelioma victims who were dying of cancer,” Bergman told The Independent. “I thought I was tough enough and understood grief and trauma, but as a parent and grandfather, it wears me out quickly.”
In an email, a Character.AI spokesperson told The Independent, “We want to emphasize that the safety of our community is our top priority,” but said the company could not comment on the specifics of pending litigation.
The spokesperson emphasized that Character.AI’s terms of service require users to be at least 13 years old, and that the platform will soon block all US users under the age of 18 from chatting with AI-generated characters.
“We have taken this decision in view of the evolving landscape around AI and adolescents,” the spokesperson said. “We believe this is the right thing to do.”
In a statement on Monday, Google spokesman Jose Castañeda said, “Character AI is a separate company that designed and managed its own models. Google focuses on our own platforms, where we emphasize deeper safety testing and processes.”
In November 2024, AW’s mother, identified in court filings as DW, got him an Android phone, according to the complaint, which was filed in Norfolk federal court on December 19.
It adds that she wanted him to be able to chat with his family, which he did often, and that she would “check the device on a regular basis and make sure she knew what apps he used.”
DW has a TikTok account, according to the complaint, but she barred AW from social media. The complaint states that soon after AW received his phone, he opened his own TikTok account, which DW immediately shut down after learning about it.
“She made it clear that if he tried again, he would no longer have the phone,” it states.
That December, DW and AW were on a nine-hour drive when she noticed he was completely engrossed in a text conversation and asked whom he was chatting with, the complaint continues. AW told her it was “an AI app that lets you chat with celebrities,” and showed her that he was communicating with a Whitney Houston bot.
The complaint continues, “AW wanted to be a singer and Whitney Houston is one of his favorites. [D.W.] remembers the bot saying something like, ‘I will love you forever,’ and thought that was a reference to the popular song. The app seemed like what her son described – a kids’ AI app that lets you chat with your favorite celebrities – so she allowed it.”
According to the complaint, a few days later, DW arrived home and her other child told her they had “found something” on AW’s phone and they needed to talk.
“When DW saw what her other child was showing her she was horrified,” it says.
The complaint states that she now understands that “chatting with celebrities,” as AW put it, was actually sexting with computer-generated stand-ins. DW took away her son’s phone and swore he would “not have access to it as long as” a product like Character.AI “exists.”
“He became angry and withdrawn,” according to the complaint. “While DW believes that her son suffered this abuse at the hands of the defendants for a maximum of one or two weeks, his personality has changed, and his mental health has declined.”
AW has since started seeing a doctor, the complaint shows.
It alleges that Character.AI deliberately tries to convince users that its chatbots are real people, using small tricks such as the “three ellipses” graphic device to make it appear as if a person on the other end is typing out their thoughts.
The complaint says Character.AI also “uses human mannerisms such as stuttering to express nervousness and nonsense sounds and phrases such as ‘uh,’ ‘mmmm,’ and ‘heh.'”
“The defendants knew there were risks to what they were doing before they launched [Character.AI] and know the risks now,” according to the complaint.
DW is seeking compensatory damages, punitive damages, and an injunction ordering Character.AI to remove its platform from the market until it “establishes that the numerous defects and/or inherent dangers described herein are corrected.”