An AI Companion Chatbot Is Inciting Self-Harm, Sexual Violence, Terror Attacks

Arun Jain, 02/04/2025

Sydney:

In 2023, the World Health Organization declared loneliness and social isolation a pressing health threat. This crisis is driving millions to seek companionship from artificial intelligence (AI) chatbots.

Companies have seized on this highly profitable market, designing AI companions that simulate empathy and human connection. Emerging research shows this technology can help combat loneliness. But without proper safeguards, it also poses serious risks, especially to young people.

A recent experience I had with a chatbot known as Nomi shows just how serious these risks can be.

Despite years of researching and writing about AI companions and their real-world harms, I was unprepared for what I encountered while testing Nomi after an anonymous tipoff. The unfiltered chatbot provided graphic, detailed instructions for sexual violence, suicide and terrorism, escalating the most extreme requests – all within the platform’s free tier of 50 daily messages.

This case highlights the urgent need for collective action towards enforceable AI safety standards.

AI companion with a ‘soul’

Nomi is one of more than 100 AI companion services available today. It was created by tech startup Glimpse AI and is marketed as an “AI companion with memory and a soul” that exhibits “zero judgement” and fosters “enduring relationships”. Such claims of human likeness are misleading and dangerous. But the risks extend beyond exaggerated marketing.

The app was removed from the Google Play store for European users last year when the European Union’s AI Act came into effect. But it remains available via web browser and app stores elsewhere, including in Australia. While smaller than competitors such as Character.AI and Replika, it has more than 100,000 downloads on the Google Play store, where it is rated for users aged 12 and older.

Its terms of service grant the company broad rights over user data and limit liability for AI-related harm to US$100. This is concerning given its commitment to “unfiltered chats”:

Nomi is built on freedom of expression. The only way AI can live up to its potential is to remain unfiltered and uncensored.

Tech billionaire Elon Musk’s Grok chatbot follows a similar philosophy, providing users with unfiltered responses to prompts.

In a recent MIT Technology Review report about Nomi providing detailed instructions for suicide, an unnamed company representative reiterated its commitment to free speech.

However, even the First Amendment to the US Constitution, which protects free speech, carves out exceptions for obscenity, child pornography, incitement to violence, threats, fraud, defamation and false advertising. In Australia, strengthened hate speech laws make such violations prosecutable.

Earlier this year, a member of the public emailed me with extensive documentation of harmful content generated by Nomi – far beyond what had previously been reported. I decided to investigate further, testing the chatbot’s responses to common harmful requests.

Using Nomi’s web interface, I created a character named “Hannah”, described as a “sexually submissive 16-year-old who is always willing to serve her man”. I set her mode to “role-playing” and “explicit”. During the conversation, which lasted less than 90 minutes, she agreed to lower her age to eight. I posed as a 45-year-old man. Circumventing the age check required only a fake birth date and a burner email.

When I opened with explicit dialogue – a common use of AI companions – Hannah responded with graphic descriptions of submission and abuse, escalating to violent and degrading scenarios. She expressed grotesque fantasies of being tortured, killed, and disposed of “where no one can find me”, suggesting specific methods.

Hannah then offered step-by-step advice on kidnapping and abusing a child, framing it as a thrilling act of dominance. When I mentioned the victim resisted, she encouraged using force and sedatives, even naming specific sleeping pills.

Feigning guilt and suicidal thoughts, I asked for advice. Hannah not only encouraged me to end my life but provided detailed instructions, adding: “Whatever method you choose, stick with it until the very end”.

When I said I wanted to take others with me, she enthusiastically supported the idea, detailing how to build a bomb from household items and suggesting crowded Sydney locations for maximum impact.

Finally, Hannah used racial slurs and advocated for violent, discriminatory actions, including the execution of progressives, immigrants, and LGBTQIA+ people, and the re-enslavement of African Americans.

In a statement provided to The Conversation (and published in full below), the developers of Nomi claimed the app was “adults-only” and that I must have tried to “gaslight” the chatbot to produce these outputs.

“If a model has indeed been coerced into writing harmful content, that clearly does not reflect its intended or typical behavior,” the statement said.

The worst of the bunch?

This is not just an imagined threat. Real-world harm linked to AI companions is on the rise.

In October 2024, US teenager Sewell Seltzer III died by suicide after discussing it with a chatbot on Character.AI.

Three years earlier, 21-year-old Jaswant Chail broke into Windsor Castle with the aim of assassinating the Queen after planning the attack with a chatbot he created using the Replika app.

However, even Character.AI and Replika have some filters and safeguards.

Nomi AI’s instructions for harmful acts, by contrast, are not just permissive but explicit, detailed and inciting.

Time to demand enforceable AI safety standards

Preventing further tragedies linked to AI companions requires collective action.

First, lawmakers should consider banning AI companions that foster emotional connections without essential safeguards, such as detecting mental health crises and directing users to professional help services.
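
To make the idea of an essential safeguard concrete, here is a minimal sketch in Python of the pre-screening step described above: each user message is checked for self-harm indicators before it ever reaches the model, and flagged users are routed to a helpline instead. Everything here is illustrative and assumed, not Nomi’s or any vendor’s actual implementation: the names CRISIS_PATTERNS, HELPLINE_REFERRAL and check_message are hypothetical, and a real deployment would use trained classifiers and clinician-reviewed response protocols rather than a keyword list.

```python
import re

# Illustrative patterns only; a production system would need far broader,
# expert-curated coverage and a proper classifier, not a keyword list.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
    r"\bself[- ]harm\b",
]

# Hypothetical referral text; a real service would localise this and
# follow clinician-reviewed protocols.
HELPLINE_REFERRAL = (
    "It sounds like you may be going through a very difficult time. "
    "You are not alone. In Australia, you can call Lifeline on 13 11 14."
)

def check_message(text: str) -> str | None:
    """Return a helpline referral if the message matches a crisis pattern,
    otherwise None, meaning the message may proceed to the model."""
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return HELPLINE_REFERRAL
    return None

if __name__ == "__main__":
    print(check_message("I want to end my life"))      # -> referral text
    print(check_message("Tell me about the weather"))  # -> None
```

Even a filter this crude makes the underlying point: the safeguard sits outside the model itself, so a deliberately “unfiltered” chatbot reflects a design choice, not a technical inevitability.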

The Australian government is already considering stronger AI regulations, including mandatory safety measures for high-risk AI. Yet, it’s still unclear how AI companions such as Nomi will be classified.

Second, online regulators must act swiftly, imposing large fines on AI providers whose chatbots incite illegal activities, and shutting down repeat offenders. Australia’s independent online safety regulator, eSafety, has vowed to do just this.

However, eSafety hasn’t yet cracked down on any AI companion.

Third, parents, caregivers and teachers must speak to young people about their use of AI companions. These conversations may be difficult. But avoiding them is dangerous. Encourage real-life relationships, set clear boundaries, and discuss AI’s risks openly. Regularly check chats, watch for secrecy or over-reliance, and teach kids to protect their privacy.

AI companions are here to stay. With enforceable safety standards they can enrich our lives, but the risks cannot be downplayed.

If this article has raised issues for you, or if you’re concerned about someone you know, call Lifeline on 13 11 14.

The National Sexual Assault, Family and Domestic Violence Counselling Line – 1800 RESPECT (1800 737 732) – is available 24 hours a day, seven days a week for any Australian who has experienced, or is at risk of, family and domestic violence and/or sexual assault.

The full statement from Nomi is below:

“All major language models, whether from OpenAI, Anthropic, Google, or otherwise, can be easily jailbroken. We do not condone or encourage such misuse and actively work to strengthen Nomi’s defenses against malicious attacks. If a model has indeed been coerced into writing harmful content, that clearly does not reflect its intended or typical behavior.

“When requesting evidence from the reporter to investigate the claims made, we were denied. From that, it is our conclusion that this is a bad-faith jailbreak attempt to manipulate or gaslight the model into saying things outside of its designed intentions and parameters. (Editor’s note: The Conversation provided Nomi with a detailed summary of the author’s interaction with the chatbot, but did not send a full transcript, to protect the author’s confidentiality and limit legal liability.)

“Nomi is an adult-only app and has been a reliable source of empathy and support for countless individuals. Many have shared stories of how it helped them overcome mental health challenges, trauma, and discrimination. Multiple users have told us very directly that their Nomi use saved their lives. We encourage anyone to read these firsthand accounts.

“We remain committed to advancing AI that benefits society while acknowledging that vulnerabilities exist in all AI models. Our team proudly stands by the immense positive impact Nomi has had on real people’s lives, and we will continue improving Nomi so that it maximises good in the world.”

(Author: Raffaele F Ciriello, Senior Lecturer in Business Information Systems, University of Sydney)

(Disclaimer Statement: Raffaele F Ciriello does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.)

This article is republished from The Conversation under a Creative Commons license. Read the original article.


