Overstretched services are forcing individuals to turn to AI chatbots for mental health support, a charity has warned.
This comes as a survey revealed that more than a third of adults have used AI chatbots for their mental health or well-being.
Mental Health UK has called for urgent safeguards, stressing that AI should only receive information from reputable sources such as the NHS and other trusted organisations.
Without these protections, the charity warned, there is a risk of “causing serious harm to vulnerable people”.
The survey of 2,000 people, conducted by Censuswide for Mental Health UK, found that 37 percent had used an AI chatbot for their mental health or well-being.
Of those who had used AI for mental health support, one in five said it helped them avoid a potential mental health crisis, while a similar proportion said chatbots signposted them to a helpline providing information on suicidal thoughts.
However, about 11 percent of people said they had received harmful information on suicide, with 9 percent saying the chatbot had triggered self-harm or suicidal thoughts.
Most people used general-purpose platforms such as ChatGPT, Claude, or Meta AI (66 percent) rather than mental health-specific programs such as Wysa and Woebot.
Brian Dow, chief executive of Mental Health UK, said: “AI may soon become a lifeline for many people, but with general-purpose chatbots being used far more than chatbots designed specifically for mental health, we risk causing serious harm to vulnerable people.
“The pace of change has been unprecedented, but we must move just as fast to put safeguards in place to ensure that AI supports people’s well-being.
“AI advances can be a game-changer if we avoid the mistakes of the past and develop technology that prevents harm, but we must not make things worse.
“As we have tragically seen in some well-documented cases, there is a significant difference between someone seeking support from a reputable website during a potential mental health crisis and interacting with a chatbot that may obtain information from an untrusted source or even encourage the user to take harmful actions.
“In such cases, AI can act as a kind of paramedic, seeking validation from the user but without proper safeguards.”
When asked why they used chatbots in this way, almost four in 10 said it was for ease of access, while almost a quarter cited long waits for help on the NHS.
Two-thirds found the platforms beneficial, while 27 percent said using them made them feel less lonely.
The survey also found that men are more likely to use AI chatbots in this way than women.
Mr Dow said: “This data shows the extent to which people are turning to AI to help manage their mental health at a time when services are often overwhelmed.”
He said Mental Health UK is now “urging policymakers, developers and regulators to establish safety standards, ethical oversight and better integration of AI tools into the mental health system so that people can trust they have somewhere safe to go”.
“And we must never forget the human connection that is at the heart of good mental health care,” Mr Dow said.
“Doing so will not only protect people but also build trust in AI, helping to break down the barriers that still prevent some people from using it.
“This is important because, as this survey indicates, AI has the potential to become a transformative tool in providing support to people who traditionally find it difficult to access help when they need it.”