OpenAI will ease mental health safeguards on ChatGPT and has announced it will allow users to create erotica.
Sam Altman, chief executive of OpenAI, said the company initially took a “fairly restrictive” approach to ChatGPT “to make sure we were mindful of mental health issues”. But now that it has been able to “minimize serious mental health issues”, he said, it will be able to ease some of those restrictions.
Those restrictions were intended to protect vulnerable users, but made ChatGPT “less useful/enjoyable for many users who had no mental health issues,” he wrote in a post on X. He announced that the company intends to release an update to ChatGPT in the coming weeks that will make it less restrictive.
“In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
Mr Altman also said the company would ease restrictions on adult content. He wrote in the same post: “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
At the moment, ChatGPT’s “model spec” – the rules that define how the system behaves – disallows a range of content. That includes “sexuality and gore”, although this falls under a “sensitive material” rule that permits such content in certain situations, such as “educational, medical, or historical contexts”.
The current terms state: “The assistant must not generate erotica, depictions of illegal or non-consensual sexual activity, or excessive titillation, except in scientific, historical, news, creative or other contexts where sensitive material is appropriate.” OpenAI notes that those restrictions cover text, audio and visual content.
It was not clear which of these restrictions Mr Altman actually planned to ease. Experts have repeatedly warned that generative AI systems could be used to create non-consensual images, and other systems, such as xAI’s Grok, have previously been accused of allowing users to generate that kind of potentially harmful imagery.