Graphic and violent videos depicting the torture and murder of women are being made with Google’s AI generators and shared on the Internet, raising concerns that the technology is fueling misogynistic abuse.
Dozens of such videos were uploaded to a YouTube account under the name WomanShotA.I. They showed women pleading for their lives before being shot, and the channel had been viewed nearly 200,000 times since June. It was removed only after tech reporting site 404 Media alerted the platform.
The videos were created by people using Google’s AI video generator VO3 and then shared on the Internet.
Some of the videos were titled “Caught girls shot in the head”, “Japanese schoolgirls shot in the breast”, “Tragic end of female reporter”.
Durham University law professor Claire McGlynn, a leading expert on violence against women and girls and gender equality, said of the video channel: “It immediately struck me that this is exactly the kind of thing that is likely to happen when you don’t invest in proper trust and safety before launching products.”

Condemning the industry’s rush to ship the technology, Professor McGlynn emphasized that Google and other AI developers should implement stronger safety measures and address problems before releasing their products.
She told The Independent: “Google says that this type of content does not come under their terms and conditions. Supposedly they do not allow content containing graphic violence and sexual violence etc., yet this could be produced.
“What this tells me is that they didn’t care enough about it, they didn’t have enough guardrails to stop it.”
She said the fact that this content could be shared on YouTube, a platform popular with young people, is worrying, as she fears it could normalize the behaviour.
YouTube, which is owned by Google, said in a statement that its generative AI follows user prompts, and that the channel was terminated for violating its terms of service.

Google’s generative AI policies state that users must not engage in sexually explicit, violent, hateful or harmful activities, or generate or distribute content that promotes or incites violence. Google did not respond to questions about how many videos of this nature were created using its AI.
Alexandra Deak, a researcher at a think tank focused on child online harms policy, believes the issue should be treated as a public health priority.
She said: “The fact that AI-generated violent content of this nature can be created and shared so easily is extremely worrying. For children and young people, exposure to this content can have lasting effects, including effects on mental health and well-being.”
Ms Deak said new threats continue to emerge online, and “the scale of exposure to violent, sexually explicit, or AI-generated harmful content is so large that it cannot be left to parents alone”.
Britain’s Internet Watch Foundation has identified 17 incidents of AI-generated child sexual abuse material on a chatbot website, which it has not named, since June.

It said users were able to interact with chatbots that simulated “disgusting” sexual scenarios involving children, some as young as seven years old.
Olga Juraz, a law professor and director of the Center for Protecting Women Online, said the videos “contribute hugely to perpetuating a culture of sexism, of misogyny, a world where gender stereotypes flourish and are propagated, where women are considered inferior, who can be beaten, who can be violated, whose dignity doesn’t really matter”.
Dr. Juraz said an increasing number of videos depicting extreme violence against women are being created and circulated online, often encouraging similar acts of violence on and off the Internet.
“It’s a big problem when we see AI-generated videos or images that depict sexual violence and sexual harassment against women,” she said.

A spokesperson for the Department for Science, Innovation and Technology said: “This Government is committed to doing everything possible to end violence against women and girls – including online gender-based violence – which is why we have launched an unprecedented mission to halve it within a decade.
“Social media sites, search engines and AI chatbots covered under the Online Safety Act must protect users from illegal content such as extreme sexual violence and content harmful to children, including AI-generated content.
“In addition to criminalizing the creation of non-consensual intimate images, we have also introduced laws that would make it illegal to possess, create or distribute AI tools designed to generate heinous child sexual abuse material.”
The Online Safety Act, which came into force in March this year after a long delay, is designed to make the Internet “safer”, especially for children. A central element of the Act is the new duties it imposes on social media firms and the powers it gives Ofcom to enforce them.
Earlier this year, Prime Minister Sir Keir Starmer vowed to make Britain an AI “superpower,” promising breakthroughs it would export to the rest of the world. Many advances have been made since then, especially in the health sector, where the technology has been rapidly adopted.
Last month, AI giant Nvidia announced a £2 billion investment in the UK AI sector as part of the “biggest tech deal ever” between the UK and the United States.