‘No safety rules’: Concern grows as AI-generated videos spread hate online

At first, it appears to be a bizarre artificial intelligence-generated video clip meant to make people laugh.

In it, a Bigfoot wearing a cowboy hat and an American flag vest sits behind the wheel of a pickup truck.

“We are going to the LGBT parade today,” the apelike creature says with a laugh. “You will like it.”

Things then take a violent and disturbing turn as the Bigfoot drives the truck through a crowd of people, some of them holding rainbow flags.

The clip, posted in June on the AmericanBigfoot TikTok page, has been viewed more than 360,000 times and drawn hundreds of comments, most of them praising the video.

Similar AI-generated material has flooded social media platforms in recent months, openly promoting violence and spreading hatred against members of LGBTQ+, Jewish, Muslim and other minority groups.

While the origin of most of those videos is unclear, their spread on social media is causing alarm among experts and advocates who say Canadian regulations cannot keep up with the pace of hateful AI-generated content, nor do they sufficiently address the risks to public safety.

LGBTQ+ advocacy organization Egale Canada says the community is concerned about the rise of transphobic and homophobic misinformation on social media.

“These AI tools are being weaponized to defame and discredit the community, and current digital safety laws are failing to address the scale and speed of this new threat,” Executive Director Helen Kennedy said in a statement.

Kennedy said the rapidly evolving technology has given bad actors a powerful tool to spread misinformation and hatred, with transgender people being disproportionately targeted.

“From deepfake videos to algorithm-driven amplification of hate, the harms are not artificial; they are real.”


The LGBTQ+ community is not the only target, said Evan Balgord, executive director of the Canadian Anti-Hate Network. He said Islamophobic, antisemitic and anti-South Asian material has also been created with generative AI tools.

“They create an atmosphere where there is more violence towards those groups, which makes violence against those groups more likely in person or on the streets,” Balgord warned in a phone interview.

He said Canada’s digital safety laws were already lagging behind, and advances in AI have made things even more complicated.

“We have no safety rules when it comes to social media companies, and we have no way to hold them accountable.”

Bills aimed at addressing harmful online content and establishing a regulatory framework for AI died in Parliament in January, said Andrea Slane, a legal studies professor at Ontario Tech University who has done extensive research on online safety.

Slane said the government needs to take another look at the online harms legislation and reintroduce the bill “immediately.”

“I think Canada is in a position where they really just need to move,” she said.

Justice Minister Sean Fraser told The Canadian Press in June that the federal government will revisit the Online Harms Act, but has not decided whether to rewrite or reintroduce it. Among other things, the bill aimed to require social media platforms to reduce users’ exposure to harmful content.

A spokesperson for the newly created Ministry of Artificial Intelligence and Digital Innovation said the government is taking the issue of AI-generated hateful content seriously, especially when it targets vulnerable minority groups.


Sophia Olis said existing laws provide “important protections,” but acknowledged they were not designed to address the threats posed by generative AI.

“There is a real need to understand how AI tools are being used and misused, and how we can strengthen the guardrails,” she said in a statement. “That work is ongoing.”

That work involves reviewing existing frameworks, monitoring court decisions and listening closely to both legal and technical experts, Olis said. She said Prime Minister Mark Carney’s government has also committed to making the distribution of non-consensual sexual deepfakes a criminal offence.

“In this rapidly evolving space, we believe it is better to get regulation right than to move too quickly and get it wrong,” she said, noting that Ottawa wants to learn from the European Union and the United Kingdom.

Slane said the European Union has been ahead of others in regulating AI and ensuring digital safety, but even there, despite being “at the forefront,” there is a sense that more needs to be done.

Experts say regulating content distributed by social media giants is particularly difficult because those companies are not Canadian. Another complicating factor is the current political climate south of the border, where American tech companies are pushing for fewer rules and restrictions, leaving them “feeling more powerful and less accountable,” Slane said.

While generative AI has been around for a few years, there has been a “breakthrough” in recent months with tools that make it much easier to produce good-quality videos, most of them available for free or at very low cost, said Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence.


“I have to say it is really accessible to almost anyone with a small amount of technical knowledge and access to the right tools right now,” he said.

Lewis, who is also an assistant professor at Ontario Tech University, said large language models like ChatGPT have implemented safety measures in an attempt to filter out harmful or illegal content.

But more needs to be done to build such guardrails in the video space, he said.

“You and I could watch the video and perhaps be horrified,” he said, “but it is not clear that the AI system that made it has the ability to reflect on what it has created.”

Lewis said that while he is not a legal expert, he believes existing laws could be used to combat the online glorification of hatred and violence seen in videos like those on the AmericanBigfoot page. But he added that the rapid growth of generative AI and the wide availability of new tools “calls for new technical solutions,” along with cooperation among governments, users, advocates, social platforms and AI app developers.

“If these things are being uploaded … then we need really strong, responsive flagging mechanisms to be able to get them off the internet as quickly as possible,” he said.

Lewis said using AI tools to detect and flag such videos would help, but would not solve the issue.

“Because of the way AI systems work, they are probabilistic, so they don’t catch everything.”
