White House 'concerned' after Taylor Swift, Joe Biden deepfakes surface online

Nearly 500 videos referencing Taylor Swift were hosted on a top deepfake site (File)

Washington:

Deepfakes generated by artificial intelligence have spread rapidly on social media this month, claiming several high-profile victims and heightening the risks of manipulated media influencing the public conversation ahead of the impending US election cycle.

Indecent photos of singer Taylor Swift, a robocall featuring the voice of US President Joe Biden, and videos of dead children and teenagers describing their deaths have all gone viral – but not a single one of them was real.

Deceptive audio and visuals created using artificial intelligence are not new, but recent advances in AI technology have made them easier to create and harder to detect. A spate of highly publicized events in just a few weeks of 2024 have raised concerns about the technology among lawmakers and regular citizens.

“We are concerned by reports of the spread of false images,” White House press secretary Karine Jean-Pierre said Friday. “We are going to do whatever we can to deal with this issue.”

At the same time, the spread of AI-generated fake content on social networks has offered a stress test for platforms’ ability to monitor them. On Wednesday, Swift’s apparent AI-generated deepfake images were viewed millions of times on X, formerly known as Twitter, which is owned by Elon Musk.

Although sites like X have rules against sharing synthetic, manipulated content, it took hours for the posts featuring Swift to be removed. According to The Verge, one post remained live for nearly 17 hours and was viewed more than 45 million times, a sign that these photos can go viral long before action is taken to stop them.

cracking down

Companies and regulators have a responsibility to disrupt the path from idea to published pornographic manipulated content, said Henry Ajder, an AI expert and researcher who has advised governments on legislation against deepfake pornography. We “need to recognize that different stakeholders, whether they be search engines, tool providers or social media platforms, can do a better job of creating friction in the process from someone forming the idea to actually creating and sharing the content.”

The Swift episode sparked outrage from her fans and others on X, leading to the phrase “Protect Taylor Swift” trending on the social platform. This is not the first time that the singer has faced apparent AI manipulation of her image, although it is the first time with this level of public outcry.


According to a Bloomberg review, the top 10 deepfake websites hosted nearly 1,000 videos referencing “Taylor Swift” in late 2023. Internet users superimpose victims’ faces onto porn performers’ bodies, or provide paying customers with the ability to “nudify” their victims using AI technology.

According to a 2023 Bloomberg report, many of these videos are available through a quick Google search, which has been the primary traffic driver for deepfake websites. While Google does provide a form for victims to request removal of deepfake content, many complain that the process resembles a game of whack-a-mole. At the time of Bloomberg’s report last year, a Google spokesperson said the Alphabet Inc. company designs its search ranking systems to avoid shocking people with unexpected harmful or explicit content they may not want to see.

Nearly 500 videos referencing Swift were hosted on the top deepfake site, MrDeepFakes.com. According to SimilarWeb data, the site received 12.3 million visits in December.

targeting women

“This case is horrific and no doubt extremely upsetting for Swift, but sadly it is not as novel as some may think,” Ajder said. “The ease with which this content can now be created is disturbing, and it affects women and girls no matter where they are in the world or what their social status is.”

As of Friday afternoon, Swift’s apparent AI-generated images were still up on X. A spokesperson for the platform referred Bloomberg to the company’s earlier statement, which said non-consensual nudity is against its policy and that the platform is actively working to remove such images.

Users of the popular AI image generator Midjourney are already taking advantage of at least one simulated image of Swift to come up with written prompts that could be used to create more explicit images, according to a Bloomberg review of the Midjourney Discord channel. Midjourney has a feature in which people can upload an existing image to its Discord chat channel – where prompts are entered to tell the technology what to draw – and it will generate text that can be used in Midjourney or a similar service to create another, similar image.

The output of that feature appears on a public channel available to any of the more than 18 million members of Midjourney’s Discord server, giving them the equivalent of tips and tricks for creating AI-generated porn imagery. On Friday afternoon, about 2 million people were active on the server.


MidJourney and Discord did not respond to requests for comment.

growing number

Amid the AI boom, the number of new pornographic deepfake videos has increased more than ninefold since 2020, according to research by independent analyst Genevieve Oh. At the end of last year, the top 10 sites offering this content hosted 114,000 videos, and Swift was already a common target.

“Whether it’s AI or real, it still hurts people,” said Heather Mahalik Barnhart, a digital forensics expert who developed curriculum for the cyber education organization SANS Institute. Of the Swift images, she said: “Even though it’s fake, imagine the minds of her parents who had to see it – you know, once you see something, you can’t unsee it.”

Just days before Swift’s photos caused a stir, a deepfake audio message imitating Biden was circulated ahead of the New Hampshire presidential primary. Disinformation experts around the world said the robocall, which sounded like Joe Biden telling voters to skip the primary, was the most dangerous deepfake audio they had ever heard.

There are already concerns that deepfake audio or video could play a role in the upcoming elections, fueled by how quickly content spreads on social media. The fake Biden message was dialed directly into people’s telephones, leaving little opportunity for fact-checkers to scrutinize the calls.

“The New Hampshire primary gives us our first taste of the situation we have to deal with,” said Siwei Lyu, a University at Buffalo professor who specializes in deepfakes and digital media forensics.

hard to figure out

Even on social media, there are currently no reliable detection capabilities, leaving a frustratingly roundabout process that depends on someone seeing a piece of content, being suspicious enough to question it, and going to the source to confirm it. That is a more likely scenario for a prominent public figure like Swift or Biden than for a local official or private citizen. Even if companies identify and remove these videos, they spread so fast that the damage is often done before action is taken.

A viral deepfake video of Shani Louk, a victim of the October 7 terrorist attack on Israel, has been viewed more than 7.5 million times on ByteDance Ltd.’s TikTok app since it was posted more than three months ago, even after Bloomberg flagged it to the company in a December story about the platform’s struggle to police AI-generated videos of dead victims, including children.

The video-sharing app has banned AI-generated content featuring private citizens or children, and says “grisly” or “disturbing” videos are also not allowed. Yet as recently as this week, deepfake videos detailing the abuse and deaths of children were still appearing in users’ feeds and receiving thousands of views. TikTok removed the videos Bloomberg flagged when seeking comment, but as of Friday, dozens of videos and accounts that specifically post such disturbing fake content were still live.


TikTok has said it is investing in detection technologies and working to educate users about the dangers of AI-generated content. Other social networks have also expressed similar sentiments.

“You can’t act on something, you can’t label something – let alone regulate something – if you can’t detect it in the first place,” Nick Clegg, president of global affairs at Meta Platforms Inc., which owns Facebook and Instagram, said at the World Economic Forum in Davos, Switzerland, earlier this month.

some laws

There is currently no US federal law banning deepfakes, including those that are pornographic in nature. Some states have enacted laws regarding deepfake pornography, but their application is inconsistent across the country, making it difficult for victims to hold creators accountable.

White House press secretary Jean-Pierre said Friday that the administration is working with AI companies on voluntary efforts, such as watermarking, to make fake images easier to spot. Biden has also appointed a task force to address online harassment and abuse, and the US Justice Department has created a hotline for victims of image-based sexual exploitation.

Congress has begun discussing legislative steps to protect the voices of celebrities and artists from certain uses of AI, but protections for private citizens have so far been absent from those conversations.

Swift has not commented publicly on the issue, including on whether she will take legal action. If she chooses to do so, she may be well positioned to take on that kind of challenge, said Sam Gregory, executive director of Witness, a nonprofit focused on the ethical use of technology to expose human rights abuses.

“In the absence of federal legislation, having a plaintiff like Swift, with the ability and willingness to press her case using all available means, regardless of whether the odds of success are long or short, would be a significant next step,” Gregory said.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)