EU asks Facebook, TikTok to identify deepfakes ahead of June polls

The European Union on Tuesday called on Facebook, TikTok and other tech giants to use clear labels to combat deepfakes and other artificial intelligence-generated content ahead of Europe-wide polls in June.

The advice is part of a series of guidelines issued by the European Commission under a landmark content law to help digital giants combat electoral risks, including disinformation.

The EU executive has taken a series of measures to crack down on big tech companies, particularly when it comes to content moderation.

Its biggest tool is the Digital Services Act (DSA), under which the bloc designates 22 digital platforms as “very large,” including Instagram, Snapchat, YouTube and X.

Enthusiasm for artificial intelligence has been high since the launch of OpenAI’s ChatGPT in late 2022, but EU concerns about the technology’s potential harms are also growing.

Brussels is particularly concerned about the impact of Russian “manipulation” and “disinformation” on the elections to be held from June 6 to 9 across the 27 EU member states.

In new guidance, the commission said the largest platforms “should assess and mitigate specific risks associated with AI, for example by clearly labeling AI-generated content (such as deepfakes)”.

To reduce risks, the commission recommended that large platforms promote official information about the elections and “reduce the monetization and virality of content that threatens the integrity of the electoral process.”

Thierry Breton, the EU’s top technology enforcer, said: “With today’s guidance, we will make full use of all the tools available to us under the DSA to ensure that platforms comply with their obligations and are not misused to manipulate our elections, while at the same time guaranteeing freedom of speech.”

While the guidelines are not legally binding, platforms that do not follow them must explain what other “equally effective” measures they are taking to limit risks.

The EU can ask for more information, and regulators can investigate companies they believe are not fully compliant, which could lead to hefty fines.

“Trusted” information

Under the new guidance, the commission also said political ads “should be clearly labeled political” before stricter laws on the issue come into force in 2025.

It also urged platforms to put in place mechanisms to “reduce the impact of incidents that could have a significant impact on election results or turnout.”

The EU will conduct a “stress test” on relevant platforms in late April.

X has been under investigation since December for content moderation issues.

On March 14, the commission asked Facebook, Instagram, TikTok and four other platforms to provide more information on how they are addressing the risks artificial intelligence poses to elections.

Over the past few weeks, several companies, including Meta, have outlined their plans.

TikTok on Tuesday announced additional steps it is taking, including push notifications starting in April that will direct users to “trusted and authoritative” information about the June vote.

TikTok has about 142 million monthly active users in the EU and is increasingly used by young people as a source of political information.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)
