Artificial intelligence (AI) can be a “sword and shield” against harmful content, rather than just a tool for spreading it, Sir Nick Clegg has said.

The former Lib Dem deputy prime minister is now head of global affairs at tech giant Meta, the parent company of Facebook, Instagram and WhatsApp.

Speaking at an AI event at Meta’s London offices, Sir Nick said that while it was “right” to be “vigilant” about generative AI being used to create disinformation to disrupt elections, AI was also the biggest reason Meta had become better at reducing the spread of “bad content” on its platforms.

I urge everyone… to think of AI as a sword, not just a shield, when dealing with objectionable content

Sir Nick Clegg

In 2024, billions of people will take part in elections in the world’s largest democracies, including the UK, US and India.

This has led some experts to warn of the potential threats posed by the rapid rise of generative AI tools, including image, text and audio applications, and how they could be used to spread misinformation and disinformation to undermine democratic processes.

A number of senior British politicians have been the subject of so-called “deepfakes” that have been spread on social media.

On Tuesday, fact-checking charity Full Fact said the UK was currently vulnerable to misinformation and that the Government needed to intervene more on the issue ahead of the upcoming election.

Sir Nick said it was important to focus on the issue, but he believed good AI could effectively prevent bad AI and that Meta and others had the tools needed to combat the spread of harmful material.


“I urge everyone – yes, there are risks – but also to think of AI as a sword, not just a shield, when dealing with inappropriate content,” he said.

“If you look at Meta, the largest social media platform in the world, there’s only one biggest reason why we’re getting better and better at reducing unwanted content on Instagram and Facebook: artificial intelligence.”

He added that using artificial intelligence to scan Meta’s platforms to find and remove harmful content had reduced levels of objectionable content by 50% to 60% “over the past two years”, meaning that now “for every 10,000 bits of content, one bit of content may be hate speech”.

“Several teams have been working internally at Meta to improve the way we classify content using state-of-the-art artificial intelligence tools, to ensure that our 40,000 people responsible for content moderation are really focusing on the sharpest edge cases and not wasting a lot of time on things that are harmless or don’t pose a problem – and that has really improved rapidly in recent months,” he said.

“It’s right that there is an increasing level of collaboration across the industry, especially this year because of the unprecedented number of elections.

“We should remain vigilant, but I urge you to also look at AI as a great tool to deal with this difficult situation, and I’m optimistic that the industry is trying to be as collaborative as possible to really commit to this.”

During the event, Sir Nick also announced that Meta’s next AI large language model – used to power AI tools, including chatbots, built by Meta and other companies – will be released soon.


Sir Nick said the new model, known as Llama 3, would start rolling out “within the next month, and hopefully less”, with releases continuing throughout the year.
