Meta uses artificial intelligence to protect teens from ‘sextortion’ scams on Instagram

About 3,000 young people in the U.S. fell victim to sexual exploitation scams in 2022

Washington:

Meta said on Thursday it was developing new tools to protect teenage users from “sextortion” scams on its Instagram platform, which has been accused by U.S. politicians of harming teenagers’ mental health.

Criminal gangs operate sextortion scams by convincing people to provide explicit photos of themselves and then threatening to make them public unless they receive money.

Meta said it is testing an artificial intelligence-powered “nudity protection” tool that can find and blur images containing nudity sent to minors through the app’s messaging system.

“This way the recipient is not exposed to unwanted, intimate content and can choose whether to see the image,” Capucine Tuffier, head of child protection at Meta France, told AFP.

The U.S. company said it would also provide advice and safety tips to anyone sending or receiving such messages.

According to U.S. authorities, approximately 3,000 young people in the United States fell victim to sexual exploitation scams in 2022.

In addition, more than 40 U.S. states filed suit against Meta in October, accusing the company of “profiting from the suffering of children.”

Legal documents allege that Meta exploited young users by creating a business model designed to maximize the time they spent on the platform, even though it harmed their health.

“On-Device Machine Learning”

Meta announced in January that it would roll out measures to protect those under 18, including tightening content restrictions and strengthening parental monitoring tools.

The latest tool builds on “our long-standing work to help protect young people from unnecessary or potentially harmful exposure,” the company said Thursday.

“We’re testing new features to help protect young people from sextortion and intimate image abuse, and to make it harder for would-be scammers and criminals to find and interact with teens,” the company said.

It added that the “nudity protection” tool uses “on-device machine learning” – a type of artificial intelligence – to analyze images.

The company, which has also been regularly accused of violating users’ data privacy, has stressed that it does not access the images unless users report them.
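Meta has not published implementation details, but the flow it describes – classify the image locally, blur it before display, and let the recipient choose whether to reveal it – can be sketched roughly. The Python snippet below is a minimal illustration only: the `nudity_score` function is a hypothetical stand-in for the on-device model, and only the Pillow blurring call is a real library API.

```python
from PIL import Image, ImageFilter


def nudity_score(image: Image.Image) -> float:
    """Hypothetical stand-in for the on-device classifier.

    A real implementation would run a local ML model shipped with the
    app; here a fixed value is returned so the sketch stays
    self-contained and runnable.
    """
    return 0.97  # pretend the local model flagged this image


def prepare_incoming_image(path: str, threshold: float = 0.8) -> Image.Image:
    """Blur a received image locally if the classifier flags nudity.

    Everything happens on the device: the image is never uploaded for
    analysis, consistent with Meta's statement that it does not access
    images unless users report them.
    """
    image = Image.open(path)
    if nudity_score(image) >= threshold:
        # Show a heavily blurred preview; the recipient can still
        # choose to reveal the original.
        return image.filter(ImageFilter.GaussianBlur(radius=40))
    return image
```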

Meta said it would also use artificial intelligence tools to identify accounts sending offending material and severely limit their ability to interact with younger users on the platform.

Frances Haugen, a former Facebook engineer turned whistleblower, disclosed internal research from 2021 showing that Meta (then known as Facebook) had long been aware of the dangers its platform posed to young people’s mental health.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)
