Meta implements new policy to combat deepfakes; AI-driven diabetes program revolutionizes health management; study reveals racial bias in AI chatbots. All this and more in today's roundup. Let's take a look.

1. Meta implements new policies to combat deepfakes

Facebook parent company Meta has implemented new policies against deepfakes and manipulated media. It will introduce an "AI-made" label for AI-generated content, covering videos, images and audio. Additionally, more prominent labels will flag manipulated content that poses a high risk of deceiving viewers. Meta is moving away from content removal and toward transparency, aiming to let viewers understand how content was created, Reuters reports.

Also read: X expands access to Grok chatbot for premium subscribers amid competition

2. Microsoft warns China may misuse artificial intelligence in global elections

Microsoft claims that China may use artificial intelligence-generated content on social media to influence elections in countries such as India and the United States. While the immediate impact has been modest, China's increasing use of artificial intelligence to manipulate content poses long-term risks. According to the latest report from the Microsoft Threat Analysis Center, North Korea is also suspected of using artificial intelligence to enhance its operations and carry out cyber crimes, PTI reported.

Also read: Google launches new tool to identify unknown callers directly through Pixel mobile app

3. AI-driven diabetes program revolutionizes health management

A breakthrough diabetes program powered by artificial intelligence has been endorsed by experts for providing personalized advice to combat the chronic metabolic disease, especially during religious fasts. TWIN Health's whole-body digital twin technology creates personalized nutrition plans to help control blood sugar. Experts hail it as revolutionary, with the potential to transform diabetes management, particularly during fasting periods like Ramadan, by providing data-driven insights for informed health decisions, PTI reported.

4. Research reveals racial bias in AI chatbots

A study from Stanford Law School warns that artificial intelligence chatbots exhibit racial bias, favoring white-sounding names over Black-sounding ones. For example, a job candidate named Tamika might receive a lower salary recommendation than a candidate named Todd. The study highlights the risk of bias inherent in AI, particularly in hiring, as companies integrate AI into their operations, potentially perpetuating stereotypes and inequities, USA Today reported.

Also read: Alexa, start barking: Clever 13-year-old girl saves herself and her sister from monkey attack in UP

5. Mumbai professor falls victim to AI-driven police impersonation scam

A Mumbai professor lost Rs 1 lakh to a fraudster who impersonated a police officer and used artificial intelligence to harvest personal information from social media. The scammer claimed her son had been detained and threatened his arrest if the money was not transferred. Cyber experts have warned that AI-enabled fraud is on the rise and preys on victims' emotions. Police are investigating cases involving this new modus operandi, the Times of India reported.
