For those attending the trend-setting tech festival here, the scandal that erupted after Google’s Gemini chatbot concocted images of black and Asian Nazi soldiers was seen as a warning about the power artificial intelligence can give tech giants. Google CEO Sundar Pichai last month blasted his company’s Gemini AI app for “completely unacceptable” errors after missteps, such as the images of multiracial Nazi troops, forced Google to temporarily stop users from generating pictures of people.

Social media users ridiculed and criticized Google for historically inaccurate images, such as those showing black female U.S. senators from the 1800s, even though the first such senator was not elected until 1992.

“We really messed up on image generation,” Google co-founder Sergey Brin said at a recent artificial intelligence “hackathon,” adding that the company should have tested Gemini more thoroughly.

People interviewed at the South by Southwest arts and technology festival in Austin said Gemini’s failure highlighted the outsized power that a handful of companies have over artificial intelligence platforms that promise to change the way people live and work.

“Essentially, it’s too ‘woke,’” said lawyer and tech entrepreneur Joshua Weaver, meaning Google has gone too far when it comes to embodying inclusion and diversity.

Charlie Burgoyne, CEO of Valkyrie Applied Science Laboratory in Texas, said Google quickly corrected the error, but the underlying problem remains.

He compared Google’s fix to Gemini to putting a Band-Aid on a gunshot wound.

Weaver noted that while Google has long had time to perfect its product, it’s now in an AI race with Microsoft, OpenAI, Anthropic and others, adding, “They’re moving faster than they know how to move.”

Mistakes made in promoting cultural sensitivity are hot-button issues, especially given America’s tense political divisions, exacerbated by Elon Musk’s X platform (formerly Twitter).

“People on Twitter are very happy to celebrate anything embarrassing that happens in tech,” Weaver said, adding that the reaction to the Nazi gaffe was “a little over the top.”

However, he insisted the incident did raise questions about the degree of control those using AI tools have over information.

Weaver said that over the next decade, the amount of information (or misinformation) generated by AI may dwarf the amount of information generated by humans, meaning those who control AI security measures will have an outsized impact on the world.

Bias in, bias out

Karen Palmer, an award-winning mixed reality creator at Interactive Films Ltd., said she could imagine a future in which someone gets into a self-driving taxi and, “if the AI scans you and thinks that there are any outstanding violations against you … you will be taken into the local police station,” rather than to your intended destination.

Artificial intelligence is trained on large amounts of data and can be used to handle an increasing number of tasks, from image or audio generation to determining who gets a loan or whether a medical scan detects cancer.

But this data comes from a world rife with cultural bias, disinformation, and social inequality (not to mention online content that may include casual chats between friends or intentionally exaggerated and provocative posts), and AI models can echo these flaws.

With Gemini, Google engineers tried to rebalance the algorithms to deliver results that better reflect human diversity.

But the effort backfired.

“It’s really tricky and nuanced to figure out where the bias is and how it gets included,” said technology lawyer Alex Shahrestani, managing partner at Promise Legal, a law firm that specializes in serving technology companies.

He and others argue that even well-intentioned engineers involved in AI training can’t help but bring their own life experiences and subconscious biases into the process.

Valkyrie’s Burgoyne also criticized big tech companies for hiding the inner workings of generative artificial intelligence in “black boxes,” so users can’t detect any hidden biases.

“The capabilities of the output are far beyond our understanding of the methods,” he said.

Experts and activists are calling for more diversity in the teams that create artificial intelligence and related tools, and more transparency into how they work — especially when algorithms override user requests to “improve” results.

Jason Lewis of the Indigenous Futures Resource Center and related groups said one challenge is how to appropriately build in the perspectives of the world’s many and diverse communities.

At Indigenous AI, Lewis works with far-flung indigenous communities to design algorithms that use their data ethically while reflecting their views of the world, something he says he does not always see in the “arrogance” of big tech leaders.

He told a group that his own work “stands in stark contrast to the rhetoric in Silicon Valley, where there’s this top-down ‘Oh, we’re doing this because we’re going to benefit all of humanity’ bullshit, right?”
