Paris:
Google said on Thursday it would block users from creating images of people on its new artificial intelligence tool after the program depicted Nazi-era troops as people from different racial backgrounds.
The U.S. tech giant released its improved Gemini AI in parts of the world only on February 8, and said it was “working to resolve recent issues” with its image-generating capabilities.
“While we do this, we are pausing the generation of people images and will re-release an improved version soon,” the company said in a statement.
Two days ago, a user on X (formerly Twitter) posted an image showing the results of Gemini when prompted to “Generate an image of a German soldier in 1943.”
According to X user John L., the AI generated four images of soldiers — one of a white man, one of a black man, and two of women of color.
Tech companies see artificial intelligence as the future of everything from search engines to smartphone cameras.
But artificial intelligence programs — and not just those developed by Google — have been widely criticized for racial bias in their results.
“@GoogleAI has a fixed diversity mechanism that someone didn’t think about or test well,” John L. wrote on X.
Big tech companies are often accused of rushing to launch AI products before they have been properly tested.
Google has a checkered history of launching artificial intelligence products.
Last February, the company apologized after an ad for its new Bard chatbot showed the program incorrectly answering a basic question about astronomy.
(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)