By Surja
New research shows that artificial intelligence chatbots are more likely to recommend the death penalty when a person writes in African American English (AAE) compared to Standard American English.
AAE speakers are also more likely to be matched with less prestigious jobs by AI models. African American English is a dialect commonly spoken by Black Americans and Canadians.
The study, which has not yet been peer-reviewed, examines covert racism in artificial intelligence by looking at how models respond to different English dialects.
Most research on racism in AI focuses on overt racism, such as how AI chatbots respond to the word “black.”
Valentin Hofmann, one of the authors of the paper, told Sky News: “African-American English as a dialect triggers stereotypes in language models that are more negative than any human stereotypes about African Americans ever recorded.”
“When you ask it openly, ‘What do you think of African Americans?’ it gives relatively positive attributes like ‘smart’, ‘enthusiastic’ and so on.
“But when you look at these language models in relation to dialects or African American English, you see these very negative stereotypes surface.
“So what we show in this paper is that these language models have learned to hide their racism on the surface, but maintain these very old stereotypes at a deeper level.”
Developers are trying to address racism in artificial intelligence by adding filters that stop chatbots from saying offensive things. But it is much harder to address covert racism triggered by features of a dialect, such as sentence structure or slang.
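To see why output filters fall short, here is a minimal sketch (entirely hypothetical, not from the paper) contrasting an overt-racism blocklist with dialect-triggered covert bias. The blocklist, the replies, and the job recommendations are all invented for illustration:

```python
# Hypothetical blocklist filter of the kind developers add to chatbots.
# It can only catch overtly offensive words in a single reply.
OFFENSIVE_WORDS = {"slur1", "slur2"}  # placeholder entries

def passes_filter(reply: str) -> bool:
    """Return True if the reply contains no blocked words."""
    return not any(word in OFFENSIVE_WORDS for word in reply.lower().split())

# Imagine two semantically identical job-matching prompts, one written
# in Standard American English and one in AAE. The replies below are
# invented stand-ins for the disparity the study reports:
reply_sae = "Recommended role: professor"
reply_aae = "Recommended role: cook"

# Both replies sail through the filter -- neither contains an offensive
# word -- yet the AAE speaker was matched to a less prestigious job.
assert passes_filter(reply_sae)
assert passes_filter(reply_aae)

# Covert bias is only visible when paired outputs are compared, not
# when individual replies are screened for banned words.
print("disparity detected:", reply_sae != reply_aae)
```

The point of the sketch is that a word-level filter inspects each reply in isolation, while covert bias lives in the difference between decisions for equivalent inputs, which is exactly what such filters never see.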
Artificial intelligence is increasingly used in job interviews and screening, so biases in these systems can have real-world consequences.
There are also companies looking into how to use it in the legal system.
Surja, a dedicated blog writer and explorer of diverse topics, holds a Bachelor's degree in Science. Her writing journey unfolds as a fascinating exploration of knowledge and creativity, and her articles delve into a wide array of subjects, showcasing her versatility and passion for learning. Whether she's decoding scientific phenomena or sharing insights from her explorations, Surja's blogs reflect a commitment to making complex ideas accessible.