AI-generated images of starving children, refugees, victims of armed conflict and more are spreading on the internet.
New research by Dr. Arseny Alenichev at the Institute of Tropical Medicine in Antwerp has uncovered a huge collection of pictures and video clips for sale. They appear to depict the struggles of poor and vulnerable people around the world, but in reality they are reductive fever dreams of generative AI.
They’re clearly marked as AI, so buyers aren’t being duped, but that only makes this trend more worrying: there’s an emerging market for AI-generated poverty porn.
A quick search on stock image websites turns up images that are truly shocking. “Asian children” swim in rivers of garbage, skeletal infants stare hungrily at onlookers, refugees scream into the void or sit in filthy camps. And, of course, lots of white Westerners are coming to the rescue. Volunteers in baseball caps embrace grateful African children, picture-perfect 20-somethings pose in schools filled with smiling students, and foreign doctors in white coats bring modern medicine to the needy.
Some of the images are absolutely ridiculous. The caption of one reads: “Young mother cradles her baby amid the chaos of war in Africa”, advertised as “powerful imagery for awareness campaigns”. The backdrop to this scene of war-torn chaos is, inexplicably, a completely intact café.
We are at the foothills of AI imagery, so we should hope that such glaring mistakes will be ironed out quickly. As the images become more sophisticated and photorealistic, it will become impossible to distinguish real from fake, and the temptation to use them will only increase. But can we ever trust AI to create images that aren’t stereotypical, and is it ever ethical?
Alenichev’s earlier research exemplifies a widely held concern in computational science: that generative AI models can be highly racialized, and this is particularly visible in the images they create. Models are trained on billions of photos from our past and present, absorbing all our biases and prejudices in the process.
In recent years, many NGOs have adopted ethical storytelling guidelines that prevent them from using dehumanizing images of the people they serve. But in light of extreme funding cuts – from Keir Starmer’s cuts to the UK aid budget to the near-complete elimination of the United States Agency for International Development (USAID) – fundraising teams at charities large and small are under intense pressure to fill critical financial gaps.
Meanwhile, the need for effective visual communication has never been greater. In these circumstances, the use of AI stock images to feed insatiable social media accounts, websites and fundraising appeals can prove irresistible.
After all, fake photos of fake people are cheap. There are no photographers to contract and no trips to arrange. There are no complicated consent processes to navigate, no safety risks to manage, no duty of care, no liability. No one is being exploited, right? Wrong. The people, places, communities and issues being represented are being objectified, and I believe their experiences are being exploited.
If you believe, as I passionately do, in the urgent need for more ethical and respectful storytelling – built with people, not about people – then it is clear that reductive and stereotypical AI representations can never be the answer.
The extent to which charities and NGOs are using AI stock images is unclear, but we do know that the use of AI for in-house photo editing is on the rise.
There are some excellent examples of organizations using the technology to create powerful campaign images that are clearly identified as the product of AI. WWF’s “The Future of Nature” campaign presented scenes of environmental collapse in a series of AI-generated images. Breast Cancer Now’s “Gallery of Hope” project worked with AI to create portraits of women with incurable breast cancer, imagining what their futures might look like if they had more time.
These were carefully managed projects that required significant financial resources and expertise to achieve. The organizations involved were conscientious in their use of AI, overseen by photographers, AI experts, publicists and communications professionals.
The same care and attention are clearly not taken by so-called “AI artists” working under ridiculous pseudonyms, publishing images of people and places they have undoubtedly never seen with their own eyes. If they even have eyes. If they are even human.
We should hope that charities recognize the threats to people’s representation, public perceptions, trust and reputation, and refuse to use AI-generated stock images altogether. Without buyers, the market for AI “poverty porn” will dry up and die out. As it should.
Gareth Benest is Deputy Executive Director of the International Broadcasting Trust (IBT)
This article was produced as part of The Independent’s Rethinking Global Aid project