Prince Harry has joined evangelical Christian leaders, famous actors, musicians and prominent conservative commentators in signing a new petition to ban the development of AI superintelligence.
The diverse group of signatories also includes leading computer scientists, who have warned that the artificial intelligence systems being developed by technology companies like Google, Meta and OpenAI threaten the future of humanity.
Organized by the Future of Life Institute, a nonprofit AI safety group once funded by Elon Musk, the statement calls for a moratorium on the development of superintelligence until two conditions are met.
The first is that there is “broad scientific consensus that it will be done in a safe and controllable manner”, and the second is that there is “strong public buy-in”.
Notable figures who have signed the latest statement include Donald Trump’s former political strategist Steve Bannon, musicians will.i.am and Kate Bush, actor and broadcaster Stephen Fry and British billionaire Sir Richard Branson.

“Many major AI companies have the stated goal of building superintelligence in the coming decade that can outperform all humans at essentially all cognitive tasks,” FLI said.
“This has raised concerns ranging from human economic obsolescence and disempowerment, loss of freedom, civil liberties, dignity and control, to national security risks and even possible human extinction.”
In a personal note, Prince Harry said: “The future of AI should serve humanity, not replace it. The true test of progress will not be how fast we move, but how intelligently we move.”
This is not the first time that public figures have attempted to halt the development of new artificial intelligence.
In March 2023, tech leaders including Mr Musk and Apple co-founder Steve Wozniak signed an open letter – also organized by the Future of Life Institute – which warned of “out-of-control” AI systems.
The same month, Mr Musk founded his own artificial intelligence firm called xAI, aiming to rival ChatGPT maker OpenAI in its goal of developing human-level AI.
Another high-profile statement was issued in May 2023, this time by the Center for AI Safety, which claimed that regulators and lawmakers should take the “serious risks” of advanced AI more seriously.
“Reducing extinction risk from AI should be a global priority, along with other societal-level risks such as pandemics and nuclear war,” read the statement, which was co-signed by Demis Hassabis and Sam Altman, the CEOs of Google DeepMind and OpenAI respectively.
AI development has not been paused, and the wide range of signatories to the latest statement is intended to draw in people outside the AI research community.
“In the past, it’s mostly been nerds versus nerds,” said Max Tegmark, president of the Future of Life Institute and a professor at the Massachusetts Institute of Technology.
“I think what we’re really seeing here is how criticism has become mainstream.”