Prince Harry and his wife, Meghan, have joined prominent computer scientists, economists, artists, evangelical Christian leaders and the American conservative commentators Steve Bannon and Glenn Beck in calling for a ban on AI “superintelligence” that could pose a threat to humanity.
The letter, released Wednesday by a politically and geographically diverse group of public figures, is aimed directly at tech giants such as Google, OpenAI and Meta Platforms, which are racing one another to build artificial intelligence designed to surpass humans at many tasks.
The letter says the ban should remain in place until certain conditions are met.
The 30-word statement says:
“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”
In its introduction, the letter states that AI tools can bring health and prosperity, but that “many major AI companies have the stated goal of building superintelligence in the coming decade that can outperform all humans in essentially all cognitive tasks. This has raised concerns ranging from human economic obsolescence and disempowerment, loss of freedom, civil liberties, dignity and control, to national security risks and even potentially human extinction.”
Who signed it and what are they saying about it?
Prince Harry wrote in a personal note: “The future of AI should serve humanity, not replace it. I believe the true test of progress will not be how fast we move, but how intelligently we move. There is no second chance.” The Duke of Sussex signed along with his wife, Meghan, the Duchess of Sussex.
Another signatory, Stuart Russell, an AI pioneer and professor of computer science at the University of California, Berkeley, wrote: “This is not a ban or even a moratorium in the usual sense. It is simply a proposal to require substantial safeguards for a technology that, according to its developers, has a significant chance of causing human extinction. Is that too much to ask?”
Also signing were AI pioneers Yoshua Bengio and Geoffrey Hinton, co-winners of the Turing Award, computer science’s top prize. Hinton also won the Nobel Prize in Physics last year. Both have been vocal in drawing attention to the dangers of the technology they helped create.
But there are some surprises on the list, including Bannon and Beck, reflecting an effort by the letter’s organizers at the nonprofit Future of Life Institute to appeal to President Donald Trump’s Make America Great Again movement, even as Trump’s White House has sought to roll back limits on AI development in the US.
Also on the list are Apple co-founder Steve Wozniak; British billionaire Richard Branson; former Chairman of the US Joint Chiefs of Staff Mike Mullen, who served under Republican and Democratic administrations; and Democratic foreign policy expert Susan Rice, who was President Barack Obama’s national security adviser.
Former Irish President Mary Robinson and several British and European MPs signed on, as did actors Stephen Fry and Joseph Gordon-Levitt and musician will.i.am, who has otherwise embraced AI in music production.
“Yes, we want specific AI tools that can help cure diseases, strengthen national security, etc.,” wrote Gordon-Levitt, whose wife, Tasha McCauley, served on OpenAI’s board of directors before the turmoil surrounding the temporary ouster of CEO Sam Altman in 2023. “But does AI also need to mimic humans, groom our children, turn us all into worthless addicts, and make millions of dollars? Most people don’t want that.”
Are concerns about AI superintelligence also fueling AI hype?
The letter taps into an ongoing debate within the AI research community about the possibility of superintelligent AI, the technical pathways to reach it, and how dangerous it could be.
“In the past, it’s mostly been nerds versus nerds,” said Max Tegmark, president of the Future of Life Institute and a professor at the Massachusetts Institute of Technology. “I think what we’re really seeing here is how criticism has become mainstream.”
Complicating the broader debate is that the same companies striving toward what some call superintelligence, and others call artificial general intelligence, or AGI, also sometimes overstate the capabilities of their products in ways that make them more marketable, which has contributed to concerns about an AI bubble. OpenAI recently faced ridicule from mathematicians and AI scientists after one of its researchers claimed that ChatGPT had solved unsolved math problems, when what it had really done was find and summarize solutions that were already online.
“There are a lot of things that are overhyped and you need to be careful as an investor, but that doesn’t change the fact that AI has advanced much faster over the last four years than most people predicted,” Tegmark said.
Tegmark’s group was also behind a letter from March 2023, early in the commercial AI boom, that called on tech giants to temporarily halt the development of more powerful AI models. None of the major AI companies heeded that call, and the most prominent signatory of the 2023 letter, Elon Musk, was at the same time quietly setting up his own AI startup to compete with the very companies he was asking to pause for six months.
When asked if he reached out to Musk again this time, Tegmark said he had written to the CEOs of all the major AI developers in the US, but did not expect them to sign on.
“To be honest, I really sympathize with them, because they’re so caught up in this race that they feel an incredible pressure to keep going and not get overtaken by the other guy,” Tegmark said. “I think that’s why it’s so important to destigmatize the superintelligence race, to the point where the U.S. government simply steps in.”