If you believe that artificial intelligence poses serious risks to humanity, a professor at Carnegie Mellon University has one of the most important roles in the tech industry right now.
Zico Kolter leads a four-person panel at OpenAI with the authority to halt the release of a new AI system from the ChatGPT maker if it is found to be unsafe. That could be technology so powerful that a rogue actor could use it to create weapons of mass destruction, or a new chatbot so poorly designed that it harms people’s mental health.
“We’re not just talking about existential concerns here,” Kolter said in an interview with The Associated Press. “We’re talking about all of the safety and security issues and important topics that come up when we start talking about these widely used AI systems.”
OpenAI appointed the computer scientist as chair of its safety and security committee more than a year ago, but the position took on increased significance last week when California and Delaware regulators made Kolter’s oversight a key part of their agreements allowing OpenAI to more easily raise capital and create a new business structure to turn a profit.
Safety has been central to OpenAI’s mission since it was founded a decade ago as a nonprofit research lab with the goal of building better-than-human AI that benefits humanity. But after the release of ChatGPT set off a global AI commercial boom, the company was accused of rushing products to market before they were fully safe in order to stay ahead of the race. Internal divisions that led to the temporary ouster of CEO Sam Altman in 2023 heightened concerns that it had strayed from its mission.
The San Francisco-based organization faced backlash, including a lawsuit from co-founder Elon Musk, when it began taking steps to transform itself into a more traditional for-profit company in order to advance its technology.
The agreements announced last week by OpenAI with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings were intended to address some of those concerns.
At the heart of the formal commitments is a promise that decisions about safety and security must come before financial considerations as OpenAI creates a new public benefit corporation that is technically under the control of its nonprofit parent, the OpenAI Foundation.
Kolter will be a member of the nonprofit’s board but not the for-profit’s board. However, according to Bonta’s memorandum of understanding with OpenAI, he will have “full observation rights” to attend all for-profit board meetings and access to information about AI safety decisions. Kolter is the only person other than Bonta named in the lengthy document.
Kolter said the agreements largely confirm that his safety committee, formed last year, will retain the powers it already had. Its three other members also sit on OpenAI’s board; one of them is Paul Nakasone, a former US Army general who commanded US Cyber Command. Altman stepped down from the safety panel last year in a move seen as giving it more independence.
“We have the ability to do things like request delays of model releases until certain mitigations are met,” Kolter said. He declined to say whether the safety panel has ever had to halt or slow a release, citing the confidentiality of its proceedings.
Kolter said a variety of concerns about AI agents will need to be considered in the coming months and years, from cybersecurity (“could an agent that encounters some malicious text on the internet accidentally exfiltrate data?”) to security concerns surrounding AI model weights, the numerical values that shape how an AI system performs.
“But there are also topics that are either emerging or really specific to this new class of AI models that have no real analog in traditional security,” he said. “Do the models enable malicious users to have much greater capabilities when it comes to things like designing biological weapons or carrying out malicious cyberattacks?”
“And then ultimately, there’s the impact of AI models on people,” he said. “The effects on people’s mental health, the effects of people interacting with these models and what it can cause. I think all of these things need to be addressed from a safety standpoint.”
OpenAI has already faced criticism this year over the behavior of its flagship chatbot, including a wrongful death lawsuit from California parents whose teenage son killed himself in April after a lengthy conversation with ChatGPT.
Kolter, director of Carnegie Mellon’s machine learning department, began studying AI as a freshman at Georgetown University in the early 2000s, long before it was fashionable.
“When I started working in machine learning, it was an esoteric, niche area,” he said. “We called it machine learning because no one wanted to use the term AI because AI was an old field that promised a lot and didn’t deliver.”
Kolter, 42, has followed OpenAI for years and was close enough to its founders that he attended its launch party at an AI conference in 2015. Still, he didn’t expect how fast AI would advance.
“I think very few people, even people working deeply in machine learning, really anticipated the current situation we’re in, the explosion of capabilities, the explosion of risks that are emerging at this moment,” he said.
AI safety advocates will be keeping a close eye on OpenAI’s restructuring and Kolter’s work. One of the company’s sharpest critics says he is “cautiously optimistic,” especially if Kolter’s group is “able to actually hire employees and take a stronger role.”
“I think he has the kind of background that’s a good fit for this role. He seems like a good choice to run it,” said Nathan Calvin, general counsel at the small AI policy nonprofit Encode. Calvin, who was served a subpoena at his home by OpenAI as part of its fact-finding to defend against the Musk lawsuit, said he wants OpenAI to stay true to its original mission.
“Some of these commitments could be a really big deal if board members take them seriously,” Calvin said. “They may also just be words on paper and completely separate from anything that’s actually happening. I guess we just don’t know which of those we’re in yet.”
