
Sam Altman Says OpenAI Doesn’t Fully Understand How ChatGPT Works


Sam Altman at the International Telecommunication Union (ITU) AI for Good Global Summit. Altman recently disbanded OpenAI’s safety team and formed a new one led by himself. FABRICE COFFRINI/AFP via Getty Images

Just days after OpenAI announced it’s training its next iteration of GPT, the company’s CEO, Sam Altman, said OpenAI doesn’t need to fully understand its product in order to release new versions. In a live interview today (May 30) with Nicholas Thompson, CEO of The Atlantic, at the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva, Switzerland, Altman spoke about A.I. safety and the technology’s potential to benefit humanity. But he didn’t seem to have a good answer to the basic question of how GPT works.

“We certainly have not solved interpretability,” Altman said. In the realm of A.I., interpretability—or explainability—is the understanding of how A.I. and machine learning systems make decisions, according to Georgetown University’s Center for Security and Emerging Technology. “If you don’t understand what’s happening, isn’t that an argument to not keep releasing new, more powerful models?” asked Thompson. Altman danced around the question, ultimately responding that, even without that full understanding, “these systems [are] generally considered safe and robust.”

“We don’t understand what’s happening in your brain at a neuron-by-neuron level, and yet we know you can follow some rules and can ask you to explain why you think something,” said Altman. By likening GPT to the human brain, Altman framed the model as a black box, with a degree of mystery behind how it functions. Like human brains, generative A.I. technology such as GPT creates new content based on existing data sets and can supposedly learn over time. GPT may not have emotional intelligence or human consciousness, but it can be difficult to understand how algorithms—and the human brain—come to the conclusions they do.

Earlier this month, OpenAI released GPT-4o, and this week the company said it has “recently begun training its next frontier model,” adding, “we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI [artificial general intelligence].”

As OpenAI continues its iterative deployment, safety remains a primary concern—particularly as the company recently disbanded its previous safety team, led by former chief scientist Ilya Sutskever, and created a new safety team led by Altman himself. Earlier this week, former OpenAI board members Helen Toner and Tasha McCauley published a joint opinion piece in The Economist on this decision, writing, “We believe that self-governance cannot reliably withstand the pressure of profit incentives.”

Altman reiterated at the summit that the formation of a new safety and security committee is meant to help OpenAI get ready for its next model. “If we are right that the trajectory of improvement is going to remain steep,” then figuring out what structures and policies companies and countries should put into place, with a long-term perspective, is paramount, Altman said.

“It does seem to me that the more we can understand what’s happening in these models, the better,” he added, but admitted OpenAI is not there yet. “I think that can be part of this cohesive package to how we can make and verify safety claims.”
