OpenAI insiders call for company to be more transparent about the ‘serious risks’ AI technology poses to society
Current and former OpenAI employees are speaking out about the need for the company and others like it to be more transparent about the technology they’re developing
CNN
—
A group of OpenAI insiders is demanding that artificial intelligence companies be far more transparent about AI’s “serious risks” — and that they protect employees who voice concerns about the technology they’re building.
“AI companies have strong financial incentives to avoid effective oversight,” reads the open letter posted Tuesday signed by current and former employees at AI companies including OpenAI, the creator behind the viral ChatGPT tool.
They also called for AI companies to foster “a culture of open criticism” that welcomes, rather than punishes, people who speak up about their concerns, especially as the law struggles to catch up to the quickly advancing technology.
Companies have acknowledged the “serious risks” posed by AI — from manipulation to a loss of control, known as the “singularity,” that could potentially result in human extinction — but they should be doing more to educate the public about risks and protective measures, the group wrote.
As the law currently stands, the AI employees said, they don’t believe AI companies will share critical information about the technology voluntarily.
It’s essential, then, for current and former employees to speak up — and for companies not to enforce “disparagement” agreements or otherwise retaliate against those who voice risk-related concerns. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the group wrote.
Their letter comes as companies move quickly to implement generative AI tools into their products, while government regulators, companies and consumers grapple with responsible use. Meanwhile, many tech experts, researchers and leaders have called for a temporary pause in the AI race, or for the government to step in and create a moratorium.
In response to the letter, an OpenAI spokesperson told CNN it is “proud of our track record providing the most capable and safest AI systems,” and that it believes in its scientific approach to addressing risk, adding that the company agrees “rigorous debate is crucial given the significance of this technology.”
OpenAI noted it has an anonymous integrity hotline and a Safety and Security Committee led by members of its board and safety leaders from the company.
But Daniel Ziegler, one of the organizers behind the letter and an early machine-learning engineer who worked at OpenAI between 2018 and 2021, told CNN that it’s important to remain skeptical of the company’s commitment to transparency.
“It’s really hard to tell from the outside how seriously they’re taking their commitments for safety evaluations and figuring out societal harms, especially as there are such strong commercial pressures to move very quickly,” he said. “It’s really important to have the right culture and processes so that employees can speak out in targeted ways when they have concerns.”
He hopes more professionals in the AI industry will go public with their concerns as a result of the letter.
Meanwhile, Apple is widely expected to announce a partnership with OpenAI at its annual Worldwide Developer Conference to bring generative AI to the iPhone.
“We see generative AI as a key opportunity across our products and believe we have advantages that set us apart there,” Apple CEO Tim Cook said on the company’s most recent earnings call in early May.