OpenAI, Google DeepMind Employees Voice Concerns About AI


A group of current and former employees from OpenAI and Google DeepMind has reportedly signed a public letter calling for protection from retaliation when sharing concerns about the “serious risks” associated with the artificial intelligence (AI) these companies are developing.

The letter highlights a lack of effective government oversight and the broad confidentiality agreements that prevent employees from voicing their concerns, Bloomberg reported Tuesday (June 4).

In the letter, titled “A Right to Warn about Advanced Artificial Intelligence,” the employees express worry about the lack of effective oversight of leading AI companies, stating that these corporations have strong financial incentives to avoid proper scrutiny, according to the report.

They argue that while ordinary whistleblower protections focus on illegal activities, many of the risks associated with AI technologies are not yet regulated, the report said. This gap leaves employees as one of the few groups capable of holding these companies accountable to the public.

OpenAI has recently faced controversy over its approach to AI safety, including the dissolution of one of its safety teams and a series of staff departures, according to the report. OpenAI employees have raised concerns about non-disparagement agreements tied to their equity in the company, which could limit their ability to speak out against the AI startup.

OpenAI has since announced that it will release past employees from these agreements, the report said.

In their letter, the employees propose several measures to address their concerns, per the report. They call for AI companies to prohibit non-disparagement agreements related to risk-related concerns and to establish a verifiably anonymous process for staff to raise issues with the company’s boards and regulators. The employees also urge companies to refrain from retaliating against current and former employees who publicly share information about risks after exhausting internal processes.

In a statement to Bloomberg, OpenAI expressed pride in its track record of providing capable and safe AI systems. The company acknowledged the significance of rigorous debate and stated its commitment to engaging with governments, civil society and other communities worldwide, according to the report. OpenAI also said it holds regular Q&A sessions with its board and offers an anonymous “integrity hotline” for employees and contractors to voice concerns.
