OpenAI Employees Want Protections To Speak Out on ‘Serious Risks’ of AI

from the tussle-continues dept.

A group of current and former employees of OpenAI and Google DeepMind is calling for protection from retaliation for sharing concerns about the “serious risks” of the technologies these and other companies are building. From a report: “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” according to a public letter signed by 13 people who have worked at the companies, seven of whom included their names. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

In recent weeks, OpenAI has faced controversy over its approach to safeguarding artificial intelligence after dissolving one of its most high-profile safety teams and weathering a series of staff departures. OpenAI employees have also raised concerns that staffers were asked to sign nondisparagement agreements tied to their shares in the company, potentially forcing them to forfeit lucrative equity if they speak out against the AI startup. After some pushback, OpenAI said it would release past employees from the agreements.
