Safety is one of the biggest concerns surrounding the rapid growth of generative artificial intelligence models. A seven-page complaint about OpenAI's safety practices, filed by whistleblowers and obtained by The Washington Post, has only heightened these apprehensions. As a result, OpenAI is now sharing an update on its safety initiatives with the public.
Also: Fight for global AI lead may boil down to governance
On Tuesday, OpenAI took to X, formerly Twitter, to share a roundup of updates on "how we're prioritizing safety in our work." The thread included highlights of recent initiatives and updates on current projects.
Making sure AI can benefit everyone starts with building AI that is helpful and safe. We want to share some updates on how we’re prioritizing safety in our work.
— OpenAI (@OpenAI) July 23, 2024

OpenAI began by highlighting its Preparedness Framework, first released in beta in December. The framework covers the precautions OpenAI takes to ensure the safety of its frontier AI models, such as assigning models scorecards across different risk metrics and not releasing a new model if it crosses a "medium" risk threshold.
In tandem, OpenAI shared that it is actively developing "levels" to help the company and stakeholders categorize and track AI progress; it plans to share more details "soon."
OpenAI also offered an update on its Safety and Security Committee, launched by the company's board of directors in May to add another layer of checks and balances to its operations. The committee, which includes technical and policy experts, is now conducting its review; once it concludes, OpenAI will share the further steps it plans to take.
Lastly, OpenAI pointed to its whistleblower policy, which "protects employees' rights to make protected disclosures," according to the company. To encourage open conversations about the technology, the company has also changed the departure process for employees, removing non-disparagement terms.
Also: Apple accelerates AI efforts: Here's what its new models can do
These updates came just one day after US lawmakers demanded that OpenAI share data about its safety practices in response to the whistleblower report. Among the report's major claims were that OpenAI prevented staff from alerting the proper authorities to technology risks and made employees waive their federal rights to whistleblower compensation, both of which the X post addressed.