US, UK join forces on AI safety and the testing of AI models


The US and UK will work together on the development and safety of artificial intelligence.

The two allies on Tuesday signed an accord that will see both countries collaborate on AI. As part of the agreement, which was signed by US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, the countries will build "suites of evaluations" on both public and private AI models and agents to improve their respective governments' understanding of AI and reduce potential risks.

Also: AI safety and bias: Untangling the complex chain of AI training

"This partnership is going to accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society," Raimondo said in a statement. Our partnership makes clear that we aren't running away from these concerns – we're running at them."

Raimondo added that the partnership will also allow both governments to "conduct more robust evaluations" of AI models and pave the way for them to "issue more rigorous guidance" on how government can -- and perhaps should -- implement AI.

With AI's rapid rise, it's perhaps no surprise that the US and UK are working quickly to gain a handle on AI and how it could impact the world. Although a component of their work could center on how the governments can protect themselves from outside threats, it may also be used to guide how government agencies themselves engage with AI.

Just last week, the White House announced a new initiative that will require all US federal agencies to have AI safeguards in place by December 1. Those safeguards will be used to "assess, test, and monitor" how AI is being used by government agencies and will be designed with public safety in mind. Perhaps most notably, agencies that don't adopt the safeguards won't be allowed to use AI at all.

Also: Just because AI recommends a cliff doesn't mean you have to jump

"Federal agencies have a distinct responsibility to identify and manage AI risks because of the role they play in our society, and the public must have confidence that the agencies will protect their rights and safety," the White House said.

In addition to testing models, the US and UK will share information with each other as they independently research AI and how it's being implemented around the globe. They'll also conduct technical research as part of their combined effort.

"This will work to underpin a common approach to AI safety testing," the agencies said in a statement, "allowing researchers on both sides of the Atlantic -- and around the world -- to coalesce around a common scientific foundation."
