Staff working at the US House of Representatives have been barred from using Microsoft's Copilot chatbot and AI productivity tools, pending the launch of a version tailored to the needs of government users.
According to documents obtained by Axios, the chief administrative officer (CAO) for the House, Catherine Szpindor, handed down the order and told staff that Copilot is "unauthorized for House use," and that the service would be removed and blocked from all devices.
"The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," the documents read.
Launched in late 2022, Copilot is a collection of free and paid AI services included in an increasing number of Microsoft applications and web services – including GitHub for code generation, Office 365 to automate common tasks, and Redmond's Bing search engine.
The House decision to ban Copilot shouldn't come as much of a surprise, as the AI chatbot is built atop the same models developed by OpenAI to power ChatGPT, and last year the House restricted the use of that tool by staffers.
Fears over data privacy and security, particularly at the government level, have given rise to the concept of sovereign AI – a nation's capacity to develop AI models using its own data and resources.
Microsoft is working on a government edition of its Copilot apps, tailored to higher security requirements and aimed at assuaging those fears. The House CAO's office will evaluate the government edition of the suite when it becomes available later this year.
Szpindor's fears about data fed to AI finding its way into the wrong hands are well-founded: in June 2023 Samsung staff reportedly leaked the company's own secrets into ChatGPT on at least three occasions. That's because users' prompts are often used by AI developers to train future iterations of their models.
A month prior to Samsung's data debacle, OpenAI CEO Sam Altman blamed a bug in an open source library for leaking users' chat histories. The snafu allowed some users to see snippets of others' conversations – not exactly the kind of thing you want happening with classified documents. ®