Microsoft's new open-source tool could stop your AI from getting hacked

Microsoft has released an open-source tool called Counterfit that helps developers test the security of artificial intelligence (AI) systems.

Microsoft has published the Counterfit project on GitHub, noting that a previous study it conducted found most organizations lack the tools to address adversarial machine learning.

"This tool was born out of our own need to assess Microsoft's AI systems for vulnerabilities with the goal of proactively securing AI services, in accordance with Microsoft's responsible AI principles and Responsible AI Strategy in Engineering (RAISE) initiative," Microsoft says in a blogpost

Microsoft describes the command-line tool as a "generic automation tool to attack multiple AI systems at scale", which its red team uses to test Microsoft's own AI models. Microsoft is also exploring using Counterfit during the AI development phase.

The tool can be deployed via Azure Shell from a browser or installed locally in an Anaconda Python environment. 

Microsoft promises the command-line tool can assess models hosted in any cloud environment, on-premises, or on edge networks. Counterfit is also model-agnostic and strives to be data-agnostic, applicable to models that use text, images, or generic input.
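
To illustrate what "model-agnostic" means in practice, here is a minimal sketch of the idea: the tool only needs a uniform way to query a model, regardless of what sits behind it. The class and attribute names below are illustrative assumptions, not Counterfit's actual API.

```python
import numpy as np

# Hypothetical sketch of the model-agnostic idea: treat any target as a
# black box mapping generic input to output scores. The class and method
# names here are illustrative assumptions, not Counterfit's actual API.
class BlackBoxTarget:
    """Wraps any model behind a uniform predict() interface."""

    def __init__(self, predict_fn, input_shape, output_classes):
        self.predict_fn = predict_fn          # e.g. a REST call, an ONNX session, or a local model
        self.input_shape = input_shape        # shape of one sample, e.g. (28, 28) for an image
        self.output_classes = output_classes  # the labels the model can emit

    def predict(self, samples):
        """Return one score vector per sample, whatever the backing model is."""
        return np.asarray([self.predict_fn(sample) for sample in samples])
```

An attack written against an interface like this never needs to know whether the model is a local file or a cloud endpoint, which is what lets one tool cover models hosted in the cloud, on-premises, or at the edge.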

"Our tool makes published attack algorithms accessible to the security community and helps to provide an extensible interface from which to build, manage, and launch attacks on AI models," Microsoft notes. 

This tool could in part be used to help prevent adversarial machine learning, in which an attacker tricks a machine-learning model with manipulated data. One example is McAfee's attack on older Teslas equipped with Mobileye cameras, which tricked them into misreading the speed limit after black tape was placed on speed signs. Another example was Microsoft's Tay chatbot disaster, which saw the bot tweeting racist comments.
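
To make the evasion idea concrete, here is a minimal, self-contained sketch in the spirit of the fast gradient sign method (FGSM) against a toy linear classifier; the model, weights, and input below are invented for illustration and are not from Counterfit.

```python
import numpy as np

# Toy evasion attack in the spirit of the fast gradient sign method (FGSM).
# The linear "model", its weights, and the input are invented for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # weights of a pretend trained linear classifier
x = rng.normal(size=100)   # a legitimate input

def predict(sample):
    """Score > 0 means class 1, otherwise class 0."""
    return float(w @ sample)

# Nudge every feature slightly in the direction that pushes the score
# toward the opposite class; for a linear model the gradient is just w.
epsilon = 0.5                        # per-feature perturbation budget
direction = -np.sign(predict(x))     # move toward, and past, the decision boundary
x_adv = x + direction * epsilon * np.sign(w)

print("clean score:      ", predict(x))      # original decision
print("adversarial score:", predict(x_adv))  # small structured change, opposite decision
```

A tiny, structured perturbation of every feature is enough to change the decision, which is exactly the class of weakness tools like Counterfit are built to probe for.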

Its workflow has also been designed in line with widely used cybersecurity frameworks, such as Metasploit or PowerShell Empire.

"The tool comes preloaded with published attack algorithms that can be used to bootstrap red team operations to evade and steal AI models," explains Microsoft. 

The tool can also scan AI systems for vulnerabilities and create logs that record attacks against a target model.
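
As a rough sketch of what a scan-and-log loop might capture, the snippet below runs a list of attack functions against a target and records every attempt; the function signature and log fields are assumptions for illustration, not Counterfit's actual format.

```python
import json
import time

# Rough sketch of a scan-and-log loop. The function signature and the log
# fields are assumptions for illustration, not Counterfit's actual format.
def scan(target_name, predict_fn, attacks, samples):
    """Run each attack against the target and record every attempt."""
    log = []
    for attack in attacks:
        adversarial = [attack(s) for s in samples]   # craft perturbed inputs
        flipped = sum(
            predict_fn(a) != predict_fn(s)           # did the decision change?
            for a, s in zip(adversarial, samples)
        )
        log.append({
            "timestamp": time.time(),
            "target": target_name,
            "attack": attack.__name__,
            "samples_tested": len(samples),
            "successful_evasions": int(flipped),
        })
    return json.dumps(log, indent=2)   # an auditable record of the assessment
```

Each entry captures what was tried and how often the model's decision changed, which is the kind of trail defenders need when triaging findings against a target model.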

Microsoft tested Counterfit with several customers, including aerospace giant Airbus, which is developing an AI platform on Azure AI services.

"AI is increasingly used in industry; it is vital to look ahead to securing this technology particularly to understand where feature space attacks can be realized in the problem space," said Matilda Rhode, a senior cybersecurity researcher at Airbus in a statement.  

"The release of open-source tools from an organization such as Microsoft for security practitioners to evaluate the security of AI systems is both welcome and a clear indication that the industry is taking this problem seriously."
