
Microsoft launches an open source tool for developers to test the security of their artificial intelligence




Microsoft has released an open source tool designed to help developers assess the security of the artificial intelligence systems they are working on. The project is called Counterfit and is now available on GitHub.


The Redmond firm has already used Counterfit to test its own AI models within the company's red team. Other divisions of Microsoft are also exploring the use of the tool in AI development.




Cyberattack simulation with Counterfit








According to Microsoft's documentation on GitHub, Counterfit consists of a command-line tool and a generic automation layer for assessing the security of machine learning systems.



This allows developers to simulate cyberattacks against AI systems to test their defenses. Anyone can download the tool and deploy it through Azure Cloud Shell, to run it in the browser, or locally in an Anaconda Python environment.
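For the local route, the project README describes an Anaconda-based setup. Package versions and the entry point have changed across releases, so treat the following as a rough sketch of that workflow rather than authoritative instructions:

    git clone https://github.com/Azure/counterfit.git
    cd counterfit
    # create and activate an isolated Anaconda environment
    # (use the Python version the README currently specifies)
    conda create --yes -n counterfit python=3.8
    conda activate counterfit
    # install the dependencies, then launch the interactive CLI
    pip install -r requirements.txt
    python counterfit.py

The entry-point script name here follows the repository's early releases; check the current README before following these steps.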



"Our tool makes published attack algorithms accessible to the security community and it helps provide an extensible interface from which to build, manage, and launch attacks on AI models, "Microsoft said.



The tool comes preloaded with example attack algorithms. Security professionals can also use the built-in cmd2 scripting engine to hook Counterfit into existing offensive tools for testing purposes.
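cmd2 is a general-purpose Python framework for building interactive command-line shells, and any shell built on it can also be driven non-interactively. The toy shell below is only a generic illustration of that mechanism; the scan command is invented for the example and is not Counterfit's actual command set:

    import cmd2

    class DemoShell(cmd2.Cmd):
        """A minimal cmd2 shell. Counterfit builds on the same framework,
        but this command set is made up purely for illustration."""
        prompt = "demo> "

        def do_scan(self, args):
            """Stand-in for a command that runs attacks against a target."""
            self.poutput(f"scanning target: {args}")

    if __name__ == "__main__":
        import sys
        sys.exit(DemoShell().cmdloop())

Because cmd2 shells accept a text file of commands through the framework's built-in run_script command, other tooling can script them in batch mode, which is presumably the kind of hook the article refers to.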



According to ITPro, Microsoft developed the tool out of its own need to assess its systems for vulnerabilities. Counterfit started out as a collection of attack scripts written to target individual AI models and gradually evolved into an automation tool for attacking multiple systems.



The company says it has worked with several of its partners, customers, and government entities to test the tool against AI models in their own environments.