Microsoft just open-sourced its tool for assessing the security of AI systems.
What you need to know
- Microsoft recently open-sourced Counterfit, an automation tool for security testing AI systems.
- It can be used to perform security risk assessments of AI and machine learning systems.
- A Microsoft survey shows that many organizations do not have the right tools in place to secure AI systems.
Earlier this week, Microsoft open-sourced Counterfit, its automation tool for security testing AI systems. The tool lets organizations perform security risk assessments of their AI and machine learning systems.
In its blog post announcing the release, Microsoft explains that Counterfit was "born out of our own need to assess Microsoft's AI systems for vulnerabilities with the goal of proactively securing AI services." Initially, the tool was a set of attack scripts written to target specific AI models, but over time it evolved into a generic automation tool for attacking multiple AI systems at scale.
Microsoft regularly uses Counterfit in its own AI red team operations, automating attack techniques and running them against its AI services.
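Counterfit itself is a command-line tool, but the core idea it automates is straightforward: query a deployed model with systematically perturbed inputs and check whether small changes flip its predictions. The Python sketch below is a hypothetical, minimal illustration of that kind of black-box evasion probe; it is not Counterfit's actual API, and `model_predict` and `random_evasion_probe` are stand-in names invented for this example.

```python
import numpy as np

def model_predict(x: np.ndarray) -> int:
    """Stand-in for a deployed model endpoint: a fixed linear classifier."""
    weights = np.array([0.8, -0.5, 0.3])
    return int(x @ weights > 0)

def random_evasion_probe(x: np.ndarray, epsilon: float = 0.1,
                         trials: int = 1000, seed: int = 0):
    """Look for a small perturbation of x that changes the model's label.

    Hypothetical helper for illustration only; not part of Counterfit.
    """
    rng = np.random.default_rng(seed)
    original_label = model_predict(x)
    for _ in range(trials):
        # Sample a candidate perturbation bounded by epsilon per dimension.
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        candidate = x + delta
        if model_predict(candidate) != original_label:
            return candidate  # evasion found within the query budget
    return None  # the model held up against this naive probe

sample = np.array([0.2, 0.1, -0.4])
adversarial = random_evasion_probe(sample)
if adversarial is not None:
    print("evasion example found:", adversarial)
else:
    print("no evasion found within the query budget")
```

In practice, Counterfit wraps published attack algorithms from existing frameworks such as the Adversarial Robustness Toolbox rather than naive random search, but the evaluation loop is the same: perturb, query, and check whether the model's output can be manipulated.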
Matilda Rhode, senior cybersecurity researcher at Airbus, explains why open-sourcing Counterfit matters:
AI is increasingly used in industry; it is vital to look ahead to securing this technology particularly to understand where feature space attacks can be realized in the problem space. The release of open-source tools from an organization such as Microsoft for security practitioners to evaluate the security of AI systems is both welcome and a clear indication that the industry is taking this problem seriously.
Microsoft's own research backs up the need for such tools. The company surveyed 28 organizations, including Fortune 500 companies, governments, non-profits, and small and medium-sized businesses, to see what processes they have in place for securing AI systems. Twenty-five of the 28 said they do not have the right tools in place to secure their AI systems.