Three best practices for AI/ML security

Corporations, governments, and academic institutions all understand the immense opportunity that artificial intelligence (AI) and machine learning (ML) present to their constituencies and are increasing their investments. PwC expects the AI market to grow to just under $16 trillion by 2030, or about 12% of global GDP. Given the size of the market and the intellectual property involved, one would assume commensurate investments have been made to secure these assets. That assumption is wrong.
AI and ML have become the biggest cybersecurity attack vector. The Adversarial AI Incident Database provides thousands of examples of AI attacks across various industries and corporations, including Tesla, Facebook and Microsoft. Yet the cybersecurity industry lags behind the bad guys: few dedicated protections for AI and ML systems exist.
Gartner outlined the scope of the problem in its October 2022 report, “AI in Organizations: Managing AI Risk Leads to Improved AI Outcomes.” Among the leading findings: through this year alone, 30% of AI cyberattacks will have used training-data poisoning, AI model theft or adversarial samples to attack AI-powered systems. Two in five organizations will have experienced an AI security or privacy breach, with one in four of those breaches being adversarial attacks.
Warnings fall on deaf ears
Watchdogs in the security industry have been ringing alarm bells for years. MITRE’s 2020 Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) framework identified 12 tactics (the “why”) for adversarial attacks and more than 60 specific attack types. It is important to note that the Gartner and MITRE research was specific to adversarial AI and ML, not to what has generally received the most attention in this space: model bias, drift and integrity. While those concerns remain very real, Gartner and MITRE draw specific attention to the cybersecurity risks related to AI and ML.
The current availability of more than 20 free attack tools, including Microsoft’s Counterfit and Meta’s AugLy, adds to the potential scale and scope of the cyber risk to AI and ML. These tools are to ML what Metasploit was to servers and networks, and they are just as powerful. ML attacks that took over a month to complete in 2019 take 10 to 15 seconds to complete today.
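To make the threat concrete, here is a minimal, self-contained sketch of the kind of evasion attack such tools automate: a fast-gradient-sign-method (FGSM) style perturbation against a toy linear classifier. Everything in it (the weights, the input, the epsilon budget) is an illustrative assumption, not taken from Counterfit or AugLy; real tools run far more sophisticated, query-efficient versions of this loop against production models.

```python
import numpy as np

# Toy logistic-regression "victim" model. Weights, input size and epsilon
# are illustrative assumptions, not values from any real attack tool.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # model weights
b = 0.1                   # model bias

def predict(x):
    """Return P(class=1) for a feature vector x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Start from an input the model scores confidently as class 1.
x = 0.5 * np.sign(w)

# FGSM-style evasion: for a linear model, the gradient of the logit with
# respect to the input is simply w, so stepping against sign(w) pushes the
# score toward class 0 while changing each feature by at most epsilon.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)

print(f"original score:    {predict(x):.3f}")      # close to 1.0 (class 1)
print(f"adversarial score: {predict(x_adv):.3f}")  # below 0.5 (class 0)
print(f"max per-feature change: {np.max(np.abs(x_adv - x)):.3f}")
```

The point is the asymmetry: a single line of arithmetic flips the model’s decision while each input feature changes by a small, bounded amount that is easy to miss in review.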
Hardening AI/ML Security
Continued investment in and deployment of machine learning, an accelerating regulatory environment and easy-to-use attack tools all mean now is the time to understand the organization’s risk and determine the steps needed to protect the environment. The MITRE ATLAS framework referenced above maps the techniques attackers still use today and can help organizations define pre-release testing methodologies for their AI and ML.
Additionally, in October 2022, the US government’s Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights to provide guidance on hardening AI/ML security. The guidance says systems must undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring to demonstrate that they are safe and effective for their intended use, that unsafe outcomes (including those outside the intended use) are mitigated, and that they comply with domain-specific standards.
Best practices
Here are some simple steps companies can take today to assess their organization’s risk profile:
- Proactive threat detection: Examine pre-trained and in-house-built models for evidence of tampering, hijacking or abuse before deployment (see the sketch after this list).
- Safely evaluate model behavior: Models are software. If the team doesn’t know where a model comes from, don’t run it in the enterprise environment. Carefully inspect models, especially pre-trained ones, inside a secure virtual machine before considering them for deployment.
- External security assessment: Understand the organization’s risk level, address blind spots and see where the team can improve. Given the level of sensitive data that ML models receive, it makes sense to commission an external security assessment of the ML pipeline.
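As a concrete starting point for the first two practices, the sketch below statically scans a pickle-serialized model file for opcodes that can execute arbitrary code at load time, using only Python’s standard pickle and pickletools modules. It is a minimal illustration, not a production model scanner: legitimate ML frameworks also emit these opcodes, so a finding means “review this file in a sandbox,” not “this file is malicious.”

```python
import pickle
import pickletools

# Pickle opcodes that can import and invoke arbitrary Python callables at
# load time. Their presence means the file runs code when deserialized.
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path):
    """List risky opcodes in a pickle file without ever loading it."""
    with open(path, "rb") as f:
        data = f.read()
    return [(pos, op.name, arg)
            for op, arg, pos in pickletools.genops(data)
            if op.name in SUSPICIOUS_OPS]

class Demo:
    """Stand-in for a tampered model: runs a callable on unpickling."""
    def __reduce__(self):
        return (print, ("side effect executed at load time",))

if __name__ == "__main__":
    with open("clean.pkl", "wb") as f:
        f.write(pickle.dumps({"weights": [0.1, 0.2], "epochs": 3}))
    with open("tampered.pkl", "wb") as f:
        f.write(pickle.dumps(Demo()))

    for name in ("clean.pkl", "tampered.pkl"):
        findings = scan_pickle(name)
        verdict = "needs review" if findings else "no code-execution opcodes"
        print(f"{name}: {verdict}")
        for pos, opname, arg in findings:
            print(f"  offset {pos}: {opname} {arg!r}")
```

Gating model artifacts on a scan like this (or preferring a data-only serialization format), and only then loading flagged files inside an isolated virtual machine, puts the first two practices above into day-to-day operation.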
Heading into 2023, now is a good time to evaluate whether investments in zero trust and defense in depth are being undermined by the risk that unsecured ML models present. By taking a proactive stance, organizations can leverage the potential of AI/ML more effectively.
Abigail Maines, Chief Revenue Officer, HiddenLayer