As AI-powered services like OpenAI’s GPT-3 grow in popularity, they become an increasingly attractive attack vector. Even shielded behind an API, hackers can attempt to reverse-engineer the models underpinning these services or use “adversarial” data to tamper with them. According to Gartner, 30% of all AI cyberattacks in 2022 will leverage these techniques along with data poisoning, which involves injecting bad data into a model’s training set so that the resulting model misbehaves.
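To make the data-poisoning idea concrete, here is a toy sketch (entirely invented for illustration; it has nothing to do with HiddenLayer's products): a simple nearest-centroid spam classifier, and an attacker who injects a handful of spam-like examples mislabeled as "ham" into its training data.

```python
# Toy illustration of data poisoning via label flipping. The classifier,
# dataset and feature values are all hypothetical.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    dims = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dims)]

def train(dataset):
    """dataset: list of (features, label) pairs, label in {"spam", "ham"}."""
    spam = [f for f, y in dataset if y == "spam"]
    ham = [f for f, y in dataset if y == "ham"]
    return {"spam": centroid(spam), "ham": centroid(ham)}

def predict(model, x):
    """Assign x to the class whose centroid is nearest."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

clean = [([0.9, 0.8], "spam"), ([0.8, 0.9], "spam"),
         ([0.1, 0.2], "ham"), ([0.2, 0.1], "ham")]

# The attacker injects spam-like points mislabeled as "ham",
# dragging the "ham" centroid toward the spam region.
poison = [([0.9, 0.9], "ham")] * 4

test_point = [0.7, 0.7]  # spam-like message
print(predict(train(clean), test_point))           # "spam"
print(predict(train(clean + poison), test_point))  # "ham" — poisoning flips the verdict
```

A few mislabeled training points are enough to move the decision boundary, which is why poisoning is dangerous even for attackers with only limited write access to a dataset.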
As in any industry, fighting security threats is a never-ending task. But Chris Sestito claims that his platform, HiddenLayer, can simplify it for AI-as-a-service vendors by automatically identifying malicious activity against models and responding to attacks.
HiddenLayer today emerged from stealth with $6 million in seed funding from Ten Eleven Ventures, Secure Octane and other investors. Sestito, the former director of threat research at Cylance and VP of engineering at Qualys, co-founded the company several months ago with Tanner Burns and Jim Ballard. Burns and Ballard also worked at Qualys and Cylance and spent time together at BlackBerry, where Ballard was a data curation team lead and Burns was a threat researcher.
“Virtually all enterprise organizations have made significant resource contributions to machine learning to give themselves an advantage — whether that value is in the form of product differentiation, revenue generation, cost savings or efficiencies,” Sestito told TechCrunch in an email interview. “Adversarial machine learning attacks are capable of causing all of the same damage we’ve seen in traditional cyberattacks, including exposing customer data and destroying production systems. In fact, at HiddenLayer, we believe we’re not far off from seeing machine learning models ransomed back to their organizations.”
HiddenLayer claims that its technology can defend models from attacks without needing access to any raw data or a vendor’s algorithms. By analyzing model interactions — the data fed into a model (e.g., a picture of cats) and the predictions the model outputs (e.g., the caption “cats”) — for patterns that could be malicious, HiddenLayer can work “non-invasively” and without prior knowledge of the training data, Sestito said.
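HiddenLayer hasn't published how its detection works, but interaction-level monitoring of this kind can be sketched in a few lines. The example below is a hypothetical illustration, not HiddenLayer's method: it watches only the (input, prediction) stream and flags a burst of near-duplicate queries, a classic signature of adversarial probing and model extraction. The window size, distance function and thresholds are all assumptions.

```python
# Hypothetical sketch of "non-invasive" monitoring: no access to the model's
# internals or training data, only to the stream of inputs and outputs.
from collections import deque

class InteractionMonitor:
    def __init__(self, window=100, sim_threshold=0.05, max_near_dupes=10):
        self.recent = deque(maxlen=window)   # sliding window of past interactions
        self.sim_threshold = sim_threshold   # below this distance, queries count as near-duplicates
        self.max_near_dupes = max_near_dupes # alert once this many near-duplicates pile up

    @staticmethod
    def _dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def observe(self, features, prediction):
        """Record one (input, output) interaction; return True if it looks suspicious."""
        near = sum(1 for f, _ in self.recent
                   if self._dist(f, features) < self.sim_threshold)
        self.recent.append((features, prediction))
        return near >= self.max_near_dupes

monitor = InteractionMonitor()
# Ordinary, diverse traffic does not trip the detector...
monitor.observe([0.1, 0.5], "cat")
# ...but a probe repeatedly perturbing the same input eventually does.
alerts = [monitor.observe([0.50 + i * 0.001, 0.50], "cat") for i in range(20)]
print(any(alerts))  # True
```

A production system would use far richer signals (query timing, output confidence shifts, per-client history), but the core idea is the same: attacks leave statistical fingerprints in the interaction stream itself.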
“Adversarial machine learning attacks are not loud like ransomware — you have to be looking for them to catch them in time,” Sestito said. “HiddenLayer has focused on a research-first approach that will allow us to publish our findings and train the world to be prepared.”
Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, said that it’s unclear whether HiddenLayer is doing anything “truly groundbreaking or new.” (Cook is unaffiliated with HiddenLayer.) Still, he notes that there’s a benefit to what HiddenLayer appears to be doing: trying to package up knowledge about attacks on AI and make it more widely accessible.
“The AI boom is still booming, but a lot of that knowledge about how modern machine learning works and how best to use it is still locked away mostly to people who have specialist knowledge. Memorable examples for me include researchers managing to extract individual pieces of training data from OpenAI’s GPT-2 and GPT-3 systems,” Cook told TechCrunch via email. “When expert knowledge is inaccessible and hard to come by, sometimes all a business really needs is to provide convenient ways to get at it.”
HiddenLayer is currently pre-revenue and doesn’t have customers, although Sestito says that the startup has engaged several “high-profile” design partners. Ultimately, Cook believes its success will depend less on HiddenLayer’s technology and more on whether the threat from attacks is as great as the company claims.
“I don’t know how prevalent attacks on machine learning systems are [at present]. Tricking a spam filter into letting through an email is very different in scale and severity to extracting proprietary data from a large language model,” Cook said.
To his point, it’s difficult to pin down real-world examples of attacks against AI systems. Research into the topic has exploded, with more than 1,500 papers on AI security published in 2019 on the scientific publishing site Arxiv.org, up from 56 in 2016, according to a study from Adversa. But there’s little public reporting on attempts by hackers to, for example, attack commercial facial recognition systems — assuming such attempts are happening in the first place.
Sestito asserts the threat — regardless of its size today — will grow with the AI market, implicitly to the advantage of HiddenLayer. He acknowledges that several startups already offer products designed to make AI systems more robust, including Robust Intelligence, CalypsoAI and Troj.ai. But Sestito claims that HiddenLayer stands alone in its AI-driven detection and response approach.
“PwC believes that AI will be a $15.7 trillion market by 2030. We absolutely have to start defending this technology now,” Sestito said. “Our biggest goal by far is educating the market on this new threat. The commitment to AI and machine learning is relatively new to many organizations and few have been focusing on defending those assets. With any new technology comes new attack vectors; this is the same fight on a new frontier.”
This article was originally published on TechCrunch.com.