Feature: While in a rush to understand, build, and ship AI products, developers and data scientists are being urged to be mindful of security and not fall prey to supply-chain attacks.
There are countless models, libraries, algorithms, pre-built tools, and packages to play with, and progress is relentless. The quality of these systems' output is perhaps another story, though it's undeniable there is always something new to try, at least.
Never mind all the excitement, hype, curiosity, and fear of missing out: security can't be forgotten. If this isn't a shock to you, fantastic. But a reminder is handy here, especially since machine-learning tech tends to be put together by scientists rather than engineers, at least at the development phase. While those folks know their way around stuff like neural network architectures, quantization, and next-gen training techniques, infosec understandably may not be their forte.
Pulling together an AI project isn't that much different from constructing any other piece of software. You'll typically glue together libraries, packages, training data, models, and custom source code to perform inference tasks. Code components available from public repositories can contain hidden backdoors or data exfiltrators, and pre-built models and datasets can be poisoned to cause apps to behave unexpectedly or inappropriately.
In fact, some models can contain malware that is executed if their contents are not safely deserialized. The security of ChatGPT plugins has also come under close scrutiny.
In other words, supply-chain attacks we’ve seen in the software development world can occur in AI land. Bad packages could lead to developers’ workstations being compromised, leading to damaging intrusions into corporate networks, and tampered-with models and training datasets could cause applications to wrongly classify things, offend users, and so on. Backdoored or malware-spiked libraries and models, if incorporated into shipped software, could leave users of those apps open to attack as well.
In response, cybersecurity and AI startups are emerging specifically to tackle this threat; no doubt established players have an eye on it, too, or so we hope. Machine-learning projects ought to be audited and inspected, tested for security, and evaluated for safety.
“[AI] has grown out of academia. It’s largely been research projects at university or they’ve been small software development projects that have been spun off largely by academics or major companies, and they just don’t have the security inside,” Tom Bonner, VP of research at HiddenLayer, one such security-focused startup, told The Register.
“They’ll solve an interesting mathematical problem using software and then they’ll deploy it and that’s it. It’s not pen tested, there’s no AI red teaming, risk assessments, or a secure development lifecycle. All of a sudden AI and machine learning has really taken off and everybody’s looking to get into it. They’re all going and picking up all the common software packages that have grown out of academia and lo and behold, they’re full of vulnerabilities, full of holes.”
The AI supply chain has numerous points of entry for criminals, who can use things like typosquatting to trick developers into using malicious copies of otherwise legit libraries, allowing the crooks to steal sensitive data and corporate credentials, hijack servers running the code, and more, it’s argued. Software supply-chain defenses should be applied to machine-learning system development, too.
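To give one concrete flavour of such defences, here is a minimal sketch of the sort of integrity check that dependency-pinning tools perform before an artifact gets anywhere near a build: comparing a downloaded package or model file's SHA-256 hash against a known-good value. The filename and digest below are hypothetical placeholders, not real packages.

```python
import hashlib

# Hypothetical artifact and expected digest -- in practice the digest would
# come from a lock file or the package index's published metadata.
ARTIFACT = "some_ml_library-1.2.3-py3-none-any.whl"
EXPECTED_SHA256 = "0123456789abcdef..."  # placeholder, not a real hash

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large wheel or model files don't fill memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(ARTIFACT) != EXPECTED_SHA256:
    raise SystemExit(f"Integrity check failed for {ARTIFACT}; refusing to install")
```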
“If you think of a pie chart of how you’re gonna get hacked once you open up an AI department in your company or organization,” Dan McInerney, lead AI security researcher at Protect AI, told The Register, “a tiny fraction of that pie is going to be model input attacks, which is what everyone talks about. And a giant portion is going to be attacking the supply chain – the tools you use to build the model themselves.”
Input attacks, for what it's worth, are the interesting ways people can break AI software simply through the inputs they feed it.
To illustrate the potential danger, HiddenLayer the other week highlighted what it strongly believes is a security issue with an online service provided by Hugging Face that converts models in the unsafe Pickle format to the more secure Safetensors, also developed by Hugging Face.
Pickle models can contain malware and other arbitrary code that could be silently and unexpectedly executed when deserialized, which is not great. Safetensors was created as a safer alternative: Models using that format should not end up running embedded code when deserialized. For those who don’t know, Hugging Face hosts hundreds of thousands of neural network models, datasets, and bits of code developers can download and use with just a few clicks or commands.
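To make the Pickle risk concrete, here's a minimal sketch; the class name and command are made up for illustration, but the mechanism it shows, an object's __reduce__ method telling the unpickler to call an arbitrary function, is exactly why loading untrusted Pickle files is dangerous, and why Safetensors stores only tensor data and metadata instead.

```python
import pickle

# Any Python object can define __reduce__ to tell the unpickler to call an
# arbitrary function when the data is loaded.
class NotReallyAModel:
    def __reduce__(self):
        import os
        # At deserialization time this runs a shell command -- a harmless
        # echo here, but it could just as easily exfiltrate credentials.
        return (os.system, ("echo 'code ran during pickle.load()'",))

payload = pickle.dumps(NotReallyAModel())
pickle.loads(payload)  # the embedded command executes on load

# Safetensors, by contrast, holds only tensors and metadata, so loading a
# .safetensors file should not execute embedded code. (Illustrative usage;
# requires the safetensors and torch packages.)
# from safetensors.torch import load_file
# tensors = load_file("model.safetensors")
```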
The Safetensors converter runs on Hugging Face infrastructure, and can be instructed to convert a PyTorch Pickle model hosted by Hugging Face to a copy in the Safetensors format. But that online conversion process itself is vulnerable to arbitrary code execution, according to HiddenLayer.
HiddenLayer researchers said they found they could submit a conversion request for a malicious Pickle model containing arbitrary code, and during the transformation process, that code would be executed on Hugging Face’s systems, allowing someone to start messing with the converter bot and its users. If a user converted a malicious model, their Hugging Face token could be exfiltrated by the hidden code, and “we could in effect steal their Hugging Face token, compromise their repository, and view all private repositories, datasets, and models which that user has access to,” HiddenLayer argued.
In addition, we’re told the converter bot’s credentials could be accessed and leaked by code stashed in a Pickle model, allowing someone to masquerade as the bot and open pull requests for changes to other repositories. Those changes could introduce malicious content if accepted. We’ve asked Hugging Face for a response to HiddenLayer’s findings.
“Ironically, the conversion service to convert to Safetensors was itself horribly insecure,” HiddenLayer’s Bonner told us. “Given the level of access that conversion bot had to the repositories, it was actually possible to steal the token they use to submit changes through other repositories.
“So in theory, an attacker could have submitted any change to any repository and made it look like it came from Hugging Face, and a security update could have fooled them into accepting it. People would have just had backdoored models or insecure models in their repos and wouldn’t know.”
This is more than a theoretical threat: Devops shop JFrog said it found malicious code hiding in 100 models hosted on Hugging Face.
There are, in truth, various ways to hide harmful payloads of code in models that – depending on the file format – are executed when the neural networks are loaded and parsed, allowing miscreants to gain access to people’s machines. PyTorch and Tensorflow Keras models “pose the highest potential risk of executing malicious code because they are popular model types with known code execution techniques that have been published,” JFrog noted.
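One defensive habit when pulling PyTorch checkpoints from public hubs is to load them with weights_only=True, which in recent PyTorch releases restricts unpickling to plain tensor and container types rather than arbitrary callables. The file path below is hypothetical; this is a sketch of the idea, not a complete malware scanner.

```python
import torch

# weights_only=True makes torch.load refuse checkpoints that try to smuggle
# in arbitrary objects, failing loudly instead of silently executing them.
try:
    state_dict = torch.load("downloaded_model.bin", weights_only=True)
except Exception as err:
    # A load failure here is a signal to inspect the file, not to retry
    # with weights_only=False.
    print(f"Refusing to load untrusted checkpoint: {err}")
```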
Insecure recommendations
Programmers using code-suggesting assistants to develop applications need to be careful too, Bonner warned, or they may end up incorporating insecure code. GitHub Copilot, for example, was trained on open source repositories, and at least 350,000 of them are potentially vulnerable to an old security issue involving Python and tar archives.
Python’s tarfile module, as the name suggests, helps programs unpack tar archives. It is possible to craft a .tar such that when a file within the archive is extracted by the Python module, it will attempt to overwrite an arbitrary file on the user’s file system. This can be exploited to trash settings, replace scripts, and cause other mischief.
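As a rough illustration of the tarfile issue, and assuming a simple extraction helper of our own devising, the sketch below contrasts the unsafe pattern with one way of rejecting archive members that try to climb out of the destination directory. It is not a complete defence against every malicious archive.

```python
import os
import tarfile

# The classic vulnerable pattern trusts member names inside the archive:
#
#   with tarfile.open("untrusted.tar") as tar:
#       tar.extractall("output/")   # a member named '../../.bashrc' escapes
#
# A safer approach checks every member's resolved path before extracting.
def safe_extract(archive_path: str, dest: str) -> None:
    """Extract a tar archive, refusing members that escape dest."""
    dest = os.path.realpath(dest)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if os.path.commonpath([dest, target]) != dest:
                raise ValueError(f"Blocked path traversal via {member.name!r}")
        tar.extractall(dest)
        # On Python 3.12 and later, tar.extractall(dest, filter="data")
        # offers a built-in defence along similar lines.
```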
The flaw was spotted in 2007 and highlighted again in 2022, prompting people to start patching projects to avoid this exploitation. Those security updates may not have made their way into the datasets used to train large language models to program, Bonner lamented. “So if you ask an LLM to go and unpack a tar file right now, it will probably spit you back [the old] vulnerable code.”
Bonner urged the AI community to start implementing supply-chain security practices, such as requiring developers to digitally prove they are who they say they are when making changes to public code repositories, which would reassure folks that new versions of things were produced by legit devs and were not malicious changes. That would require developers to secure whatever they use to authenticate so that someone else can’t masquerade as them.
And all developers, big and small, should conduct security assessments and inspect the tools they use, and pen test their software before it’s deployed.
Trying to beef up security in the AI supply chain is tricky, and with so many tools and models being built and released, it’s difficult to keep up.
Protect AI’s McInerney stressed “that’s kind of the state we’re in right now. There is a lot of low-hanging fruit that exists all over the place. There’s just not enough manpower to look at it all because everything’s moving so fast.” ®