A new technique takes advantage of “AI package hallucination” to get ChatGPT to trick developers into downloading malicious code libraries.

Helping developers with coding suggestions and grunt work is one big benefit of the popular AI tool ChatGPT. But researchers believe bad actors could turn all those suggestions to their own advantage: they just have to work out which code package ChatGPT is likely to recommend, then publish a malicious library under that name.

We’ve only begun exploring the impact of artificial intelligence on the world, but it’s already giving cyberattackers a leg up in the ongoing arms race between security tools and those who circumvent them.

How the New ChatGPT Coding Scam Works

The trick takes advantage of a quirk common to the large language models that power generative AI tools like ChatGPT: they love to make things up. ChatGPT is a predictive bot, so it produces whatever answer seems most plausible given the data available to it. That means it often generates a “hallucination,” a response that sounds reasonable but is a completely false claim that falls apart when fact-checked.

If a bad actor grills ChatGPT about code libraries (or “packages”) until the AI makes up a fake one, that hacker can then publish a real version of the previously non-existent library. The next time ChatGPT mentions that library to a user, the user might download the only available version… the malicious one.

The scam has been detailed by Vulcan Cyber’s Voyager18 research team, which published a full explanation in a recent blog post.
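To see the victim’s side of that flow in practice, a developer can ask the package registry whether a recommended name is registered at all before installing it. The sketch below is illustrative only and is not tooling from Voyager18’s write-up; it assumes Python and PyPI’s public JSON endpoint (https://pypi.org/pypi/<name>/json), where a 404 response means the name is unclaimed, which is exactly the gap an attacker could later fill with a malicious upload.

import json
import urllib.error
import urllib.request

def package_exists(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            json.load(resp)  # parse to confirm a well-formed response
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # name is free on PyPI; possibly hallucinated
        raise

if __name__ == "__main__":
    for name in ("requests", "definitely-not-a-real-package-xyz"):
        status = "exists" if package_exists(name) else "NOT registered"
        print(f"{name}: {status}")

A missing package isn’t proof of an attack in progress, but it is a strong hint that the suggestion was hallucinated, and anything that later appears under that name deserves extra scrutiny.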
How to Stay Safe From AI Package Hallucinations

Once you know about the threat posed by package hallucinations, there’s one simple fix: double-check everything ChatGPT tells you before you act on it. That said, in this specific case, the Voyager18 team has more specific suggestions that can help.

“There are multiple ways to [vet coding libraries], including checking the creation date, number of downloads, comments (or a lack of comments and stars), and looking at any of the library’s attached notes. If anything looks suspicious, think twice before you install it.” -Voyager18
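Several of those checks can be scripted. The sketch below is a rough illustration, not Voyager18’s tooling: it assumes Python and PyPI’s public JSON endpoint, approximates a package’s creation date from its earliest file upload, and flags very new packages. Download counts and stars live in other services (such as pypistats.org and the project’s source repository), so they aren’t covered here, and the 90-day threshold is an arbitrary choice for the example.

import json
import sys
import urllib.request
from datetime import datetime, timezone

def vet_package(name: str, min_age_days: int = 90) -> None:
    """Print basic trust signals for a PyPI package before installing it."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    # The earliest file upload across all releases approximates the
    # project's creation date.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        print(f"{name}: no uploaded files; treat as suspicious")
        return

    created = min(uploads)
    age_days = (datetime.now(timezone.utc) - created).days
    print(f"{name}: first upload {created:%Y-%m-%d} ({age_days} days ago), "
          f"{len(data['releases'])} release(s)")
    if age_days < min_age_days:
        print(f"Warning: {name} is under {min_age_days} days old; "
              "double-check before installing.")

if __name__ == "__main__":
    vet_package(sys.argv[1] if len(sys.argv) > 1 else "requests")

Run it with a package name as the argument (for example, python vet_package.py requests) to see a package’s age and release count before deciding whether to install it.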
Tips for Using ChatGPT

ChatGPT is a great tool for working through an error message or drafting brand-new code, but developers shouldn’t rely on it for important coding projects. ChatGPT and other generative AI programs always sound confident, but they aren’t always right. Anyone who doesn’t realize that risks believing a lie, which could open them up to anything from embarrassment to legal trouble to, as the Voyager18 team uncovered, a major coding security breach.

When using ChatGPT, we recommend:

- Double-checking its answers against an outside source before acting on them
- Vetting any code library it suggests (check the creation date, download count, comments, and notes) before installing it
- Treating its output as a starting point, not a finished product, for important coding projects

And, above all, don’t let the “intelligence” part of “artificial intelligence” distract you from the other word in that phrase.