These estimates provide a strong counterpoint to exaggerated claims from AI companies, many of which are seeking new rounds of venture capital funding, that AI-generated malware is widespread and represents a new paradigm posing an ongoing threat to traditional defenses.
A typical example is Anthropic, which recently reported discovering an attacker who used the Claude LLM to “develop, market, and distribute multiple ransomware variants, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms.” The company went on to say, “Without Claude's help, they would not have been able to implement or troubleshoot core components of the malware, such as encryption algorithms, anti-analysis techniques, or manipulation of Windows internals.”
Security firm ConnectWise recently said that generative AI “lowers the barrier to entry for threat actors.” The post cites a separate report from OpenAI, which identified 20 individual attackers using its ChatGPT artificial intelligence engine to develop malware for tasks including identifying vulnerabilities, developing exploit code, and debugging that code. Bugcrowd, meanwhile, said that in a survey of self-selected respondents, “74 percent of hackers agree that AI has made hacking more accessible, opening the door for newcomers to join them.”
In some cases, the authors of such reports note the same limitations discussed in this article. Google said in a report released Wednesday that, in a review of AI tools used to develop code for managing command-and-control channels and obfuscating operations, “we saw no evidence of successful automation or any breakthrough capabilities.” OpenAI said much the same thing. These disclaimers, however, are rarely given prominence and are often downplayed in the rush to portray AI malware as a near-term threat.
Google's report contains at least one other useful finding. One attacker abusing the company's Gemini artificial intelligence model was able to bypass its guardrails by posing as a white-hat hacker conducting research for a capture-the-flag game. These competitive exercises are designed to teach and demonstrate effective cyberattack strategies to both participants and spectators.
Such guardrails are built into all mainstream LLMs to prevent malicious uses, such as aiding cyberattacks or self-harm. Google said it has since strengthened its countermeasures to guard against such tricks.
Ultimately, the AI malware that has emerged to date remains largely experimental, and the results are underwhelming. The developments are worth monitoring in case they eventually lead to AI tools that create capabilities previously unavailable. For now, however, the biggest threats continue to rely primarily on old-fashioned tactics.






