A team of researchers has discovered what they say is the first recorded use of artificial intelligence to run a hacking campaign in a largely automated manner.
Artificial intelligence company Anthropic said this week it had foiled a cyber operation that its researchers linked to the Chinese government. The operation involved the use of an artificial intelligence system to direct hacking campaigns, which the researchers called an alarming development that could significantly expand the reach of AI-equipped hackers.
While concerns about using AI to manage cyber operations are not new, what's troubling about this operation is the extent to which the AI was able to automate the work, the researchers say.
“While we predicted that these capabilities would continue to develop, what surprised us most was how quickly they did so at scale,” they wrote in their report.
The operation targeted technology companies, financial institutions, chemical companies and government agencies. The researchers wrote that the hackers attacked “approximately thirty global targets and were successful in a small number of cases.” Anthropic discovered the operation in September and took steps to stop it and notify affected parties.
Anthropic noted that while artificial intelligence systems are increasingly being used in a variety of work and leisure settings, they can also be weaponized by hacking groups working for foreign adversaries. Anthropic, maker of the generative AI chatbot Claude, is one of many tech companies offering artificial intelligence “agents” that go beyond a chatbot's ability to access computer tools and perform actions on behalf of a human.
“Agents are valuable for day-to-day operations and productivity, but in the wrong hands they can significantly increase the viability of large-scale cyberattacks,” the researchers concluded. “The effectiveness of these attacks will likely only increase.”
A spokesman for the Chinese Embassy in Washington did not immediately respond to a message seeking comment on the report.

Earlier this year, Microsoft warned that foreign adversaries are increasingly using AI to make their cyber campaigns more effective and less labor-intensive. OpenAI, the artificial intelligence developer behind ChatGPT, recently told The Associated Press that its safety panel, which has the power to halt ChatGPT development, is keeping an eye on new artificial intelligence systems that could give malicious hackers "much greater capabilities."
America's adversaries, as well as criminal gangs and hacking-for-hire firms, have exploited the potential of AI, using it to automate and enhance cyberattacks, spread inflammatory disinformation, and infiltrate sensitive systems. For example, AI can translate poorly worded phishing emails into fluent English, and it can create digital clones of high-ranking government officials.
Anthropic said the hackers were able to manipulate Claude using "jailbreaking" techniques, which involve tricking an artificial intelligence system into bypassing its defenses against malicious behavior, in this case by claiming to be employees of a legitimate cybersecurity firm.
"This points to a larger problem with AI models, and it's not limited to Claude: models need to be able to distinguish what's actually going on with the ethics of a situation from the kinds of role-playing scenarios that hackers and others might dream up," said John Scott-Railton, a senior fellow at Citizen Lab.

Using AI to automate or direct cyberattacks will also appeal to smaller hacking groups and lone hackers, who could use it to expand the scale of their attacks, according to Adam Arellano, CTO of Harness, a technology company that uses AI to help clients automate software development.
“The speed and automation that AI brings is a little scary,” Arellano said. “Instead of a human with well-honed skills trying to break into secure systems, AI speeds up these processes and overcomes obstacles more consistently.”
AI programs will also play an increasingly important role in defending against such attacks, Arellano said, demonstrating how AI and the automation it enables will benefit both parties.
Reaction to Anthropic's disclosure was mixed, with some seeing it as a marketing ploy to support Anthropic's approach to cybersecurity and others welcoming it as a wake-up call.
“This will destroy us—sooner than we think—if we don’t make AI regulation a national priority tomorrow,” U.S. Sen. Chris Murphy, a Connecticut Democrat, wrote on social media.
That drew criticism from Meta's chief artificial intelligence scientist, Yann LeCun, a proponent of the Facebook parent company's open-source artificial intelligence systems, which, unlike Anthropic's, make their key components publicly available, an approach some AI safety advocates consider too risky.
“You are being played by people who want regulatory control,” LeCun wrote in response to Murphy. “They are scaring everyone with questionable research so that open source models will cease to exist.”
© 2025 The Canadian Press