WASHINGTON — A team of researchers has documented what they describe as the first reported case of artificial intelligence being used to direct a hacking campaign in a largely automated fashion.
Artificial intelligence company Anthropic said this week it had foiled a cyber operation that its researchers linked to the Chinese government. The operation used an artificial intelligence system to direct the hacking campaign, which researchers called an alarming development that could greatly expand the reach of hackers equipped with AI.
While concerns about using AI to manage cyber operations are not new, what's troubling about the new operation is the extent to which AI has been able to automate some of the work, researchers say.
“While we predicted that these capabilities would continue to evolve, what surprised us most was how quickly they did so at scale,” they wrote in their report.
The scale of the operation was modest, targeting only about 30 people who worked for technology companies, financial institutions, chemical companies and government agencies. Anthropic noticed the operation in September and took steps to stop it and notify affected parties.
The hackers were “successful in only a small number of cases,” according to Anthropic, which noted that while artificial intelligence systems are increasingly used in a wide range of work and leisure settings, they can also be weaponized by hacking groups working for foreign adversaries.

Anthropic, maker of the generative AI chatbot Claude, is one of many tech companies offering AI “agents” that go beyond a chatbot's capabilities by accessing computer tools and taking actions on a person's behalf.
“Agents are valuable for day-to-day operations and productivity, but in the wrong hands they can significantly increase the viability of large-scale cyberattacks,” the researchers concluded. “The effectiveness of these attacks will likely only increase.”
A spokesman for the Chinese Embassy in Washington did not immediately respond to a message seeking comment on the report.
Microsoft warned earlier this year that foreign adversaries are increasingly using AI to make their cyber campaigns more effective and less labor-intensive.
America's adversaries, as well as criminal gangs and hacking companies, have harnessed the potential of AI, using it to automate and improve cyberattacks, spread inflammatory disinformation and penetrate sensitive systems. AI can, for example, translate poorly worded phishing emails into fluent English or create digital clones of high-ranking government officials.