Militant groups are experimenting with AI, and the risks are expected to grow

WASHINGTON — While the rest of the world rushes to harness the power of artificial intelligence, militant groups are also experimenting with the technology, even if they aren't quite sure what to do with it.

For extremist organizations, AI can be a powerful tool for recruiting new members, creating realistic deepfake images and improving their cyberattacks, national security experts and spy agencies warn.

Last month, someone posted on a pro-Islamic State group website calling on other IS supporters to make artificial intelligence part of their activities. “One of the best things about artificial intelligence is how easy it is to use,” the user wrote in English.

“Some intelligence agencies are concerned that AI will facilitate recruitment,” the user continued. “So turn their nightmares into reality.”

IS, which seized territory in Iraq and Syria many years ago but now operates as a decentralized alliance of militant factions sharing a violent ideology, recognized long ago that social media could be a powerful tool for recruitment and disinformation, so it's not surprising that the group is testing AI, national security experts say.

For tightly knit, poorly resourced extremist groups, or even individual attackers with an internet connection, AI can be used to spread propaganda and deepfakes on a large scale, expanding the scope of their activities and their influence.

“AI really makes things easier for any adversary,” said John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of the cybersecurity company ClearVector. “With AI, even a small group that doesn’t have a lot of money can still make an impact.”

Militant groups began using AI as soon as programs like ChatGPT became widely available, and in the years since they have increasingly turned to generative artificial intelligence programs to create realistic photos and videos.

When paired with social media algorithms, this fake content can help recruit new believers, confuse or frighten enemies, and spread propaganda on a scale unimaginable just a few years ago.

Such groups circulated fake images from the war between Israel and Hamas two years ago depicting bloodied, abandoned babies in bombed-out buildings. The images stoked outrage and polarization while obscuring the real horrors of the war. Violent groups in the Middle East have used such imagery to recruit new members, as have antisemitic hate groups in the United States and other countries.

Something similar happened last year after an attack claimed by an IS affiliate killed almost 140 people at a concert venue in Russia. In the days after the shooting, propaganda videos created with artificial intelligence circulated widely on discussion forums and social media in a bid to attract new recruits.

IS has also created fake audio recordings of its leaders quoting scripture and used AI to quickly translate its messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activity and has followed the evolution of IS's use of artificial intelligence.

Such groups lag behind China, Russia or Iran and still view more sophisticated use cases for artificial intelligence as “aspirational,” according to Marcus Fowler, a former CIA agent who is now CEO of Darktrace Federal, a cybersecurity firm that works with the federal government.

But the risks are too high to ignore and are likely to grow as the use of cheap and powerful artificial intelligence expands, he said.

Hackers are already using synthetic audio and video for phishing campaigns, in which they attempt to impersonate a high-ranking business or government executive in order to gain access to sensitive networks. They may also use AI to write malicious code or automate some aspects of cyberattacks.

Even more worrying is the possibility that militant groups could try to use AI to produce biological or chemical weapons, making up for their lack of technical knowledge. That risk was flagged in the Department of Homeland Security's updated homeland threat assessment, released earlier this year.

“ISIS was early on Twitter and found ways to use social media to its advantage,” Fowler said. “They are always looking for something new to add to their arsenal.”

Lawmakers have put forward several proposals, saying action is urgently needed.

Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said, for example, that the U.S. should make it easier for artificial intelligence developers to share information about how their products are being used by bad actors, be they extremists, criminal hackers or foreign spies.

“Since late 2022, following the public release of ChatGPT, it has become apparent that the same fascination and experimentation with generative artificial intelligence that the public had would also apply to a range of malicious actors,” Warner said.

During a recent hearing on extremist threats, House lawmakers learned that ISIS and al-Qaeda have held training seminars to help their supporters learn how to use AI.

Legislation passed by the U.S. House of Representatives last month would require Homeland Security officials to annually assess the AI risks posed by such groups.

Defending against malicious use of AI is no different from preparing for more traditional attacks, said Rep. August Pfluger, R-Texas, the bill's sponsor.

“Our policies and capabilities must keep pace with the threats of tomorrow,” he said.
