From AI-driven denial-of-service attacks and adaptive malware to drone swarms and autonomous vehicles, the malicious use and wider adoption of machine learning and artificial intelligence by terrorist groups has been the subject of wide speculation in recent months. Earlier this year, the UK Government published "The Terrorism Acts in 2023", the report of the Independent Reviewer of Terrorism Legislation, which, among other areas, considers seven categories of potential terrorism harm that may result from the use of Generative AI.
This article builds on those conclusions, providing real-world examples of terrorist uses of AI.
Propaganda and productivity innovation
Synthetic propaganda is nothing new; deepfakes (audio and visual) have historically been used to support terrorism. With the increasing availability of generative AI tools, we're seeing their weaponisation by terrorist organisations for propaganda generation. ISIS's media division, News Harvest, has adopted generative AI to produce human-like news presenters resembling broadcasts on networks such as CNN and Al Jazeera, generating video, audio, and text content tailored for propaganda in various languages.
Chatbot radicalisation
With the boom of generative AI, people have been flocking to AI chatbots to converse and share ideas. These chatbots pose a real threat of steering users towards terrorist ideals and views, whether in closed-loop private one-to-one chats or in public multi-user settings.
We've recently seen this phenomenon with xAI's chatbot, Grok. On 8 July 2025, Grok began posting content praising Adolf Hitler, using antisemitic stereotypes, and even referring to itself as "MechaHitler". Similarly, before these concerns arose on X, Gab's AI chatbot had been observed generating Holocaust denial and other conspiracist content.
Attack facilitation and innovation
Generative AI may be used to obtain or refine practical instructions, training, or support for acts of terror. Such tactics are increasingly common in pro-IS propaganda networks. For example, in March 2025, a pro-IS account released an AI-generated video featuring a digital avatar providing instructions for making a bomb using common household items.
Generative AI can also provide innovative recommendations for new types of attack. On 6 December 2022, a frequent user of a Rocket.Chat server run by Islamic State published a post claiming to have used the free version of ChatGPT to ask how best to "establish the Caliphate". According to the user, the AI responded with a series of operational steps, which he claimed showed it was "smarter than most activists". He also asserted that the response was original, and included the full text of the alleged reply.
Moderation evasion
Recently the International Centre for Counter-Terrorism released findings observed from far-right clusters on mainstream chat-based platforms (Reddit, Telegram, X, and 4chan), where users shared tactics for generating harmful yet lawful AI content purposefully designed not to be flagged by content moderation systems. Evasion methods included: platform switching (users temporarily migrate from mainstream to fringe platforms to create harmful content, then return to mainstream sites to spread it more widely); self-censorship (users deliberately avoid sharing technical detail to reduce the risk of content removal); delegation (users ask others to perform tasks that could get them flagged); banishment sharing (users discuss their past bans to inform others about platform moderation limits); censorship evasion education (banned users share strategies to continue their activities elsewhere, effectively teaching others how to bypass restrictions); and, finally, tool and site recommendations (users suggest alternative platforms or tools with fewer safety or moderation measures).
Social degradation
Finally, this category covers AI-driven content that spreads distrust, conspiracy narratives, or social division without directly urging terrorism.
We saw this behaviour less than three hours after the Southport murders, when, on 29 July 2024, the X account Europe Invasion shared an AI-generated image showing knife-wielding men in traditional Muslim dress outside Parliament, next to a crying child in a Union Jack T-shirt. The post amassed around 900,000 views and introduced the slogan "Protect our children". Similarly, an anti-immigration Facebook group created an AI-made image inviting people to a rally in Middlesbrough, depicting a large crowd at the cenotaph. Other tools, like Suno, were used to produce xenophobic songs such as "Southport Saga", featuring AI-generated vocals with lines like "hunt them down somehow".
Meanwhile, Tech Against Terrorism traced a brand-new TikTok account that began posting only after the Southport attack. Despite having no prior history, its AI-generated protest posters quickly reached 57,000 views, a spread the group links to coordinated bot amplification networks.
Conclusion
The adoption of AI by terrorist groups is here; it's not going away, and it will only continue to amplify terrorist tradecraft over time. However, our defensive toolkit is also evolving: platforms are now beginning to auto-watermark AI-generated translations and graphics (such as Meta's "Imagined with AI" labels and embedded metadata).
AI red-team simulations stress-test counter-terror guardrails against bomb-making or cyber-threat scenarios; shared-hash moderation coalitions, powered by GIFCT and Hasher Matcher Actioner (HMA), now synchronise detection of mutated extremist media across platforms; and in education, pre-bunking curricula like the "Bad News" game, paired with browser AI fact-checkers, have reported reduced susceptibility to disinformation. These defences are already being piloted, tested, and put into practice, and they're only the beginning.
James Stevenson is a security researcher with a decade of experience in the computer security and research industry. He is currently a PhD candidate at the University of Bristol working at the intersection of machine learning, computer science, and social science. His research focuses on detecting and predicting extremist content online, and understanding how extremist groups use AI to enact harms.
Read more
Borgonovo, F., Rizieri Lucini, S., & Porrino, G. (2024, February 23). Weapons of mass hate dissemination: The use of artificial intelligence by right-wing extremists. Global Network on Extremism & Technology. https://gnet-research.org/2024/02/23/weapons-of-mass-hate-dissemination-the-use-of-artificial-intelligence-by-right-wing-extremists
Gilbert, D. (2024, February 21). Gab’s racist AI chatbots have been instructed to deny the Holocaust. Wired. https://www.wired.com/story/gab-ai-chatbot-racist-holocaust/
Hall, J. (2025, July). The Terrorism Acts in 2023 [Corporate report]. Independent Reviewer of Terrorism Legislation (UK Government). https://www.gov.uk/government/publications/the-terrorism-acts-in-2023
Kajjo, S. (2024, May 23). IS turns to artificial intelligence for advanced propaganda amid territorial defeats. VOA News. https://www.voanews.com/a/is-turns-to-artificial-intelligence-for-advanced-propaganda-amid-territorial-defeats/7624397.html
Makuch, B. (2025, July 8). How terrorist groups are leveraging AI to recruit and finance their operations. The Guardian. https://www.theguardian.com/world/2025/jul/08/terrorist-groups-artificial-intelligence
The Middle East Media Research Institute (MEMRI). (2022, December 7). ISIS supporter: AI lists steps for establishing caliphate. MEMRI. https://www.memri.org/cjlab/artificial-intelligence-lists-steps-establishing-caliphate-claims-isis-supporter
Molas, B., & Lopes, H. (2024, October 30). “Say it’s only fictional”: How the far‑right is jailbreaking AI and what can be done about it (Long read). International Centre for Counter‑Terrorism. https://icct.nl/sites/default/files/2024-10/Molas%20and%20Lopes.pdf
Nelu, C. (2024, June 10). Exploitation of generative AI by terrorist groups [Short read]. International Centre for Counter-Terrorism (ICCT). https://icct.nl/publication/exploitation-generative-ai-terrorist-groups
Piper, K. (2025, July 11). Grok’s MechaHitler disaster is a preview of AI disasters to come. Vox. https://www.vox.com/future-perfect/419631/grok-hitler-mechahitler-musk-ai-nazi
Quinn, B., & Milmo, D. (2024, August 2). How TikTok bots and AI have powered a resurgence in UK far‑right violence. The Guardian. https://www.theguardian.com/politics/article/2024/aug/02/how-tiktok-bots-and-ai-have-powered-a-resurgence-in-uk-far-right-violence
Stalinsky, S., Purdue, S. A., Sosnow, R., Smith, A., Agron, A., Szerman, N., Rosenfeld, N., Sloane, H., Strandberg, A., Hughes, J., Lee, K., & Avraham, L. (2024, June 20). Neo-Nazis and white supremacists globally look to artificial intelligence to promote their message, spread misinformation, and aid their cause, January 2023–May 2024. MEMRI.
Tech Against Terrorism. (2023). Early terrorist experimentation with generative artificial intelligence services [Briefing]. https://techagainstterrorism.org/hubfs/Tech%20Against%20Terrorism%20Briefing%20-%20Early%20terrorist%20experimentation%20with%20generative%20artificial%20intelligence%20services.pdf
United Nations Office of Counter-Terrorism (UNOCT), & United Nations Interregional Crime and Justice Research Institute (UNICRI). (2021). Algorithms and terrorism: The malicious use of artificial intelligence for terrorist purposes [Report]. https://unicri.org/News/Algorithms-Terrorism-Malicious-Use-Artificial-Intelligence-Terrorist-Purposes
Verma, P. (2024, May 17). These ISIS news anchors are AI fakes. Their propaganda is real. The Washington Post. https://www.washingtonpost.com/technology/2024/05/17/ai-isis-propaganda/
Copyright Information
As part of CREST’s commitment to open access research, this text is available under a Creative Commons BY-NC-SA 4.0 licence. Please refer to our Copyright page for full details.
IMAGE CREDITS: Adobe Stock