Exploitation of Generative AI by Terrorist Groups
In the last few years, artificial intelligence (AI) has taken centre stage in public, academic, and political discussions. Generative AI, in particular, produces new content in response to prompts, offering transformative potential across fields such as education, entertainment, healthcare, and scientific research. Yet its rapid expansion also creates uncertainties in areas such as human rights, existing technologies (for example, biometric systems), and migration. Terrorist groups, too, are increasingly interested in exploiting this new technology to their advantage.
The use of technology by terrorist groups is not a new topic; technology has always played a vital role in the work of terrorist organisations. With the latest developments in artificial intelligence and generative AI, however, some terrorist groups are now weighing the “benefits” of these technologies. The advantages of generative AI for malicious purposes are notable – from interactive recruitment to developing propaganda and influencing people’s behaviour via social media channels. New technological advances open numerous possibilities that terrorist groups stand ready to exploit.
This analysis describes some of the risks posed by generative AI in terrorist hands, drawing on recent examples, and considers the impact of these tools on people’s behaviour – a key area that terrorist organisations aim to influence. With such a powerful tool at their disposal, the ways in which these groups exploit it are likely to multiply in the future.
What is AI and Generative AI?
The rapid advancement of technology is shaping our lives and opening new opportunities for the future, both positive and negative. AI has become a central term across several fields, driving significant developments in technology and operational systems. According to the OECD, AI “is a general-purpose technology that has the potential to improve the welfare and well-being of people, to contribute to positive, sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges”. AI systems are designed to operate with significant levels of automation. It is essential to understand, however, that AI encompasses several branches, one of which is generative AI, which has shown immense potential across diverse fields in recent months.
Generative AI uses machine learning to generate new content: text, images, audio, video, and multifunctional simulations. It works by training on large datasets, with vast numbers of parameters loosely inspired by the connections within the human brain. The main difference between generative AI and other forms of AI is that generative systems can produce new outputs rather than merely predicting and categorising like other machine learning systems. Some examples of generative AI applications include the following (a short code sketch illustrating text generation appears after the list):
- Text-based chatbots or programs designed to simulate conversations with humans, such as Anthropic’s Claude, Bing Chat, ChatGPT, Google Gemini, and Llama 2.
- Image or video generators, such as the Bing Image Creator, DALL-E 3, Midjourney, and Stable Diffusion.
- Voice generators, such as Microsoft VALL-E.
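To make the distinction between generation and classification concrete, the following is a minimal sketch of text generation using the open-source Hugging Face transformers library and the small open "gpt2" model. The model, prompt, and sampling parameters are illustrative assumptions only; the commercial chatbots listed above rely on far larger proprietary systems.

```python
# Minimal sketch: generating new text from a prompt with an open model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Generative AI differs from earlier machine learning because",
    max_new_tokens=40,   # length of the newly generated continuation
    do_sample=True,      # sample tokens rather than always picking the likeliest
    temperature=0.8,     # higher values produce more varied output
)
print(result[0]["generated_text"])
```

Unlike a classifier, which maps an input to a fixed label, the model here produces novel text that did not exist verbatim in its training data.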
With all these programs, interest in using generative AI has increased significantly – not just among states, the private sector, and the general public, but also among terrorist organisations that see an opportunity to expand their propaganda and increase their influence worldwide in support of their operations. There is growing concern that new technology such as generative AI will play a significant role in the tactics and modus operandi of terrorist organisations. Groups such as Al-Qaeda, ISIS, and Hezbollah have already begun using these technologies.
Research on Terrorist Actors Exploiting Generative AI
Terrorist use of generative AI appears to be in its early stages, with more groups beginning to take advantage of this new technology. For the moment, attention has focused on how terrorists may use AI-based technologies in the future, for example in interactive recruitment, intensive propaganda, disinformation, hallucinations, or war operations. These are discussed individually below.
Propaganda and Disinformation
Propaganda is one of the main tools terrorist groups use to promote their values and beliefs. With the help of generative AI, propaganda can be spread more easily, and its impact could increase considerably as it becomes more efficient and more tailored to its targets. Propaganda can be used to spread hate speech and radical ideologies. Synthetic (fake) images, videos, or audio aligned with an organisation’s values could increase the volume of propaganda produced, intensify its messages, and affect people’s attitudes and emotions (for example, fake pictures of wounded victims or children designed for emotional impact). Introducing generative AI into propaganda can manufacture a false reality, allowing threat actors to sow chaos and disorder through misinformation and disinformation. Uploading AI-generated content to social media platforms can widen the audience, as such content can be shared by millions of followers within minutes. Given this worldwide reach, social media has become a powerful weapon in today’s disinformation wars, leaving people unable to rely on what they read or hear. Generative AI thus has the potential to create serious challenges in online communication and media, especially regarding misinformation and disinformation.
The current war in Gaza offers an example of terrorists using generative AI: manipulated images have circulated on the internet to instigate further violence and amplify disinformation. Some pictures of babies and young people injured in the war appear to have been manipulated with generative AI to create more chaotic and disturbing content online. Connected to the same conflict were images and videos on social media depicting IDF soldiers wearing diapers. The content appeared to have been generated by a Hamas-affiliated group using generative AI and aimed to undermine the Israeli army. The images spread rapidly online, fuelling misinformation about the reality on the ground in Gaza.
In recent months, some terrorist organisations have also released guidelines on using AI to develop propaganda and disinformation. Examples of this kind of content include the Islamic State publishing a tech support guide in the summer of 2023 on how to use generative AI tools securely; a pro-al-Qaeda outlet publishing several posters with images highly likely to have been created using generative AI; and far-right figures producing a ‘guide to memetic warfare’ advising others on how to use AI image-generation tools to create extremist memes. Pro-Islamic State affiliates have used generative AI to render Arabic-language ISIS propaganda speeches into written Arabic, Indonesian, and English, while al-Qaeda affiliates have likewise used generative AI for propaganda content.
Through all these methods, terrorists could also weaponise so-called hallucinations as part of psychological warfare, false flag operations, or propaganda designed to influence people’s behaviour. Hallucinations refer to situations where an AI system generates unexpected, unusual, or nonsensical outputs. For example, a generative AI model trained on text data might produce content that seems coherent at first glance but, on closer inspection, contains nonsensical sentences or contradictions. Similarly, an AI model trained on image data might generate images containing improbable combinations of objects or distorted features, which could mislead a wider audience.
One of the authorities’ biggest concerns is the use of these technologies to produce intensive propaganda and disinformation at the click of a button. Combined with advances in targeted advertising and other effective propaganda measures, generative AI heralds a transformation in the speed, scale, and credibility of terrorist influence operations. This capability could give terrorists greater power to shape people’s behaviour while widening the gap in people’s ability to distinguish between real and fake online content. Through these means, terrorist groups can take advantage and spread their ideologies worldwide.
Interactive Recruitment
Moving from propaganda to interactive recruitment is just one further step in terrorist use of generative AI. The two are closely connected: AI-powered chatbots can interact with potential recruits, tailoring information to their interests and beliefs and thereby making the extremist group’s messages seem more relevant. In the recruitment process, a bot can be used to gain a potential victim’s attention by engaging with the person and responding to their input; later, a human being might take over the conversation to engage more personally. Moreover, large language models (LLMs) such as ChatGPT enable terrorist groups to provide a humanlike experience without any human involvement at all. Applying generative AI could increase terrorists’ ability to build personal relationships and, in particular, to reach lone actors who might be sympathetic to their cause or who have vulnerabilities that can be exploited through intense interaction with such bots.
Connected to the recruitment process is the possible exploitation of social media, where AI could amplify content on social media channels and other digital platforms to spread propaganda and recruit followers, through methods ranging from encrypted communication to more innovative techniques. Since 2019, Hezbollah has used social media and the internet to incite violence in its regions of influence, especially against Israel. The technology used at the time would not have been called generative AI, but the techniques were similar: using social media images and videos to recruit Israeli Arabs and West Bank-based Palestinians to attack Israeli targets.
War Applications
In addition to the applications of generative AI discussed above, some elements of AI have been used for hacking and for creating weapons such as drones and self-driven vehicular bombs. AI has been utilised in drones in various ways, including autonomous navigation, object detection and recognition, real-time decision-making, and mission planning, enabling drones to perform more complex, efficient, and autonomous tasks. This integration has changed war tactics, allowing more advanced and strategic applications. Military drones, sometimes referred to as autonomous weapons or “killer robots”, fulfil vital functions in reconnaissance, surveillance, and even combat. Uncrewed aerial systems (UAS) – remotely piloted, pre-programmed, or controlled airborne tools – have been described as one of the primary terrorist threats by the United Nations Security Council Counter-Terrorism Committee. Drones are an advantage for terrorists because they are cheap and require minimal training. In the past, terrorists have used drones to attack state military assets, diplomatic sites, international trade, and civilian centres. Non-state actors that have deployed drones in combat include Hamas, Hezbollah, and ISIS, which use them for intelligence, surveillance, and electronic warfare, and to designate targets, increasing precision and lethality compared with ground-level systems.
The use of self-driven vehicular bombs has not yet materialised; however, international organisations such as NATO fear that terror groups could use autonomous cars as bombs in the future. Such a vehicle could drive itself into a specific target and detonate explosives. Self-driving cars depend fundamentally on AI: by analysing data from sensors such as cameras and radar, AI enables a car to perceive its surroundings, identify obstacles, anticipate traffic, and control its movements for safe navigation without human input. This development could potentially reduce the need for suicide bombers. There have been rumours that ISIS is working to produce such technology; at the moment, however, nothing has been released.
Another field where malicious actors are beginning to use AI is cyberspace. Forest Blizzard, a Russian military intelligence actor, has utilised Large Language Models (LLMs) for research into satellite and radar technologies relevant to military operations in Ukraine, as well as for generic research to support its cyber operations. Crimson Sandstorm, an Iranian threat actor, has used LLMs for social engineering support and for assistance in troubleshooting errors – behaviours also observed more broadly in the security community.
Vulnerable Populations
Vulnerable populations are frequently targeted by terrorist groups, including children, who can be easily reached through online channels. Children spend a significant amount of time online, playing video games or watching videos, and can become targets for terrorist content. The novelty of chatbots, and of speaking with someone new on the other side of the screen, can become an avenue for recruitment. Children can be naïve and believe the words of “someone” who appears to understand their needs and offers the support they are looking for. There have been reports of children disclosing abuse and seeking support through AI chatbots – openings that terrorists could exploit. At the same time, chatbots can serve beneficial purposes such as education, demonstrating that their impact can be both positive and negative. Once terrorist groups gain influence over children’s behaviour, however, they can begin exploitation in diverse forms, such as child sexual exploitation, the promotion of self-harm, or recruitment into their ideology.
There are additional risks to children’s safety from chatbots and other forms of conversational AI, which terrorist groups can exploit. Generative AI can permit inappropriate contact with children, and it can generate content that is not appropriate for their age, such as violent or sexual material. For example, developers have used the open-source Stable Diffusion model to produce realistic adult pornography. Generative AI outputs – images, videos, or audio – can reproduce a person’s likeness, yielding illegal content that can then feed propaganda and recruitment material. With such content online, terrorist groups gain an advantage in spreading materials that may influence individuals.
AI and Counterterrorism
While the risks of AI have been outlined above, AI could also play a significant role in counterterrorism efforts in various ways. One application is surveillance and monitoring: AI can analyse live video to spot suspicious behaviour or objects in public spaces, helping authorities respond quickly to potential threats. Regarding counter-propaganda and de-radicalisation, AI tools could automatically detect and remove extremist content from online platforms, curbing the spread of terrorist propaganda, and could support de-radicalisation programmes by identifying at-risk individuals and analysing their online behaviour.
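One established detect-and-remove technique is hash matching, in which platforms compare uploads against databases of previously identified terrorist content – the approach behind industry hash-sharing initiatives. The sketch below is a minimal illustration using the open-source imagehash and Pillow libraries; the file paths and distance threshold are hypothetical assumptions, and real systems use their own proprietary hash formats.

```python
# Minimal sketch: flagging uploads that match known extremist imagery
# via perceptual hashing. Requires: pip install imagehash Pillow
from PIL import Image
import imagehash

# Perceptual hashes of previously identified images (paths hypothetical).
known_hashes = {
    imagehash.phash(Image.open(path))
    for path in ["known_extremist_1.png", "known_extremist_2.png"]
}

def matches_known_content(path: str, max_distance: int = 5) -> bool:
    """Return True if the image is perceptually close to a known item."""
    candidate = imagehash.phash(Image.open(path))
    # Hamming distance tolerates re-encoding, resizing, and small edits.
    return any(candidate - known <= max_distance for known in known_hashes)

print(matches_known_content("new_upload.jpg"))
```

Perceptual hashing matches only content already identified by human reviewers; it does not judge new material, which is why it is widely accepted as a removal mechanism.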
Predictive analytics is another area where AI could be used. By analysing patterns in historical data, social media activity, and other intelligence sources, AI could help predict potential terrorist attacks. It can assess the behaviour of individuals or groups to identify signs of radicalisation and enable intervention before attacks happen. For data analysis and intelligence gathering, AI can integrate large datasets from sources such as surveillance footage, digital platforms, and financial transactions to provide comprehensive insights, and it can use natural language processing to analyse communication intercepts and online content for potential threats.
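As a hedged illustration of what such analytics might look like in practice, the sketch below scores accounts for anomalous activity using scikit-learn's IsolationForest. The features and numbers are entirely hypothetical, and any real deployment would require human review and strong safeguards against false positives.

```python
# Minimal sketch: anomaly scoring over hypothetical account-activity features.
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows are accounts; columns are illustrative features:
# posts per day, share of flagged keywords, rate of new contacts.
X = np.array([
    [3.0, 0.01, 0.2],
    [2.5, 0.00, 0.1],
    [4.0, 0.02, 0.3],
    [45.0, 0.40, 5.0],   # an outlying activity pattern
])

model = IsolationForest(contamination="auto", random_state=0).fit(X)
scores = model.decision_function(X)  # lower scores indicate more anomalous rows
for account, score in enumerate(scores):
    print(f"account {account}: anomaly score {score:.3f}")
```

An anomaly score is only a prompt for human investigation, not evidence of intent – a point the balance discussed below makes essential.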
AI could also improve operational efficiency in counterterrorism. It can optimise resource allocation for operations, ensuring personnel and equipment are deployed effectively, and it can provide decision-makers with real-time data and recommendations, enhancing the effectiveness of counterterrorism strategies. However, any countermeasures implemented must strike a delicate balance between safeguarding societal well-being and preserving the open exchange of ideas and innovation that are hallmarks of democratic societies.
Conclusion
To conclude, terrorist groups’ use of generative AI represents a concerning trend in the evolving landscape of global security. Examining groups like Hezbollah, Hamas, ISIS, and Al-Qaeda reveals that these organisations are increasingly exploring how to leverage advanced technologies for their nefarious objectives. The potential implications of such exploitation are wide-ranging, encompassing the dissemination of propaganda, the creation of sophisticated disinformation campaigns, the production of fake media content to incite violence or spread fear, and even the development of autonomous weapons systems. These implications affect the behaviour of people who are unaware of the sophisticated mechanisms used to create such content.
As we confront this emerging threat, policymakers, law enforcement agencies, and civil society must collaborate closely to develop robust strategies to counter terrorist entities’ misuse of generative AI. This entails enhancing regulatory frameworks to monitor and control the dissemination of AI technologies, fostering international cooperation to track and disrupt illicit networks, and investing in research and development of AI detection and attribution tools. Moreover, while addressing the security challenges posed by generative AI exploitation, it is essential to uphold fundamental principles of human rights, privacy, and free expression.
Vigilance, adaptability, and collaboration are paramount in the face of evolving threats. By taking appropriate measures to reduce the risks associated with terrorist groups’ exploitation of generative AI, we can create a safer and stronger future for all – one in which people can distinguish between genuine content and content generated by AI tools.