
AI Will Make Extremists More Effective, Too

The potential to quickly produce misinformation can have dangerous repercussions.

Words: Jackie Lacroix
Pictures: Tom Dahm

With the rapid public growth of AI technology, fierce debates are emerging among tech ethicists and policymakers around regulation: who controls the technology, who has access to it, and what intervention might look like. This discussion is paramount considering the potential harms posed by AI and the less-than-stellar track record of many big tech firms when it comes to mitigating the negative societal impacts of their products.

Andrew Torba, founder and CEO of Gab, has also been worrying about this issue for months. Torba, a prominent Christian nationalist and promoter of antisemitic and white supremacist ideas, has laid out his thinking on AI repeatedly in his newsletters to Gab users. In February of this year he wrote, “If our enemies lay claim to the mass adoption of AI they will ensure that dissident Christian voices and the Biblical worldview are stamped out of society for generations to come.” What does the “enemy” look like to Torba? Those in our society who push “CRT [Critical Race Theory] in schools, the anti-White content in entertainment, media, and education, and the anti-Christian cultural content.”

Torba seeks to counter this threat by encouraging Christian nationalists to build and apply AI to “red pill” the next generation. His use of the term refers to the process by which someone comes to recognize the “truth” of our society by moving toward far-right beliefs. He highlights the “unimaginable” potential of AI to quickly create high-quality content at very low cost, with the goal of leading “counter-narrative truth operations.”

The Use of AI for Extremist Propaganda and Recruitment

Torba’s call to fellow Christian nationalists illustrates one of the potential risks posed by the widespread public availability of Generative AI (GenAI) tools: their use in the promotion of and recruitment to extreme ideologies.

In 2020, the Center on Terrorism, Extremism and Counterterrorism (CTEC) at the Middlebury Institute of International Studies found that GPT-3, the neural language model released by OpenAI that same year, could generate “interactive, informational, and influential” extremist text. CTEC identified several ways in which AI-generated extremist content could bolster the ability of extremists to promote their ideologies and recruit new adherents. As Torba also alludes to, GenAI significantly reduces the amount of time needed to create content. More specifically, CTEC found that GPT-3 could be used to create convincing fake forum threads in the style of extremist communities such as Iron March, a now-defunct neo-Nazi site. Such content, the researchers posited, has recruitment potential given its simulation of a robust group identity consisting of multiple (nonexistent) individuals with shared beliefs and a culture of (faked) regular engagement.

This issue is complicated further by the capability of numerous GenAI tools to respond to user prompts with manufactured evidence and seemingly scientific, though false, data. Scientists testing Meta’s Galactica, a model intended to aid in writing scientific literature, found that it would generate academic-style documents that only individuals with deep existing knowledge of the subject could readily identify as incorrect. Earlier this year, a New York lawyer used ChatGPT to conduct legal research and discovered, after submitting a court filing, that the bot had invented several of the court cases it cited as precedent. A violent extremist looking to support and promote their beliefs could capitalize on such “hallucinations” to invent convincing “proof” and explanations for hateful beliefs, such as falsified evidence that the Holocaust never happened.

What’s Next?

Many GenAI tools have safeguards in place to prevent the creation of particular types of content, but, much like the content moderation policies on most social media platforms, those safeguards are not particularly difficult to circumvent. And given the rapidity with which many of these tools have been released to the public, both internal community standards and external regulatory efforts are lagging behind. The result is a period in which GenAI is particularly vulnerable to exploitation by bad actors.

As widely available GenAI tools threaten to become the biggest thing in tech since social media, we as a society risk disregarding the hard-won lessons of that earlier technology when it comes to impacts on peace and democracy. We have not yet managed to build an effective system for addressing violent radicalization and the profusion of hate speech and harassment on social media, and the spread of publicly available AI tools further complicates both those harms and efforts to address them.

Given that AI models generate content from the data fed into their systems, addressing the types of harmful content discussed above should start with that data. AI companies should invest time and resources in building robust policies on the types of content to be excluded from their training data, policies that balance open access to information against its potentially harmful applications.

Content such as QAnon conspiracy theories and neo-Nazi talking points may be useful to researchers or national security professionals using GenAI, but it’s rather difficult to see the benefit of an AI system providing this information — without necessary context — to the wider public. Perhaps some data does not merit exclusion, but rather the implementation of algorithms that label or otherwise contextualize this type of content. To this end, regulatory and content moderation decisions should include experts in violent extremism as well as civil rights and social justice advocates with deep knowledge of the impacts of tech-amplified harms.

The time to consider and plan for these negative outcomes is now. With numerous national elections taking place around the world in 2024, and given the tendency of violent extremists to mobilize around high-profile political events, we ignore the potential exploitation of AI by extremist actors at our own peril. We, as a society, are past the point of blue-sky thinking that new technology won’t reflect the worst impulses of humanity just as it reflects the best.

Jackie Lacroix

Jackie Lacroix is an analyst and program manager with ten years of experience in security analysis and preventing online extremism. She currently leads several US-based projects at an organization focused on preventing online harms, including violent extremism and mis- and disinformation. She can be found occasionally on Twitter.
