
Fighting The Inferno of Online Misinformation

The digital infrastructure that generated networked conspiracies like QAnon persists as the primary hub of misinformation at scale.

Words: Brad Honigberg
Pictures: Joanne Francis

Despite the growing real-world threat posed by networked conspiracies, the digital infrastructure that has facilitated misinformation at scale remains fundamentally unchanged. Social media platforms serve as vectors for the mass manufacture, dissemination, and consumption of conspiracy beliefs, bringing like-minded individuals together and allowing them to coordinate actions in the real world. The Internet’s economic model — grounded in harnessing attention and ever-expanding scale — has transformed the traditional American “marketplace of ideas” into a virtualized circus in which “conspiracy” and “theory” decouple from one another.

No online movement exemplifies this phenomenon better than QAnon. The sheer vastness of QAnon-tangential beliefs and the community’s malleability make it impossible to define precisely: It offers a “big tent” of apophenic speculation ranging from New Age spiritualism to age-old anti-Semitism to violent apocalypticism. While the movement’s loose philosophy enables followers to construct their own personalized versions of reality, each subscribes to a master narrative that the world is not as it seems and that dark forces are pulling the strings. The movement has evolved into a fringe yet powerful force in American politics and has been tied to acts of violence, murder, and terrorism.

Politically motivated lies and grand conspiracies have been exploited for centuries, yet networked conspiracies like QAnon could only arise through a digital infrastructure that fosters misinformation at scale. Technology corporations didn’t envision conspiratorial communities like QAnon thriving on their platforms, but they should have: these movements simply leverage the platforms’ tools exactly as they were intended to be used. After all, on the Internet, everything open is eventually exploited. Without a fundamental shift in the way these companies operate, non-reality-based movements will continue to proliferate online and play an ever-larger role in our societal discourse.

THE ORIGINS OF THE MISINFORMATION WILDFIRE 

Over the last 25 years, technology corporations have prospered through a combination of lax consumer regulation, buying out competition, eschewing risk, and a laser focus on ever-expanding scale and audience growth. The modern Internet has calcified into a duopoly controlled by Facebook and Alphabet (parent company of Google and YouTube). Through digital advertising, these companies have decimated the journalism industry and subtly aligned the persuasive interests of social actors with the platforms’ own commercial incentives. For technology corporations, it is shareholder profit that drives innovation, not higher principles like access to quality information, public health, or democracy.

Over time, perpetual information warfare and political extremism, waged by domestic actors and amplified by US adversaries, are every bit as much an existential threat to democracy as any military threat imaginable.

Propelled by algorithms that favor novel content over factual information, anyone with the ability to provoke an emotional response (anger being the most engaging) can make money or achieve influence by grabbing a user’s attention and then selling that attention to others. Such functionality carries an inherent risk because of its obvious potential to lead susceptible individuals into progressively more extreme views. Through repetition, redundancy, responsiveness, and reinforcement, social media’s infrastructure guides users down rabbit holes of misinformation and conspiratorial beliefs to hold their attention as long as possible.

Social media algorithms group people with homogeneous characteristics into buckets and distribute content based on signals from millions of similar people. By harnessing and exploiting the human tendency toward homophily, or love of the same, platforms preemptively personalize users’ newsfeeds to form a feedback loop of self-affirmation and familiarity. As netizens respond positively to certain messages, the newsfeed algorithm ensures that they see more of them; these signals ripple out to their extended networks and eventually reverberate back to the originator. The more often you hear a claim, the less likely you are to assess it critically; the less you interact with diverse viewpoints, the more likely you are to adopt extreme ideological positions. Eventually, individuals come to believe that only their siloed community possesses the real truth and that all others are ignorant or potentially evil. While algorithmic “filter bubbles” are not limited to a specific political persuasion, their influence on far-right audiences (amplified by disinformation espoused by conservative media and elite narrative framing) has been far more propagandistic and radicalizing.
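
To make this mechanism concrete, here is a deliberately simplified sketch in Python of how a purely engagement-driven ranking signal becomes a self-reinforcing loop. The posts, topics, and scoring rule are invented for illustration; no platform’s actual ranking code is this simple.

```python
# A toy model (not any platform's real code) of an engagement-only feed:
# posts are scored by overlap with topics the user has already engaged with.

from collections import Counter

def rank_feed(posts, user_history):
    """Score each post by how often the user engaged with its topics before."""
    affinity = Counter(user_history)  # topic -> count of past engagements
    return sorted(posts,
                  key=lambda p: sum(affinity[t] for t in p["topics"]),
                  reverse=True)

posts = [
    {"id": 1, "topics": ["conspiracy", "politics"]},
    {"id": 2, "topics": ["gardening"]},
    {"id": 3, "topics": ["conspiracy", "health"]},
]

history = ["conspiracy"]              # a single early click...
for _ in range(3):
    top = rank_feed(posts, history)[0]  # ...puts related content on top,
    history.extend(top["topics"])       # and each engagement reinforces it

print(history)
# ['conspiracy', 'conspiracy', 'politics', 'conspiracy', 'politics',
#  'conspiracy', 'politics']
```

Even in this toy model, one early click is enough to lock the feed onto a single topic cluster; nothing in the objective ever asks whether the content is true.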

Amidst the cascade of chaos induced by the COVID-19 pandemic, QAnon communities delivered the kind of digital engagement that social networks prize. Facebook’s algorithmic prioritization of groups, for example, exposed millions of users to QAnon-related conspiracies and helped adherents form bonds, making the platform a dangerous vector of public health misinformation. According to the Wall Street Journal, average membership in ten large public QAnon Facebook groups swelled by nearly 600% from March through July 2020. An investigation by the Guardian found that Facebook’s QAnon community had over 4 million members by August 2020. Similarly, QAnon influencers gamed YouTube’s recommendation algorithm and formed communities on Twitter by hijacking hashtags. Thanks to the global reach of social media platforms and the upheaval wrought by the pandemic, distinct QAnon variants have spawned in over 70 countries.

The speed and scale of social media allow radicalizing, hyper-mobilizing messages to travel instantaneously. Research from the National Consortium for the Study of Terrorism and Responses to Terrorism has found that, over the last 15 years, the average time span of radicalization in the United States has shrunk from 18 months to seven months. In a post-Jan. 6 world, the ideological creed and commitment to violence espoused by many QAnon adherents is likely to mutate as they coalesce with other online extremists into a stochastic online mob where anti-government, white supremacist, and misogynist ideologies collide.

Social media’s obsession with harnessing attention and ever-expanding scale has fostered hyper-partisanship, sectarianism, and a tribal ontology that dispenses with the burden of explanation, focusing instead on repetition, affirmation, and conformity, making it a potent political weapon with quasi-religious qualities. Recognizing that disinformation can be used as a strategy to acquire power and mobilize loyalty, far-right politicians have sought to leverage “Extremely Online” conspiracy communities as distribution nodes for alternative versions of reality. In 2020, nearly 100 congressional candidates embraced some aspect of the QAnon movement, with two gaining seats in the House. Additionally, a slew of QAnon adherents have gained positions of power at the local level across the United States.

This infrastructure also provides enemies of democracy a limitless attack surface to sow discord, create doubt, and provoke destructive actions. Among other disinformation operations, foreign-based actors have harnessed QAnon narratives to fuel their campaigns. A report from the Soufan Center found that nearly one-fifth of 166,820 QAnon-related Facebook posts between January 2020 and February 2021 originated from overseas administrators, with over 58% of those foreign-based posts coming from administrators in China, more than double the share from Russian administrators. Within America’s conspiratorial wildfire, shared institutions, apolitical courts, a foreign policy that stops “at the water’s edge,” and even agreement on the outcomes of elections become impossible. Over time, perpetual information warfare and political extremism, waged by domestic actors and amplified by US adversaries, are every bit as much an existential threat to democracy as any military threat imaginable.

The Internet’s business model has fostered a singularity of conspiracy communities that has conditioned many Americans to believe they can’t trust anything, thus corroding the foundations of democratic governance and objective reality. While past social changes eventually produced civilizing externalities through increased social interaction and interdependence, today’s digital infrastructure has inverted that dynamic, unevenly distributing anti-social behavior and paranoia across society. QAnon is but one of many harsh lessons about allowing misinformation to spread unchecked across social media platforms.

HOW BIG TECH IS FIGHTING THE MASS BLAZE OF MISINFORMATION

Technology corporations have begun taking steps toward assuming greater accountability for public discourse on their platforms. Social networks increasingly employ artificial intelligence alongside human moderators to flag harmful posts and remove malign disinformation networks. Facebook’s removal of hate speech rose tenfold from 2018 to 2020, and the company says it shut down 150 networks of fake accounts between 2017 and the end of 2020, many of them foreign disinformation efforts aimed at influencing Americans, others created in the United States by domestic extremists. YouTube removed 11.4 million videos in the last quarter of 2020, along with 2.1 billion user comments, up from just 166 million comments in the second quarter of 2018. Twitter removed 2.9 million tweets in the second half of 2019, a figure that has continued to double with each reporting period. And the majority of social platforms took significant steps to curb the spread of misinformation in the lead-up to the 2020 US election.

Efforts by mainstream technology corporations to moderate harmful content and deplatform QAnon conspiracists and other online extremists in the wake of the “Stop the Steal” insurrection have yielded results. Analyzing over 40 million appearances of QAnon-related catchphrases between Jan. 1, 2020, and April 1, 2021, the Atlantic Council’s DFRLab showed that the taglines and phrases associated with the movement have all but disappeared from Facebook, Twitter, and Google over the past four months. Additionally, after being cut off by backend service providers like Amazon Web Services and Twilio, right-wing-focused social media platforms such as Parler and Gab did not absorb the exodus of QAnon conversations. Whether the volume of QAnon discussion persists in private groups or on encrypted messaging platforms is difficult to determine. More recently, acting on its Oversight Board’s ruling, Facebook announced that former President Donald Trump will remain barred from the platform until at least 2023 and that politicians will no longer receive a pass for breaking the company’s hate speech rules.

Yet these efforts have been reactive, prompted by tragedy and arriving only after conspiratorial content has been widely spread and internalized. A recent poll from the Public Religion Research Institute and the Interfaith Youth Core found that 15% of Americans, a share larger than the membership of most major religions, believe the core QAnon tenet that the levers of power are controlled by a cabal of Satan-worshiping pedophiles. The same share said it was true that “American patriots may have to resort to violence” to depose the pedophiles and restore the country’s rightful order. Furthermore, both foreign and domestic purveyors of disinformation are growing more creative in their efforts to co-opt legitimate social media users for their influence operations.

STRATEGIES FOR FIGHTING THE MISINFORMATION INFERNO

The entire infrastructure of the Internet needs an overhaul. The best way forward is a multi-stakeholder approach that fosters increased collaboration among technology corporations, the government, and civil society to balance the benefits of free expression with the need to protect citizens and democratic institutions. To supplement existing moderation policies, technology platforms should pursue community-based curation as a proactive way to protect and encourage civic participation. This bottom-up approach would involve working with civil society actors such as librarians and journalists to elevate credible news sources over inflammatory, divisive, and sensational content.

By abdicating responsibility for determining what speech is acceptable, the US government has placed the burden on a small group of private corporations. Mitigating the spread of mis- and disinformation is not at odds with the right to freedom of expression; rather, it is essential to safeguarding that right and giving users access to timely, relevant, accurate information. To protect freedom of speech while curtailing harmful content, legislators should draw sensible boundaries around the protection from civil liability afforded to social media platforms by Section 230 of the Communications Decency Act of 1996. Congress could propose carve-outs from Section 230’s liability protections, similar to the 2018 FOSTA-SESTA legislative package, so that social media companies could be held liable for specific categories of user-generated disinformation or hateful content.

Regulating modern technology corporations is fundamentally different from the antitrust enforcement of the 1890s, which sought to break up price-setting cartels and lower costs for consumers. In today’s case, the products are free. These companies tip toward monopoly when network effects combine with data-hoarding that limits our freedom of choice among social technologies. Rather than attempting to break up the largest platforms as a cure-all, regulatory agendas should focus on enforcing interoperability so that a user’s data from one platform can be instantly rerouted to others. Making data portable would help level the playing field and allow more benign networks to arise.
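
As a rough illustration of what mandated interoperability could mean in practice, the sketch below serializes an account into a neutral, documented format that a competing service could import. The schema name and field layout are assumptions invented for this example, not an existing standard or any platform’s real API.

```python
# A minimal sketch of data portability: an account exported to a common,
# platform-neutral document that any compliant service can rebuild.

import json

def export_account(user_id, connections, posts):
    """Serialize an account into a portable, documented format."""
    return json.dumps({
        "schema": "portable-social/1.0",  # hypothetical common schema
        "user": user_id,
        "connections": connections,       # who the user follows
        "posts": posts,                   # the user's own content
    })

def import_account(document):
    """A competing service rebuilds the account from the same document."""
    data = json.loads(document)
    assert data["schema"] == "portable-social/1.0"
    return data["user"], data["connections"], data["posts"]

doc = export_account("alice", ["bob", "carol"], ["hello world"])
user, follows, posts = import_account(doc)
print(user, follows)  # alice ['bob', 'carol'] -- the graph moves with the user
```

The policy point is the shape of the interface, not its details: once the social graph can move, network effects stop functioning as a lock-in mechanism.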

In order to prioritize freedom of speech over freedom of reach, technology companies must do more to ensure their algorithms serve the public interest.

In contrast to (or in conjunction with) this approach, directly regulating algorithms would mean that technology corporations are not liable for each tiny piece of content but bear legal responsibility for how their products organize, target, and amplify information at scale. Critically, this strategy’s focus on algorithmic transparency would help root out inauthentic behavior: the millions of bot accounts that manipulate discourse in the digital public square should not have the same rights to freedom of speech as humans do. As AI-driven language modeling grows more adept at generating disinformation, the challenge of distinguishing between real and fake speech online grows more urgent by the day.

Increasing algorithmic transparency would also allow users to better understand why they see what they see and help foster curatorial algorithms with a sense of responsibility for what they return. Building reliability metrics and accuracy labels into algorithms could help reduce the spread of misinformation and promote baseline media literacy efforts. Competition could be introduced by offering “middleware” software that allows users to choose algorithms that prioritize higher-quality content. This would allow new actors to take over the gatekeeping function currently dominated by the largest technology companies’ opaque algorithms. Even if middleware encouraged further informational balkanization, that danger is less significant than the one posed by concentrated platform power that currently fosters misinformation at scale.
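
A brief sketch of the middleware idea, under the same caveat: the platform supplies candidate posts, while the user selects which third-party ranking function orders them. The engagement and reliability scores below are invented placeholders for whatever reliability metrics and accuracy labels middleware providers might actually compute.

```python
# A minimal sketch of "middleware": users pick the ranking function,
# so the gatekeeping role no longer belongs to one opaque algorithm.

from typing import Callable

Post = dict
Ranker = Callable[[list], list]

def engagement_ranker(posts):
    """The incumbent default: whatever provokes the most reactions wins."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

def reliability_ranker(posts):
    """A middleware alternative: weight engagement by source reliability."""
    return sorted(posts,
                  key=lambda p: p["predicted_engagement"] * p["source_reliability"],
                  reverse=True)

def render_feed(posts: list, ranker: Ranker) -> list:
    return [p["id"] for p in ranker(posts)]

posts = [
    {"id": 1, "predicted_engagement": 0.9, "source_reliability": 0.2},  # viral, dubious
    {"id": 2, "predicted_engagement": 0.5, "source_reliability": 0.9},  # solid reporting
]

print(render_feed(posts, engagement_ranker))   # [1, 2]
print(render_feed(posts, reliability_ranker))  # [2, 1] -- same posts, new gatekeeper
```

Swapping the ranker changes which post reaches the top without the platform touching the content itself, which is exactly the gatekeeping function described above.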

Algorithms are more difficult to define and control than other human-designed systems because, like living organisms, they change alongside human behavior. With this in mind, it will be critical for all levels of society to participate in watchdog activities. An apt historical metaphor for this dynamic type of regulation is environmental protection: To improve the ecology of a forest, it isn’t enough to regulate the companies unsustainably chopping down trees; how the forest is used by people and animals must be taken into account as well. These technologies depend on everyone’s behavior, so fixing this “wicked problem” will require whole-of-society problem-solving.

Additionally, just as the BBC did in the 1930s to challenge the informational influence of authoritarian leaders and profit-seeking demagogues on the radio, publicly funded social media ventures should be explored to develop digital platforms that aren’t solely focused on absorbing user data. As the quality of attention garnered by digital advertising continues to decline, alternative economic models could be pursued. The goal shouldn’t be to create the next Facebook; rather, it should be to encourage a more pluralistic vision of cyberspace that better facilitates constructive communication.

WHERE’S THE SILVER BULLET?

There is no silver bullet for solving our current crisis of misinformation at scale and the networked conspiracies it generates; a combination of strategies is required. In order to prioritize freedom of speech over freedom of reach, technology companies must do more to ensure their algorithms serve the public interest. Fundamentally, it should be the values of humanity, not corporate profits, that determine how these platforms shape politics and societal discourse. Only then can America’s democratic “marketplace of ideas” begin to rise from the ashes of the virtualized circus that currently exists.

Brad Honigberg is pursuing a Master’s degree in Security Studies at the Georgetown University Walsh School of Foreign Service. He previously served as Social Media and Outreach Coordinator at the Center for Strategic and International Studies in Washington, DC.

