In 2010, then Secretary of State Hillary Clinton gave a speech on Internet freedom that reflected one of the State Department’s priorities during her tenure: pioneering and embracing digital diplomacy. Through Internet forums and social media, the average person now had potentially far-reaching influence, something previously enjoyed by few outside the political world or celebrity circles. Clinton’s State Department, aware of the potential weaponization of social media by terrorist groups and autocratic regimes, saw the many benefits of these technologies for civil societies around the world, and rightfully so.
Social media played a pivotal role in organizing protests in Tunisia, Egypt, Libya, Yemen, and elsewhere during the Arab Spring. The 2014 Revolution of Dignity in Ukraine also began with calls on Facebook to meet on the Maidan in Kyiv. These same forces, however, have been utilized by governments to identify dissidents and spread propaganda. A 2019 Human Rights Watch report revealed that Saudi Arabia was using Twitter to harass and target dissidents. Russia has used social media to push narratives that rewrite 20th-century history. Some states have actively worked against Internet freedom, such as China, which has blocked many Western social media platforms within its borders.
In this sense, social media is a dual-use technology, serving both peaceful and nefarious purposes. The key question is: can social media be regulated effectively? And if so, how?
US REGULATION OF SOCIAL MEDIA
In 2019, the Republican-led Senate Intelligence Committee released a report outlining Russia’s use of social media platforms to influence the 2016 US presidential election. Posts made by troll accounts of Moscow’s Internet Research Agency (IRA) sought to polarize Americans, pushing them to the extremes of their political leanings, and to harm the chances of candidates with a hardline approach toward Russia, including Clinton along with Republicans Marco Rubio, Ted Cruz, and Jeb Bush. The IRA, based in Saint Petersburg, Russia, deploys a team of trolls to attract American audiences on social media platforms with the goal of sowing discord in American society. The report was the culmination of a three-year effort to understand how a foreign entity had co-opted Facebook, Twitter, and other social media platforms to harm American civic society.
While foreign social media influence has gained attention as a national security threat, the hazard posed by domestic peddling of false information is severely understated. The January 6 terror attack on the Capitol was largely planned on social media platforms, premised on unfounded claims of election fraud, and driven by incitements of violence reaching all the way up to the president of the United States himself. Posts on websites like Parler, Gab, and MeWe, used primarily by right-wing citizens, along with Twitter and TikTok, called for a descent on the Capitol on January 6 to disrupt the 2020 election certification. Approximately 80 percent of the top posts on TheDonald, a fringe far-right website, incited violence. Yet, despite months of this social media-based organizing, Capitol Police were largely unprepared for what took place.
While not the only mode of communication for extremists and propagandists, social media platforms are cheap, accessible, and, most importantly, popular, making them prime venues for extremist groups to recruit and disseminate information. While the Department of Defense certainly has other means of tracking extremism, it cannot utilize domestic social media information in operations, which can be an obstacle to gauging threats. These restraints on data collection stem from Cold War legacy laws, such as President Ronald Reagan’s Executive Order 12333, which some have argued are outdated in terms of both procedure and policy.
After serious pushback from Republicans and Democrats alike, Facebook and Twitter have in recent months increased oversight of content on their platforms. According to the Pew Research Center, most Americans believe that social media sites censor political viewpoints. As such, the GOP supports social media regulation to combat what it sees as the disproportionate sanctioning of conservative voices, while the Democratic Party is more concerned with antitrust reform. Both parties, however, support reforming Section 230 of the 1996 Communications Decency Act, which waives the legal liability of social media platforms for hosting content that traditional media like print and television could be sued for. This protection, which is ultimately what permits freedom of speech on these privately owned platforms, is unique to the United States and a testament to the value Americans place on expression and community. But it has also allowed these platforms to create, and loosely enforce, their own policies against hate speech, extremism, and abusive content, producing a lack of uniformity that hampers efforts to combat extremism and misinformation at the meta level.
HOLDING BIG TECH RESPONSIBLE
The power of Big Tech companies has come under public scrutiny repeatedly in recent years, particularly in relation to American politics. Social media sites have heeded the pressure to hold politicians more accountable for violations of each platform’s policies. They have also taken steps to remove trolls and fake accounts, as well as accounts that spread conspiracies and false information. In the first quarter of 2019, Facebook reportedly removed 2.19 billion fake accounts. In July 2020, Twitter announced that it had purged roughly 7,000 accounts related to the QAnon conspiracy.
Leading up to the 2020 US general elections, Facebook and Twitter tightened their politics-related policies, including flagging posts that contain false information. In the most extreme cases, both have removed accounts they have deemed dangerous, such as that of former Senior Counselor to the President Steve Bannon after he called for the beheading of Dr. Anthony Fauci and FBI Director Christopher Wray. These steps, however, have hardly scratched the surface in rectifying the ways in which social media has been utilized to radicalize people, push conspiracies, and harm civic society.
The problem is even worse on fringe platforms. Parler, founded in 2018 to cater to conservatives displeased with mainstream social media companies’ terms of service, has little if any moderation, making it particularly hospitable to conspiracies, hate speech, and false information. Indeed, it was instrumental in the January 6 insurrectionists’ ability to organize. On mainstream platforms, false information and extremism can thrive in echo chambers, a result of algorithms that aim to provide a steady stream of preferred content to the user. Private groups and message boards also serve as a platform within a platform for organizing around extremism, such as the one many US Customs and Border Protection officers belonged to and used to share abusive content about Representative Alexandria Ocasio-Cortez.
Furthermore, many posts escape judgment entirely. For example, a viral post attributing a quote about “feeding Americans small doses of communism” to Soviet Premier Nikita Khrushchev has been swept into American political discourse despite being demonstrably false. Many related posts are still up on Facebook without a flag for “disputed information”; one received 89,000 shares, and only some reiterations of it are flagged as false. Similarly, a post from Parler, in which a woman with false credentials claimed that Michigan was the victim of massive election fraud, was screenshotted and shared on Facebook in November.
Before the US Capitol siege on January 6, there were nearly 1,500 QAnon-related posts that openly discussed using violence, and the fact that these calls for violence were not enough to shut down the associated accounts is worrisome. That social media platforms flag posts containing verifiably false information as “disputed” when they likely should be removed is similarly concerning. Aside from calls for political violence, pandemic reporting has been especially vulnerable to the proliferation of falsehoods, with posts claiming everything from “drinking alcohol increases one’s resistance to coronavirus” to “black people are resistant to COVID-19.” Many of these posts have been flagged, but from my own feed, it seems that these banners do little to convince consumers that the claims are false.
REGULATING THE DIGITAL COMMONS?
While government surveillance of social media would set a dangerous precedent, the laissez-faire approach to these platforms has resulted in very real national security threats and civic stressors. The aforementioned Senate Intelligence report recommended information-sharing between Big Tech companies and the government, as well as disclosure of the sources of political advertisements. While welcome, these policies are reactive in nature. There must be a proactive effort to stop the spread of false information and, especially, calls for violence.
With all its merits, social media poses a serious challenge to truth: unfounded opinions have been platformed and treated as equivalent to, if not trumping, fact-driven analysis. This not only blurs the line between real and fake but also hinders one’s ability to convince others that they might have been misled by what they saw on Facebook. The other option for a platform user is to report the post in hopes that it is removed, which can be a lengthy process, if any action is taken at all.
Propaganda and extremism are not new problems, nor will they disappear entirely with the cleaning up of social media platforms. This phenomenon speaks to larger, systemic issues in the United States, such as the constraints of the Common Core curriculum and the rhetoric used by politicians and political pundits, issues which will take years, if not decades, to correct. It is unlikely that extremism could be completely eradicated from social media without heavily censoring all posts, which is inefficient, if not impossible, and could result in the accidental suspension of many rule-abiding users.
Much, however, can be done now to make social media platforms less hospitable to extremism and propaganda. The first step would be increased information-sharing between these platforms and the government regarding posts and group content of a concerning nature, something already recommended by the aforementioned Senate committee report. Other actions could include removing verifiably false information rather than simply flagging it, and cultivating an initiative for users to report extremist content: while any user can report posts, how many of us have seen extremist, abusive, or propagandistic posts only to keep scrolling? The bystander effect in cyberbullying situations has been researched extensively, but how a lack of intervention in extremist content sharing might facilitate its spread remains largely unexplored. Nevertheless, the fight against extremism and false information would benefit from Big Tech companies investing more resources in moderating content.
If January 6 was an intelligence failure, it is in part because we have yet to recognize the salience of social media as a tool that can be used in efforts to harm Americans and the national interest. It is problematic that we commonly consider social media to be a public space but hardly hold people responsible for their actions on it. Traditional public spaces like national parks are maintained by an array of government workers and private citizens. The maintenance of the digital commons is simply not happening at the level it needs to be.
Madison L. Sargeant is a senior at Boston University studying international relations and statistical methods.