As local officials continue to count votes in this year’s general elections, America’s digital ecosystem is ripe for disruptive and potentially dangerous misinformation. Elections always present logistical hurdles, but holding an election in the midst of a pandemic presents new and complex challenges. This year, for instance, saw unusually high voter turnout as Americans embraced early in-person and mail-in voting. These forms of voting, which may have been unfamiliar to many Americans before the pandemic, became targets of widespread election misinformation. Polarized online spaces will not only amplify and refract these challenges but also require that everyday users answer the call to digital citizenship. Social media companies like Twitter and Facebook bear immense responsibility for responding to misinformation outbreaks, but they must not — and cannot — go it alone.
At every level of American political life, the 2020 elections are testing not only social media companies’ ability to police their platforms but also social media users’ capacity and willingness to show restraint online. Both during and after this fraught period, every American must embrace the role of digital citizen by exercising caution and restraint when creating, consuming, and sharing online content.
This election season has unfolded against the backdrop of compounding political, economic, and public health crises, during an already heightened period of uncertainty and stress. Social media platforms have attempted to curb the spread of misinformation from sources foreign and domestic. At home, unsubstantiated rumors of widespread voter fraud have proliferated across platforms, amplified by President Trump and many in his inner circle. Foreign actors, particularly Russia, have picked up on these domestic threads, exploiting false claims of voter fraud in an attempt to depress turnout and undermine public confidence in the election results.
Social media companies have sought to act on lessons learned in 2016 by preparing for the most predictable challenge facing this year’s elections: a deluge of misinformation. Facebook and Twitter have slapped fact-checking labels on outright falsehoods meant to question the integrity of the election or of voting practices. Facebook has moved to remove “coordinated inauthentic behavior,” in which networks of fake accounts seek to influence public debate. Both companies have also prepared for the possibility that election results are delayed: Facebook vows to reject any ads that prematurely claim victory in this year’s elections, and Twitter has announced plans to label any tweets “that falsely claim a win for any candidate.”
Laudable as they are, these actions alone cannot offset the threat. Ultimately, responsibility rests with the lifeblood of social media: each of us. Every day, the average social media user views online content at a scale unimaginable to someone living in the United States just twenty years ago.
While companies such as Facebook and Twitter shape what users see through algorithms and content moderation policies, individual users are the real vanguard in combating misinformation. Humans are wired to believe what they want to believe. Social media users ultimately choose to generate and share content, true or not, and no technological safeguard erases a user’s duty to practice digital citizenship.
Seeing, believing, sharing: this three-part model, while disarmingly simple, is a useful tool for social media users to regulate their own behavior online and slow the spread of false information. Individual users bear the responsibility to decide whether to believe or question the information they see. They also have a duty to exercise caution and restraint in amplifying information with a like, share, or retweet. Before sharing anything, users should ask themselves whether they even believe the content in question: an essential threshold for good-faith conversations both online and in person. A recent MIT study indicated that this check alone can curb a user’s urge to share information online. Fundamentally, users should not believe everything they see, nor share everything they believe.
If Americans learned anything this year, it is that misinformation thrives in an already polarized society, aggravating existing tensions with potentially violent outcomes. This year’s election poses an unparalleled test of social media users’ commitment to digital citizenship. All online users can play a role in moderating and shaping the digital space. Exercising caution before sharing information online slows the spread of potential misinformation to others. Showing restraint is like wearing a digital mask in a crowded online ecosystem. It is the right thing to do. It benefits the collective good. And it has never mattered more.
Megan Lamberth is a research associate for the Technology and National Security Program at the Center for a New American Security (CNAS).
Chris Estep is a communications officer at CNAS.
Martijn Rasser is a senior fellow for the CNAS Technology and National Security Program.