
Facebook as a Life-Critical System

How misinformation on social media is impacting our physical and mental health.

Words: Daniel Rogers
Pictures: Yassine Khalfalli

If our cars worked as well as our computers, we’d all be dead. Could you imagine going 60mph down the interstate, and boom, system failure? It’d give the “blue screen of death” a whole new tragic and more literal meaning.

Contained in this darkly humorous analogy is a deeper question: Why is it that all things digital are so much less reliable and seemingly so much more harmful than other “tech” we experience elsewhere in our lives? Why is it that our cars, our planes, or the myriad other items we use regularly (that are all themselves controlled by computers, by the way) are so much more reliable and so much less likely to kill us than, say, Facebook?

Part of the answer is complexity. It turns out that a car or a plane actually does far fewer things, abstractly speaking, than even the simplest word processor, even if the things it does seem a lot more, shall we say, impactful? And until recently, we didn’t think of social media platforms like Facebook or YouTube as being particularly deadly. It’s only in the past few years that we’ve realized just how dangerous disinformation can be. In other words, what you consume in the media can, in fact, kill you.

HOW SOCIAL MEDIA KILLS 

At the outset of the pandemic, Professor David Yanagizawa-Drott of the University of Zurich and colleagues examined the link between media diets and health outcomes. Specifically, they compared COVID-19 infection rates among viewers of Sean Hannity’s show on Fox News, which downplayed the risk from the virus, with those among viewers of Tucker Carlson’s show, which took the pandemic more seriously during its first few months. They documented a “significant” and “robust” causal relationship between which show viewers watched and changes in their behaviors, their infection rates, and, ultimately, their death rates. Their work demonstrates definitively that media diets have had a direct causal effect on health outcomes during the pandemic. We now know that unvaccinated people are at least 20 times more likely to die of COVID-19 than vaccinated people. Yet disinformation remains the primary reason people still do not get vaccinated.


Historically, we’ve never thought of these kinds of media platforms as “life-critical” technologies, but maybe we should start. After all, scholars have shown time and time again that the media we consume can be deadly to ourselves and those around us. And yet, we continue to think of what happens “online” as not being part of the “real world,” nor capable of the same kind of harm a physical object like a car or a bomb can cause.

This is what needs to change. Saying something happens “online” and not “in the real world” is like saying something happened “only in the newspaper.” “Online” is simply the medium through which real humans express real thoughts and feelings (much of the time, at least), and those real thoughts and feelings result in real actions. To say that it’s not part of the “real” world is a blind and antiquated way of looking at things.

Of course, I’m not saying that Facebook is literally in the same technology category as a pacemaker or a car’s airbag, but right now, because of certain legal carve-outs, the gulf of liability between social media platforms like Facebook and life-critical technologies is far, far too wide. In fact, that’s a big part of why our cars are so much safer and more reliable than our computers; car manufacturers are liable for the cars they make and the potential harms they might cause. Automotive safety equipment is regulated. There are entire agencies in the government tasked with overseeing highway safety. And when something goes wrong, automakers are held to account, not only publicly, but legally.

WHY LIABILITY MATTERS

This is what’s missing in tech right now — legal liability for the products these companies create. Facebook’s newsfeed informs nearly half the world’s population. And yet, because of a specific legal carveout created a quarter-century ago, Facebook and platforms like it are immune from any legal liability in their home country. Their algorithms can recommend users join white supremacist groups, amplify conspiracy theories, promote eating disorders, and even inspire genocide, but they remain legally immune. Injured parties try over and over again to hold tech platforms to account for the roles they play in everything from fraud to sex trafficking, and yet these lawsuits keep failing right out of the gate because of this specific law: Section 230 of the Communications Decency Act (CDA). 

There are a few new cases pending related to regulating Facebook, but precedent does not bode well for their prospects. For example, in Force v. Facebook, filed in 2016, families of victims of attacks perpetrated by the terrorist group Hamas attempted to hold Facebook to account for its algorithm’s role in helping Hamas recruit new followers. The district court found that Facebook could not be held liable under CDA Section 230 and dismissed the case, despite the fact that the platform’s recommendation algorithm actively aided Hamas in its recruiting efforts. The federal appeals court upheld that decision, and the US Supreme Court declined to hear a further appeal.

One could easily argue that companies like Facebook owe much of their outsized financial success to this weird and unique liability carve-out, which exists in nearly no other industry. The only other high-profile industry with these sorts of liability protections in the US is gun manufacturing, and even that protection doesn’t go as far as the law shielding the tech industry. Gun makers can still be sued in certain circumstances, for example, if their products are defective and cause injury or death to those who use them. In fact, those limits recently led to a historic settlement between gun maker Remington and the families of the victims of the Sandy Hook shooting. If Facebook and other platforms were legally liable, they’d have to adjust their algorithms to be less divisive, less addictive, and less harmful. Sure, they’d likely make a bit less money, but they’d also stop killing us.

SECTION 230 NEEDS AN UPDATE NOW

This is one of the many reasons why Congress must act to update Section 230 of the CDA, the law that shields tech platforms from legal liability. There are a number of proposals pending in both houses of Congress. Some focus on exemption carve-outs related to civil rights, sex trafficking, or personal data. Others focus on commercial activities like selling ads. One promising proposal seeks to set the most basic liability standard, a mens rea requirement, under which a platform can be held liable when it knowingly harms its users. Something that simple should be noncontroversial, but Facebook is a powerful lobbying force (in fact, the largest DC lobbying force among all the big tech players), and these days getting enough support in both houses of Congress to do nearly anything remains an elusive goal.

Still, if we are ever going to vaccinate enough people to get out of this pandemic, stem the tide of domestic violent extremism, combat climate change, or tackle any of the other countless crises facing humanity now or in the future, we must reform the toxic information environment currently warping the worldviews of half the planet in the name of ad revenue. It’s time to hold the most influential industry in the world to account.

Daniel J. Rogers is an Adjunct Professor at New York University’s Center for Global Affairs, where he teaches about disinformation and narrative warfare. He is also a Fellow of the Truman National Security Project and the Executive Director of the Global Disinformation Index.

