If you’re reading this, congratulations, you’ve survived the Great Balloon Crisis of 2023. To protect us from future balloons, Congress is addressing the root cause: social media. Since every hysteria comes with a demand to do something, multiple reports cite a reignited push to ban TikTok, a popular app for short-form videos.
Leaving aside what autonomous sensory meridian response (ASMR)-laden cooking tutorials and a glut of AI-generated art have to do with spy balloons, critics argue TikTok is a threat because it helps the Chinese government access sensitive information, harm youth, and engage in censorship. Each issue exists in degrees, but the question is: So what?
Facebook regularly experiences major security breaches. Instagram is linked to psychological harm. And censorship accusations toward Twitter are a bipartisan affair. TikTok isn’t worse than the competition, and banning it has significant costs, such as harming consumers and small businesses, provoking retaliation by the Chinese government, and stifling free speech. But this doesn’t mean that TikTok can’t be better regulated. Congress should craft regulations that are universal, limited, and targeted to the issues.
ARGUMENT AGAINST BANNING TIKTOK
Because a Chinese company, ByteDance, owns TikTok, people presume it’s used to steal security-sensitive information. But guilt by association isn’t evidence. A report from the Internet Governance Project at the Georgia Institute of Technology breaks down why. TikTok is a risk only if its data contained “insights into systemic US vulnerabilities” and if it were the only way the Chinese government could access such information. Most Americans are (thankfully) not carrying security-sensitive information.
Like Facebook and Snapchat, TikTok collects information about the device, the app, and the user, and it collects contact and geolocation data only with user permission. The Chinese government can already obtain far more sensitive data through cyber-espionage, as in the Marriott Hotel and Office of Personnel Management data breaches. And if someone did post security-sensitive information, the Chinese government wouldn’t need to exfiltrate the data: it could view the public video and collect open-source information on the user.
The second claim is that TikTok is bad for mental health. Users report engaging in self-harm. Many feel compelled to be on the app even when they don’t want to be, which is the definition of addiction. But actually… wait. That’s not TikTok; that’s Instagram. Social media’s harms are well documented but aren’t limited to one app. Two types of regulation could mitigate harm in this space: rules to prevent addiction and rules to protect minors. There are currently none of the former. The latter is mostly limited to the Children’s Online Privacy Protection Act, which restricts the ability of children under 13 to use the Internet. Because of the compliance costs, most companies choose to ban children under 13 entirely. The weak protections for children and the complete absence of addiction regulation create serious issues across all social media applications. The solution is regulation of all platforms to address the addiction-fueled business model.
The last claim is that TikTok engages in censorship. This occurs in two ways: restricting searches and removing posts. An analysis at the University of Toronto shows TikTok lacks the features to engage in server-side search censorship. The app also did not restrict political keyword combinations such as “Virus Life Beijing,” “Totalitarian Dictatorship,” and “Taiwan Independence.” In theory, the Chinese government could permit these terms but hijack them in favor of propaganda and misinformation. However, my own app shows that the top results for Uyghur, Taiwan, and Xi Jinping include videos critical of the Chinese government. The proliferation of anti-government content raises a bigger question: if TikTok does engage in massive censorship, why does the Chinese government ban it within its own borders in favor of Douyin, the government-compliant alternative?
All policies have costs, and banning TikTok has several. On the consumer side, the app has over 1 billion monthly active users, and in the United States it is disproportionately popular among women and people of color. At least one assessment suggests this is because the app centers on sharing instant experiences and fast content rather than self-promotion. Banning it would infringe the First Amendment-protected rights to free speech and association, and would do so disproportionately to historically marginalized groups. TikTok is also a growing venue for small businesses: 78% of small businesses report a positive return on investment with TikTok ads. It is now a popular place for cooking tutorials and indie brands, encouraging entrepreneurship and competition, and it provides marketing for tourist destinations. A ban hits the bottom line of all of these and falls disproportionately on newer businesses that lack exposure, making it bad for both businesses and the economy.
A ban could also provoke retaliation by the Chinese government. When the Trump administration considered facilitating a purchase of TikTok by Microsoft, the Chinese government’s state-run newspaper, China Daily, decried it as a “smash and grab” and threatened retaliation. If a ban went through, the Chinese government would have several options: breaching Cisco, as it has allegedly done in the past, or coercing companies with a major presence in China, such as Amazon and Apple. US-Chinese tensions are at an all-time high, just when both nations need to cooperate on issues like climate change and pandemic preparedness. What good would confrontation do when the benefits are so small?
BANNING IS NOT THE SOLUTION
If there were ever a rule that was both good policy and good politics, it would be: “Don’t ban things people enjoy… unless you really need to.” All three issues require targeted responses. To address national security, Congress should maintain the ban on TikTok on devices used by the federal government.
TikTok isn’t the only platform that’s bad for mental health. Congress should consider bipartisan regulations, such as mandated limits on scrolling, a ban on targeted advertising aimed at children, and a public online age-verification system. Social media companies would have less of a profit motive to encourage addiction and fewer means to do so. Addressing mental health is too important to be left to one app alone.
To address censorship, Congress should mandate algorithmic transparency. Companies would need to report on the metrics and forums that drive content decisions. Companies normally resist such attempts because their algorithms are part of their competitive advantage, but an industry-wide mandate reduces the free-rider costs. It also shifts the battlefield away from algorithms and toward the best content, tilting the balance toward content creators.
Congress can also create rules on finding and tagging state-sponsored propaganda and disinformation. The Department of Justice is already looking into rules on identifying foreign agents on social media. This should include not only identifying agents and their relationship to a foreign entity but finding unregistered foreign agents as well. These rules can promote transparency while empowering Americans to ignore propaganda.
TikTok’s problems are matters of product quality and are best addressed through regulation. Congress should regulate all social media instead of banning one and letting the rest off the hook.