
Exiting Musk’s Twitter Has Compromised Nuclear Communication Channels

As “X” continues to morph and crumble, it becomes ever more important for the remaining nuclear experts to reach and inform an uncertain public.

Words: Clara Sherwood

Out of personal protest and my propensity for common sense, “X” will be referred to as “Twitter,” its rightful name. 

Since Elon Musk acquired Twitter in late October 2022, the platform has seen many changes. Musk fired thousands of Twitter employees, removed labels informing users which accounts were associated with foreign governments, mislabeled official government accounts, disbanded the trust and safety teams, capped how many tweets users can view each day, and now requires users to pay for a blue verification check mark. His latest change is renaming Twitter “X,” satisfying his long and odd obsession with the letter.

While many users might find these changes to be an annoyance, the sporadic remodeling could have even more detrimental effects: Musk’s takeover has the potential to affect nuclear crisis management and risk reduction efforts. Musk has dismantled thoughtfully placed Twitter safeguards and transformed the platform into an unreliable source of news and information. This has led to a decrease in reliable nuclear expert engagement and communication, giving rise to an important question: How can experts communicate with each other and the public on future and ongoing nuclear crises in a contested communication space rife with disinformation?

From Reliable Information to Misinformation

Prior to the Musk era of Twitter, the platform was an effective emergency management tool. Officials used it to communicate quickly with the public about timely issues such as natural disasters and active shooters, relaying information about evacuations and police response. A Washington Post article even found that local, state, and federal officials across the country have said they’ve seen Twitter “save lives and boost civic engagement.”

In addition to being an important emergency response tool, Twitter has also developed networks of dedicated experts and world leaders who play an important role in informing the public about crises and foreign policy issues. While there might be some doubt that expert commentary is viewed as credible on social media, an MIT working paper on Twitter and crisis signaling found that people do not perceive tweeted threats as “cheaper talk” than threats issued through official or traditional communication channels. That is to say, tweets from experts, world leaders, and other officials are taken seriously.

Since Musk monetized the blue check mark (formerly the identification of verified and authenticated users), disinformation and unverified accounts have proliferated across the platform, reveling in their newly purchased sham credibility. The New York Times reported that within the first 24 hours of the blue check mark change, at least 11 new accounts were impersonating the Los Angeles Police Department. On a global scale, several Twitter accounts have been actively misrepresenting both sides of the ongoing Sudanese civil war; an account with a purchased blue check mark reached more than 1.7 million people with a false tweet that Lt. Gen. Mohamed Hamdan, leader of the rebel Rapid Support Forces, was dead. In the corporate sector, a verified (but fake) account pretending to be the pharmaceutical company Eli Lilly tweeted that “insulin is free now,” prompting the real company to respond that this wasn’t true. While it’s unclear whether the tweet directly led to change, some credit the impersonation of Eli Lilly with “tanking the stock prices” and prompting the company to cap insulin prices at $35 after years of exploitative pricing.

Three years ago, in 2020, Rebecca Hersman (now Director of the Defense Threat Reduction Agency) was already warning that adversaries could use disinformation tactics to “prompt a leader to take action prematurely or, alternatively, to resist a necessary response, despite knowing certain details to be false or incomplete, as a result of increasing domestic political pressure.” In today’s constantly changing security and information environments, this research continues to have important implications. If a verified “Twitter Blue” account tweets that the United States has armed Ukraine with low-yield nuclear weapons and reaches 20 million viewers, would Russia feel pressure to respond hastily?

On Musk’s Twitter, it does not take a nuclear policy expert to imagine how adversaries impersonating global leaders or governments under the guise of a checkmark could complicate risk reduction and crisis management with a single tweet. 

Is Leaving “X” The Solution?  

Leading scholars have already expressed concern regarding nuclear escalation and Twitter. In 2020, Heather Williams and Alexi Drew published an important report about escalation via tweet. The authors find that Twitter has the potential “to be a disruptive technology and exacerbate tensions during crises.” They argue that, on the one hand, tweets from government officials have the power to shape the American public narrative positively and enhance transparency regarding US decision-making, minimizing misperceptions among foreign entities. Conversely, tweets can create confusion and misperception, inadvertently incentivizing adversaries toward escalation.

The ability of disinformation campaigns to manipulate the public depends on users’ ability to discern fact from fiction. With the removal of verified accounts, Musk has eliminated a key anti-disinformation feature the public once used to verify the validity of information. This permits potential adversaries to use sub-conventional tactics such as “complex influence campaigns including disinformation and weaponized social media” to challenge crisis stability. While Williams and Drew conclude that “tweets are unlikely to independently start a crisis,” they do warn that “tweets can enable or accelerate an ongoing crisis, and that American audiences will be disproportionately at risk to manipulation.”

Williams and Drew recommended that Twitter “enforce policies designed to limit platform manipulation and disinformation; actively remove content and accounts engaged in influence operations targeting ongoing crises.” In 2020, these goals seemed more achievable than they do now, as Musk continues to actively remove safeguards, giving disinformation a green light.

In the nuclear policy space, Musk’s changes to the platform have also hollowed out expert discussions, seriously hampering existing communication channels. Under Musk’s Twitter, several important nuclear scholars, who once contributed to lively social media conversations, have left the platform. Of the experts who remain, a majority are not logging on or engaging with content as they did in the past, and several are skeptical about the future of the platform, voicing apprehension about where Musk will take the social media company next.

This growing absence of expert voices communicating with the public on nuclear issues comes at a particularly bad time. Policymakers, scholars, and international organizations alike are warning that the risk of nuclear weapons use is higher than at any time since the Cold War. Pre-Musk Twitter provided a venue for users to receive credible information directly from nuclear experts, creating transparency in a field historically inaccessible to the public. Now, these open and reliable communication channels connecting experts, governments, and civil society are decaying, complicating potential future risk-reduction efforts.

While many experts remain vocal on the platform, challenging elected US officials’ rhetoric, providing context to articles, and holding the US government publicly accountable, there should be more discussion of the implications this decline in expert voices could have for future crisis communication. Adversaries could certainly look to escalate nuclear tensions by filling this growing void of expertise with strategic disinformation and weaponized social media. Hopefully, the remaining experts on the platform are prepared to respond.

The Necessity of Engagement Remains

The international security landscape continues to change, and the ongoing Russian-Ukrainian war provides a glimpse of how future conflicts will have both physical and virtual front lines. Russia and pro-Kremlin activists have had a significant impact on global audiences throughout the war, spreading “disinformation, manipulated imagery, forged documents, and targeted propaganda” through social media platforms like Twitter. Russia has used social media to sow mistrust among foreign audiences about the credibility of Ukraine’s government, successfully preventing an international consensus from rallying behind Ukraine.

In addition to promoting false narratives about President Volodymyr Zelenskyy canceling elections, US-funded Ukrainian weapons being used in French riots, and Ukrainian “baby factories” harvesting organs for the black market, Russia has also routinely distorted Zelenskyy’s comments, painting Ukraine as a country with nuclear weapons aspirations and capabilities.

Promoting specific narratives to suit a country’s agenda and shape public opinion is not a new phenomenon; the speed and actors now involved in the transmission of information are. When false narratives spread quickly on social media, they can affect traditional escalation pathways by obscuring the rationality of leaders, the credibility of their statements, and their communication with the public. Theoretically, false digital narratives could even be used to manipulate international consensus and the status quo on the future use of nuclear weapons.

The Russian-Ukrainian war has shown the global community that in such a dense thicket of unreliable news, open communication between nuclear experts, government officials, and the public is crucial. Twitter served as a tool for risk reduction efforts in the past, but nuclear experts might now need to shift gears and find a new channel to the public should Musk continue his current antics.

Since Musk acquired Twitter, nuclear expert engagement has fallen, and crisis communication has become convoluted, marred by disinformation and disinterest in the constantly changing platform. These effects have been amplified by ongoing security crises, like the war in Ukraine. Twitter may well implode entirely in the coming months given so many chaotic changes, but it is important that the nuclear policy community keep discussing what can be done to keep communication channels with the public open and reliable, so that nuclear risk reduction remains feasible.

Clara Sherwood

Clara Sherwood is a New Voices on Nuclear Weapons Fellow with the Federation of American Scientists and a graduate student in international affairs at George Washington University. She can be found on LinkedIn here.
