“The Changing of a Continent” is a column by journalist Kenneth R. Rosen that focuses on the US trans-Atlantic relationship and Europe’s future.
I am a goat. My home is an ark. Press send. My editor asks questions, then posts them online. Misinformation, disinformation, or “fake news”? As a journalist, I’ve got an obligation (moral, professional, social) to refrain from publishing misleading and untrustworthy information. Does saying I’m a hollow-horned ruminant mammal, or that my dwelling was built by a biblical figure, sever that trust and obligation by harming readers? Probably not. But determining what constitutes harm through misinformation and disinformation is hard, and defining that problematic content is harder still.
The European Union’s recent legislation aimed at curbing harmful online content, the Digital Services Act, places the onus of regulation and vetting on social media companies and seeks to prevent the circulation of misinformation and misleading content, specifically online. The act primarily holds companies responsible for removing any content containing child pornography, violence, terrorism-related material, or hate speech.
But other areas of harmful content remain murky and sometimes subjective. If someone misinterprets a study’s findings, should that be removed from a Facebook post? Are social media companies in the business of education or of policing? Are they on trial as deceived and unwitting conduits rather than as creators of falsehoods? Is my being a goat living in an ark parody, or a malicious ploy to ends yet unknown?
FIGHTING THE INFORMATION FIGHT
A ground war in Ukraine, the ongoing coronavirus pandemic, and deepening divisions among political parties have given rise to varied and all-consuming disinformation and misinformation campaigns in the digital economy, just as residents of one of Europe’s largest economies head to the polls for national elections. Yet in the whirlwind of information warfare, deciding who is responsible, and how to hold accountable those who disseminate falsehoods and propaganda, is something Brussels has failed to achieve even from its position as the global leader in digital-age regulatory standardization.
After the spectacular collapse of the Italian government in midsummer, Italians return to the polls on Sept. 25, 2022, to cast their ballots in a new general election. Unfortunately, Italian politicians are deeply divided on how to define and fight digital misinformation and disinformation, partly because they are among its distributors. For years, headlines across Italy have warned of election misinformation and interference. On Sept. 5, Meta held a training session on Zoom to instruct political parties and candidates on using its platforms, namely Facebook (though the company also owns WhatsApp and Instagram). However, the training is at odds with the company’s own policies, under which “content, including ads, from politicians, is not eligible for review by our third-party fact-checking partners.” This leaves a significant breach in the walls built to prevent “fake news” or misinformation from spreading across networks. These policies further complicate the EU’s efforts to stymie the flow of that content.
The new laws have their critics, who say the laws do little to accurately define which content should be marked for removal. But the laws also fail to consider how those companies legitimize or amplify politicians and newsmakers. Moreover, practicing harm reduction does not always align with a social network’s responsibility to refrain from outright censorship.
Worsening this so-called “infodemic” is a rise in misinformation and disinformation campaigns across the bloc, a perfect confluence of Russia’s invasion of Ukraine in February 2022 and the pandemic. And Italy ranks among the nations in the bloc most susceptible to disinformation and misinformation.
ADVOCATING FOR MEDIA LITERACY
These woes — defining harmful content, moderating and correcting rather than censoring — can be significantly mitigated by bolstering European education systems to focus on media literacy, which, as taught today, spends little time on identifying false information. Using its position as the bellwether of digital-sovereignty policies, the European Parliament could learn from the United Kingdom’s ministerial code, which says that “it is of paramount importance that Ministers give accurate and truthful information to Parliament, correcting any inadvertent error at the earliest opportunity.”
Setting examples in public forums at a nation’s highest chambers can educate the public and anyone faced with the daily struggle of assessing what truths lie buried beneath deluges of falsehoods.
As for whether I’m a goat, I am not. And I live in a modest apartment. Record corrected, as is my duty.
Kenneth R. Rosen is a columnist at Inkstick and an independent journalist based in Italy.