
Can AI Help Combat Disinformation?

Across the world, AI-powered systems are springing up to aid the fight against disinformation.

Pictures: Muhammad-taha Ibrahim
Nigerian Fact-Check Elections’ latest innovation is an automated tool that helps analyze whether a claim is true or not. Users type a claim into a text field, and the system does the work of analyzing the veracity of the statement. I decided to try it out.

I punch in “has ECOWAS invaded Niger?” (ECOWAS stands for the Economic Community of West African States.)

The most important analysis inside the AI check tool runs on GPT-4, the most recent version of OpenAI’s language model. In real time, the tool scours the web for articles related to the question asked, and the language model then analyzes tens, maybe hundreds, of these articles to reach a judgment. The tool is not yet complete, although it is already being beta-tested on the Fact-Check Elections website.
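
Fact-Check Elections hasn’t published its code, but a retrieve-then-judge pipeline of the kind described above might look roughly like this Python sketch. The search helper, prompt wording, and verdict labels here are my own illustrative stand-ins, not the tool’s actual implementation:

```python
# A rough sketch of a retrieve-then-judge fact-check pipeline. `search_web`
# is a hypothetical stand-in for whatever news-search API the real tool
# uses; the model choice, prompt, and verdict labels are assumptions too.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def search_web(query: str, max_results: int = 20) -> list[dict]:
    """Hypothetical helper: returns [{"title": ..., "summary": ...}, ...]."""
    raise NotImplementedError("plug in a real news-search API here")


def check_claim(claim: str) -> str:
    # 1. Retrieve recent articles related to the claim.
    articles = search_web(claim)
    snippets = "\n\n".join(f"{a['title']}: {a['summary']}" for a in articles)

    # 2. Ask the language model to judge the claim against that evidence.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a fact-checker. Given a claim and recent "
                    "articles, answer TRUE, FALSE, or NO STATEMENT POSSIBLE, "
                    "with a short explanation grounded in the articles."
                ),
            },
            {"role": "user", "content": f"Claim: {claim}\n\nArticles:\n{snippets}"},
        ],
    )
    return response.choices[0].message.content
```

The design choice worth noting is that the model never judges the claim in isolation: its verdict is grounded in whatever the search step returns, which is why a tool like this can answer “no statement possible” rather than guessing.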

It first assesses whether it is able to evaluate the claim.

The report pops up right away: “The claim ‘has ECOWAS invaded Niger’ can be verified and check the resources below.” 

It then presents its findings under four tabs: the result, the fact-check query, sentiment analysis, and a list of credible sources of information on the query.

The result is perhaps the most important of the four. It is a sentence stating whether the claim is true, false, or indeterminate:

“No statement possible,” it says in the result tab. 

That isn’t all: it goes on to tell me why in the fact-check query tab: “As of the moment, there are no reports showing that ECOWAS has invaded Niger. However, please note that the situation in the world can change rapidly.”

The sentiment analysis, I’m told by the team at Fact-Check Elections, tries to infer the emotional stake of the claim: is it positive or negative? Is it fake news cosplaying as good news or bad news? Does it want to seed unneeded panic or unrealistic hope?

The sentiment analysis for “has ECOWAS invaded Niger?” is neutral because it asks a question rather than presupposing an answer. Inputting “ECOWAS has attacked Niger” returns a negative sentiment.
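
Again, the real implementation isn’t public, but the sentiment step could plausibly be as simple as asking the same model a narrower question. The prompt below is my own guess at how such a classifier might be framed:

```python
# A sketch of the sentiment step: the same model, asked a narrower question.
# The prompt is an assumption; Fact-Check Elections hasn't published its own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def claim_sentiment(claim: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the emotional charge of this claim. Answer "
                    "with exactly one word: positive, negative, or neutral."
                ),
            },
            {"role": "user", "content": claim},
        ],
    )
    return response.choices[0].message.content.strip().lower()

# claim_sentiment("has ECOWAS invaded Niger?")   # "neutral": a question
# claim_sentiment("ECOWAS has attacked Niger")   # "negative": an assertion
```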

The credible sources tab sifts through all the sources on the web and comes back with a list of the most trusted sources of information. 

Olasupo Abideen, the global lead at Fact-Check Elections, tells me that when fully functional, the tool will be an authoritative aid to help the public check the veracity of news claims. You wouldn’t have to sift through a web often filled with near-identical articles repeating the very lies you’re trying to confirm. You just come to this tool, and it confirms or debunks the claim. The tool was recently named a winner of the US-West Africa Tech Challenge held in Abidjan, Côte d’Ivoire.

Expanding the AI Fact-Check Toolbox

This isn’t the only tool against disinformation that uses AI. In 2019, a team combining members from Full Fact, Africa Check, and Chequeado won the Google.org AI Impact Challenge for an AI tool that aided newsrooms in fighting disinformation by tracking fake news, cross-referencing claims with fact-checks, and even transcribing and checking live television and radio. When I spoke with David Ajikobi, Nigeria editor for Africa Check, in March, he said the tool was used during Nigeria’s election and was a game-changer.

The International Foundation for Electoral Systems (IFES) also has a system, deployed during the 2022 Kenyan elections and the 2023 Nigerian elections. IFES’s system is slightly different; instead of cross-referencing claims against previous research and sensibilities, it provides intelligence on the whole information atmosphere. The system scours social media in real time, processing thousands of posts on given keywords, often election-related, and summarizing what the information atmosphere looks like. “Is this group of people being targeted? What narratives are being peddled? It provides this information for decision makers like the electoral committee,” says Matt Bailey, the global cybersecurity and information integrity advisor for IFES.
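
IFES hasn’t published its system either, but the monitoring loop Bailey describes could be sketched along these lines. The post-fetching helper, prompt, and model here are stand-ins of my own:

```python
# A sketch of narrative monitoring in the spirit of the system described
# above. `fetch_posts` is a hypothetical stand-in for a social-media API;
# the prompt and model are assumptions, not IFES's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def fetch_posts(keyword: str, limit: int = 1000) -> list[str]:
    """Hypothetical helper: return recent post texts matching a keyword."""
    raise NotImplementedError("plug in a real social-media API here")


def summarize_narratives(keyword: str) -> str:
    posts = fetch_posts(keyword)
    sample = "\n".join(posts[:200])  # keep the prompt within context limits
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You monitor election-related social media. Summarize "
                    "the dominant narratives in these posts, note which "
                    "groups are being targeted, and flag anything that "
                    "could incite real-world harm."
                ),
            },
            {"role": "user", "content": sample},
        ],
    )
    return response.choices[0].message.content
```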

When it was deployed in Kenya, the electoral committee used information provided by the model to frame press releases and warnings. “It basically gave a chance to react against narratives before they caused real-life problems,” Bailey explains.

Most of these tools run on large language models (LLMs), AI models that analyze language and have also proved very useful for propagating hate speech and disinformation. But their ability to process language effectively, the very thing that makes them great at spreading disinformation, also makes them useful in the fight against it.

“When the model is not trained on that data, how does the algorithm know it’s disinformation?”

Kingsley Owadara

LLMs like GPT-4 are basically elaborate text prediction tools. They’re trained on large datasets of conversations, articles, and an unimaginable amount of text scraped from the web. But to predict language is to have a language bias. These models are embedded with arbitrary rules derived mostly from the data they’re fed and some human specification.

It’s complicated, but here’s a terrible simplification: if the text says “kove,” it’s very possible the writer meant to write “love.” What if they wrote “fove”? The model might ordinarily translate that as “love” too, because the data it’s trained on is biased toward that word, but a parameter layered on top says that because the letter d is closer to f on the keyboard, the second word was more likely intended to be “dove.” That’s two parameters. Now consider more factors and raise the count to a hundred, one thousand, a hundred thousand, one million. GPT-4 is rumored to run on a cumulative 1.76 trillion parameters spread across eight models. Its freely available elder, GPT-3.5, runs on 175 billion parameters.
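
To make the simplification concrete, here is a toy version of those two “parameters” in Python. The word list, scores, and keyboard rule are invented for illustration; real models encode nothing this legible:

```python
# A toy version of the two "parameters" in the simplification above: a
# frequency bias learned from data, plus a hand-layered keyboard rule.
# All numbers are illustrative, not taken from any real model.

corpus_freq = {"love": 0.9, "dove": 0.1}  # "love" dominates the training data

# Neighboring keys on a QWERTY keyboard (tiny excerpt)
keyboard_neighbors = {"k": "l", "f": "d"}  # k sits next to l, f next to d


def guess(word: str) -> str:
    first = word[0]
    scores = {}
    for candidate in corpus_freq:
        score = corpus_freq[candidate]  # parameter 1: bias from the data
        # parameter 2: boost the candidate whose first letter is a
        # keyboard neighbor of the letter actually typed
        if keyboard_neighbors.get(first) == candidate[0]:
            score += 1.0
        scores[candidate] = score
    return max(scores, key=scores.get)


print(guess("kove"))  # "love": k neighbors l, and the data favors love anyway
print(guess("fove"))  # "dove": f neighbors d, which outweighs the data bias
```

Scale those two hand-written rules up to billions of learned weights and you have something like the statistical machinery inside GPT-4.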

Even that many parameters are not enough to equal the complexity of human language. Factors like pop words, Indigenous languages, and the use of sarcasm are often blind spots for these language models, according to Kingsley Owadara, a lawyer and AI ethics enthusiast. “When the model is not trained on that data, how does the algorithm know it’s disinformation?”

Ethical AI

Decades of inequality have already privileged certain voices on the internet over others, and so the data LLMs are trained on does not reflect some realities, especially in the Global South. “We are working towards breaking the language and culture barrier,” Abideen tells me. “We need to improve diversity.”

In his essay in The Economist, historian and philosopher Yuval Noah Harari argues that AI’s capacity to effectively comprehend and manipulate human language is an important shift in history. “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images,” writes Harari. “AI has thereby hacked the operating system of our civilization.”

The upshot is that those who control AI models have, by extension, the power to control humans. For example, the folks at IFES have to take precautions so their system does not become a surveillance tool in the hands of bad actors. Already, Blackbird AI uses a similar system to help corporate entities analyze the narratives and reputations they command.

Abideen tells me that one of the biggest advantages of their system is that it decentralizes access: anyone can use it. It’s not limited to licensed newsrooms, electoral commissions, or paying corporate clients. This also has a downside in that it removes any shield between the model and the ordinary person. If the model makes a mistake, there’s no middleman protecting the public from it. And as already shown, LLMs are really good at blunders. The prioritization of credible sources already makes this model considerably better. Yet, “ensuring absolute accuracy remains a challenge, and there’s a need for constant adaptation to emerging disinformation tactics,” Abideen says.

Olatunji Olaigbe

Columnist

Olatunji Olaigbe is a Nigerian freelance journalist. He’s a winner of the 2021 IOM West and Central Africa Migration Journalism Awards.

