
Pitting Existential Risks Against Near-Term AI Risks is a False Dichotomy

We can, and should, address both — and from a global perspective.

Words: Heather Ashby
Pictures: JR Korpa

Since OpenAI released ChatGPT to the public at the end of 2022, there has been a surge of interest in Artificial Intelligence (AI), and with it much speculation and analysis about the opportunities and risks the technology presents. As a range of entities, from governments to civil society organizations, seek to understand the implications of AI advances for the world, a debate is growing about where governments and multilateral institutions should focus limited resources. Is it more pressing to address macro-level existential risks or the more tangible near-term risks posed by AI?

But this framing presents a false dichotomy. The current and potential uses of AI demand an approach that addresses both existential and near-term risks from the technology.

Existential Risks

What we term AI existential risks are uses of the technology that could cause catastrophic harm to humanity. Prominent voices in the debates over AI's potentially catastrophic consequences include Elon Musk, UK Prime Minister Rishi Sunak, and philosopher Dr. Nick Bostrom. Well-funded organizations, including Open Philanthropy and the Future of Life Institute, are also leading the charge on AI existential risks.

The AI existential risks that these organizations and experts raise range from nuclear weapons to sentient machines. Films and television have helped popularize these concerns, from the 1980s movie “WarGames” to “The Terminator,” whose Skynet has become a reference point for what a sentient AI system could become.

Aside from Hollywood's takes on AI over the decades, there are valid concerns about the technology's potential to be used in destructive ways. As AI is increasingly applied to the tools of warfare, it is not inconceivable that it would play a role in the most destructive weapons humanity has ever developed. Many experts foresee AI being applied to nuclear weapons, including further automating command and control and even the decision of when to launch a nuclear strike. The lack of an international agreement on AI's use in nuclear weapons, particularly one involving all countries that possess nuclear arms, makes this an ongoing threat to humanity.

Another existential risk preoccupying experts is the application of AI to biosecurity. Anthropic CEO Dario Amodei and RAND researchers have expressed concern that an individual could prompt a large language model (LLM) for help developing bioweapons or put it to other nefarious uses. This extends long-standing fears of state and non-state actors building and launching biological or chemical weapons against populations.

While AI’s role in nuclear weapons and biotechnology might seem specialized, given the limited number of capable actors, the global impact of these existential risks could be profound. As with nuclear weapons, where the decisions of a few affect the entire world, AI existential risks could shape the fate of all humanity.

Everyday or Near-Term Risks

The application of AI across sectors from healthcare to transportation will continue to accelerate. We already encounter AI when we engage with a chatbot while making a purchase online, receive a banking fraud alert, or see Netflix recommend a movie or TV show based on our previous viewing. As AI becomes integrated into more aspects of our lives, the “everyday risks” of the technology will grow.

Experts and organizations focused on these near-term risks include Joy Buolamwini, Kate Crawford, Safiya Noble, the Algorithmic Justice League, and the Distributed Artificial Intelligence Research Institute, among others. Even Vice President Kamala Harris has urged international focus on near-term risks, which range from AI-powered predictive analytics in criminal justice to the creation of deepfakes.

There are many pressing near-term risks. The data sets used to train AI models, and the developers building the systems, carry their own biases. In some respects, the data fed into AI is a reflection of humanity, with all the good, the bad, and the horrible. For years, technologists have pointed to racial and gender biases in facial recognition technology, as well as the ethical questions raised by law enforcement's use of it in criminal investigations, where a false match could implicate an innocent person in a crime.

Moreover, the growing availability of AI image generators lowers the cost of creating deepfakes. A simple web search turns up a multitude of websites purporting to create them. Given that the majority of deepfakes are nonconsensual pornographic images of women, the issue deserves far more attention, and an actionable remediation strategy, from technology companies and governments than it currently receives.

Why Not Both?

Both existential and near-term AI risks have profound global implications. The international community and individual governments need to develop approaches for addressing both, instead of arguing that limited resources force a choice between them.

As more governments and businesses across the world use AI, the technology's impact will widen. For all the extensive discussion of near-term and existential risks, we still do not fully understand how AI will evolve in different contexts, particularly in other regions of the world, or how it might even be used to advance peace. Viewed globally, AI risks also include surveillance systems employed by authoritarian governments, risk-scoring tools in financial services that could hinder access to banking for millions, and data collected in conflict areas that could feed AI systems without regard for privacy.

International convenings on AI have mainly involved countries with the means to foster AI development. The G7, the Organization for Economic Cooperation and Development, and the UK-led AI Safety Summit play a disproportionate role in shaping discussions on AI. Despite its challenges, the United Nations remains one of the best venues for discussing global governance of technology and both categories of AI risk. With its Summit of the Future coming in September 2024, the UN could and should play an instrumental role in convening experts, civil society, and governments to identify AI risks as well as opportunities to apply the technology for good.

In addition to the UN, the Non-Aligned Movement (NAM) should take up this issue. NAM is the second-largest international grouping after the UN, and Uganda recently assumed its presidency. NAM is well placed to address the uneven development and application of AI and to inform the international community's approach to both existential and near-term risks. It is not without its issues, of course, but its ability to bring countries of the Global South together offers an opportunity to track AI risks, and how they affect regions outside North America, Europe, and East Asia, and to develop recommendations in response.

We are at a critical juncture to push for a global governance approach to AI risks that considers the full range of issues and does not leave other countries and their citizens behind. Bubbling beneath the surface of this debate is the broader question of global inequality and whether AI's advancement will entrench the divide between the haves and the have-nots. How the debate over existential and near-term risks evolves will significantly shape whether AI deepens global inequality and gaps in technological access or helps reduce that divide.

Heather Ashby

Heather Ashby is a national security and foreign policy professional based in Washington, D.C.
