With U.S. Secretary of State John Kerry looking on, U.S. Vice President Joe Biden raises his glass to toast Chinese President and Communist Party General Secretary Xi Jinping at a State Luncheon in the Chinese leader's honor at the U.S. Department of State in Washington, D.C., on September 25, 2015. [State Department photo/Public Domain]

Why the US and China Should Work Together on AI

When it comes to artificial intelligence, Sino-US competition will only get in the way of progress.

Words: Hadley Spadaccini
Pictures: US Department of State

Following up on a November 2023 meeting between Presidents Joe Biden and Xi Jinping, envoys from China and the US recently met in Geneva to discuss the risks of artificial intelligence and the creation of some form of global AI governance.

This first sign of cooperation is promising, but, as with many aspects of Sino-US relations, US policymakers tend to cast AI development as a zero-sum competition between the US and China. The AI “race” with China inspires anxiety in the US policymaking establishment, especially around whether China will catch up to the US. In reality, collaboration with China is not only the best way to ensure that the US does not “fall behind” in AI; it may be the only way.

There are three main challenges to collaborating with China on AI. The first is overcoming zero-sum thinking. The second is developing mutually created and agreed-upon regulations and guidelines for AI algorithms. The third is establishing and supporting avenues for collaboration between the two countries. Overcoming zero-sum thinking and fostering collaboration between the US and China will ensure continued innovation, while transparent, agreed-upon regulations will mitigate the risks associated with AI.

Much of the zero-sum thinking around AI comes from the fact that both China and the US see AI as essential for enhancing their military capabilities, economic competitiveness, and influence on the world stage. These motivations have driven both countries and their businesses to invest heavily in AI research and development.

The US and China are neck-and-neck in cutting-edge AI research, with China leading the way on detailed AI regulations. The Geneva meeting marks a promising step toward global AI governance and collaboration, but the US government needs to realize that action against Chinese AI technology development will impact its own as well.

AI Collaboration

In 2023, Stanford University’s Institute for Human-Centered Artificial Intelligence published a study finding that US and Chinese researchers collaborate on AI research more than researchers from any other pair of countries.

The global AI market has vital nodes in the US and China, especially at universities and companies where researchers, businesspeople, and engineers are trained to build and implement cutting-edge AI algorithms. Hindering AI development with China would not only be nearly impossible, given the strong people-to-people connections in the field, but also self-defeating if the US wishes to develop better AI technology.

Unfortunately, the US is actively sabotaging US-China AI collaboration through sanctions and general suspicion of Chinese AI. Critics have lambasted China’s AI as “authoritarian” and thus a threat to individual freedoms and economic security, leading to discussions about “de-risking” from Chinese AI. These voices call for countries to exercise heightened caution around Chinese AI, continuously audit and red-team those technologies, and regulate where and when that AI can be used, essentially echoing the same standards experts say should apply to all AI usage and security.

Complicating this, it is not always immediately obvious whether an AI technology has a “sensitive” application. Large language models (LLMs) like ChatGPT and Llama may not be sensitive in and of themselves, but their applications can introduce risk. For example, it is possible to use LLMs to create mass disinformation campaigns to influence elections. The Geneva meeting will hopefully reach a consensus on what “sensitive” AI is, but it is always possible that an innocuous AI algorithm will become a threat through some unintended application of that technology.

Zero-Sum Thinking

Another key concern feeding many policymakers’ zero-sum thinking is intellectual property (IP) theft. Luckily, for AI, IP theft is less of a concern than it may appear. Stealing an AI algorithm would be only the easiest step in the process; the real barrier is the enormous amount of resources needed to train and tune cutting-edge models. Training such AI requires thousands of high-grade graphics processing units (GPUs), which rely on advanced semiconductors that Chinese businesses cannot easily access. Even with access, doing so would be prohibitively expensive.

As a result, this kind of zero-sum thinking is impractical, with the added consequence of obscuring the many opportunities for collaboration between the US and China. The creation of mutually binding regulations and ethical standards between the US and China can start to break down this zero-sum thinking. 

Ensuring that AI conforms to international law, has human oversight, and doesn’t control the usage of nuclear weapons has been generally agreeable to both American and Chinese parties in previous Track II dialogues. If there is a foundation of mutual trust built upon standards that both countries can agree upon, they can more easily create opportunities for US and Chinese businesses and universities to further collaborate on AI.

Step in Right Direction

Fortunately, the “candid and constructive” Geneva talks between the US and China may represent the first steps toward more open collaboration to create AI that avoids mutually harmful applications. However, this is all empty talk if the US and China do not create avenues to build trust in one another, such as enacting more business-friendly data privacy laws in China, abandoning restrictions on technologies relevant to AI, and establishing initiatives in which businesses and universities in both countries can collaborate and invest in AI research and development.

This is not to say that we should ignore AI’s inherent risks. Rather, the US and China have the opportunity to understand and regulate those risks through collaboration. As Andrew Ng, founder of Google Brain and former Chief Scientist at Baidu, has noted, the US excels as a hotbed for AI innovation, while China is adept at bringing AI quickly to market.

Combining those comparative advantages, whether in profit-generating enterprises or cutting-edge research and development, would greatly simplify the process of establishing an international regulatory framework for AI development. Knowledge sharing between the best and brightest of both countries, facilitated by their governments, could contribute to building mutual trust while enhancing relations.

All in all, the Geneva talks are a step in the right direction, even if sanctions and other manifestations of zero-sum thinking on AI research and development persist. For these talks to succeed, the US needs to stop treating Chinese AI as harmful simply because it is Chinese; continuing to do so would only lead to negative consequences for the future of American AI and competitiveness.

Hadley Spadaccini

Hadley Spadaccini is a current East Asia intern at the Quincy Institute for Responsible Statecraft with a background in the AI and analytics industry. She received her MA in Asian Studies, focusing on Chinese foreign policy, trade, and econometrics, from the George Washington University.
