The NYT Op-ed Page Just Published an AI Weapons Infomercial

Raj M. Shah and Christopher M. Kirchoff’s essay showcased a deeply flawed vision of emerging technologies in US defense strategy.

Words: William D. Hartung
Pictures: Stéphan Valentin

The question of how far to go in developing and deploying autonomous weapons controlled by artificial intelligence deserves a serious debate. Unfortunately, the New York Times set back the cause last week by publishing an essay by Raj M. Shah and Christopher M. Kirchoff that hypes AI-driven systems while ignoring the risks involved in going full-speed ahead in deploying these emerging technologies.

There’s no question that Shah and Kirchoff have experience on the issue. They helped build the Pentagon’s Defense Innovation Unit (DIU), which is focused in part on bringing AI into the military. And Shah now runs a venture capital firm that invests in defense tech startups — yet another variation on the revolving door between the Pentagon and the weapons industry. But for all their knowledge of finance and technology, the co-authors of the Times piece have a deeply flawed vision of the potential role of emerging technologies in US defense strategy and military posture going forward.

In lieu of a careful assessment of the technical and strategic implications of shifting towards a new generation of weapons systems, Shah and Kirchoff fall back on the tried and true argument of military techno-enthusiasts — we’re in a “civilizational race” with China to perfect and deploy these technologies. Once this becomes the overriding rationale, any effort to put guardrails around the use of these new technologies goes out the window. The notion of negotiating limits on this technology, which would serve the long-term interests of both nations, is nowhere to be found in Shah and Kirchoff’s essay.

Multiplying Deadly Risks 

There are plenty of reasons to go slow on incorporating AI into next-generation weapons systems. Christian Brose, the “chief strategy officer” of the rising military tech firm Anduril, has argued that the key to modern warfare is shortening the kill chain — the time between identifying a target and destroying it. Taken to its extreme, this approach would require the use of robotic weaponry that removes humans from the equation. Current Pentagon guidelines promise not to go down this road, but the logic of compressing the kill chain as far as possible could cast that pledge aside as part of a new AI arms race.

As America’s drone wars of this century have shown, the prospect of civilian casualties from errant strikes or loose guidelines on who can be targeted is a persistent problem in the use of these purportedly more accurate systems. Robotic weapons would multiply that risk many times over, and could even cause mass slaughter in the event of a malfunction. These systems would also make it easier to go to war, given the prospect of fewer casualties on the attacking side.

Nor should the claim that weapons based on emerging technologies will be cheaper, more nimble, and easier to replenish — made repeatedly by Deputy Secretary of Defense Kathleen Hicks in touting the Pentagon’s Replicator initiative — be taken at face value.

The Costs of “Miracle Weapons”

The history of Pentagon procurement is littered with “miracle weapons” that cost too much, achieved too little, and failed to make a difference on the battlefield. 

From the “electronic battlefield” that was supposed to destroy Viet Cong supply lines in Vietnam, to Ronald Reagan’s promise of an impenetrable shield against incoming ballistic missiles, to the “revolution in military affairs” (RMA) that was supposed to parlay precision-guided munitions and superior communications networks into a decisive military advantage, none of the most celebrated high-tech systems of the past five decades has enabled the United States to win wars against seemingly weaker adversaries who relied on low-tech systems like improvised explosive devices. And costs have crept up as well.

Few people may remember that the F-35 combat aircraft — an overpriced, underperforming system that, at an estimated $2 trillion over its lifetime, is on track to be the most expensive weapons program in the history of the Pentagon — was originally promoted as representing a revolution in military procurement that would produce more capable systems more quickly, at a lower price. 

Twenty-three years later, the F-35 is still plagued by performance problems, unable to carry out basic missions and so hard to maintain that it spends inordinate amounts of time on the ground being fixed. A persistent problem with the F-35 has been the difficulty of developing the complex software required to make it work as intended — a problem that will only be more daunting in the case of autonomous weapons.

Public Scrutiny 

Shah and Kirchoff are correct to point out that the Pentagon’s current procurement practices are woefully out of date, and that many of the systems the department purchases are not well-suited to the most likely conflicts of the future. But that doesn’t mean we should race to develop and deploy autonomous systems without adequate discussion and debate.

The possibility that decisions on how and when to deploy AI-driven systems will be made on the merits is clouded by the fact that there are huge amounts of money to be made by moving as quickly as possible. Venture capital firms have poured tens of billions of dollars into defense tech startups, and firms involved in AI-driven systems have hired dozens of former military and Pentagon officials to help them make their case in Washington.

Meanwhile, major players in the emerging military tech sector like Peter Thiel of Palantir and Palmer Luckey of Anduril are throwing large sums at Donald Trump and other Republican candidates whom they think will give them the freest hand in developing autonomous systems. And as the world now knows, Republican vice presidential candidate JD Vance was employed by Peter Thiel, and Thiel was a major funder of Vance’s successful run for the Senate. Palantir’s influence machine has grown even stronger with the recent hire of former representative Mike Gallagher, who chaired the hyperbolic congressional committee on the threat posed by the Chinese Communist Party.

Given the risks involved and the special interest advocates mustered behind it, it is essential that the public get access to objective analyses of the consequences of going ahead with an AI-driven military. By publishing a piece by two uncritical cheerleaders for emerging military technologies, the New York Times fell down on the job. It needs to do better going forward.

William D. Hartung

William D. Hartung is a senior research fellow at the Quincy Institute for Responsible Statecraft.
