
How Hollywood Skews Our Understanding of AI

Words: Erin Connolly
Pictures: Inanc Avadit

Hollywood is fascinated with the future. Long before Schwarzenegger was offering to “pump up Sacramento,” he was walking around naked and stalking Sarah Connor in The Terminator. Movies have depicted the consequences of evolving technology for captivated audiences for years. While Wall-E made my 13-year-old self worry about sustainable habits on Earth, other films like I, Robot have subtly shaped the way the public perceives robots — and therefore artificial intelligence (AI) — today.

There is a major divide between public perception and US government policy on the topic. In fact, the consistent image of robots threatening humanity — combined with opaque policy — has undermined the Department of Defense’s ability to work with the private sector on emerging technology. Several documents released recently — like President Trump’s Executive Order and the Department of Defense AI Strategy summary — begin to clarify how government intends to incorporate AI, and make clear that Hollywood has a new role to play.


The Pentagon is now confronting what movies have been debating for decades: how to integrate artificial intelligence into daily life and the military. What might be defined as “future technology” is already here, and the societal tension with “smarter” technology is increasingly palpable. Artificial intelligence is everywhere. It is Siri naming the song on the radio and Facebook automatically tagging photos. Yet this integration is a bit anticlimactic for a Hollywood blockbuster. Artificial intelligence is already revolutionizing society. It also has significant military repercussions — the kind that make their way onto the big screen. In reality, the military is currently focusing its AI research on menial tasks that increase efficiency, like sifting through hours of drone footage and predicting equipment maintenance needs. But in the (very) long term, AI will help autonomous weapons systems become more independent, sparking fears of killer robots on the battlefield.

The image of killer robots created by Hollywood has permeated the public debate, even at the international level, through advocacy work like the “Campaign to Stop Killer Robots.” But in the wake of canceled projects and ethical concerns, the Pentagon has begun to search for ways to rebrand its artificial intelligence efforts. The technological cutting edge is now in the private sector. Despite efforts to model development after private industry practices, the government struggles to keep up with startup innovation. To facilitate critical private-public partnership, the Pentagon is now streamlining the process through a “centralized AI portal… that details key processes, topics of interests and contacts.”

The 2018 AI Strategy provides a glimpse into how the US government is confronting AI, along with the promise of an articulated “vision and guiding principles for AI ethics and safety in defense.” Articulated principles are critical to the strategy’s goal of working with “nontraditional centers of innovation” to maintain the technological cutting edge. The strategy also emphasizes the need to cultivate AI innovation outside of “core military applications,” which works to change the negative connotation of government partnership. However, a streamlined process and clear principles are unlikely to sway public opinion. Hollywood’s depictions of AI killer robots — even Disney’s Smart House takeover — foster public distrust of US government investment, while technologies like Siri and Alexa are readily accepted despite clear and present security implications.

Even efforts to push Siri and Alexa further, like making these devices a “Friend,” are generally accepted by the public. That acceptance makes companies more comfortable pursuing civilian artificial intelligence, but these companies fail to acknowledge that the commercial sector takes “products 90% of the way to a useable military application.” While the Department of Defense works to expand and improve public perception of Pentagon AI research, Silicon Valley must confront the inherent military consequences associated with AI. In the world of AI, protecting technology to ensure it is used as intended may require working with the government.

The “killer robot” image has proven difficult for the government to shake, and recent AI strategies show the Pentagon is concerned enough to put effort into overcoming the stigma. While these documents mark a positive first step, it is critical for the government to continue to articulate its AI strategy and set norms for the ethical use of AI. As decision makers work to integrate AI, Hollywood can positively impact the debate by sparking a conversation about the social and military uses of AI. The Terminator may not be upon us, but Siri and Alexa have been here for years.

Erin Connolly is a Program Assistant at the Center for Arms Control and Non-Proliferation where she focuses on youth education, artificial intelligence, and nuclear security.

