
AI & National Security: A Primer

Words: Andrew Lilley Brinker

Pictures: Emma Marie Andersson

Since we first imagined AI, we’ve worried it would kill us. Artificial intelligence for years conjured up images of the Terminator, Arnold Schwarzenegger in shades and a leather jacket, CGI metal men sloughing their skin and hoisting giant guns in their arms. Maybe it conjured a little black panel, red eye in the center, staring blankly with murderous intent. Robots, the story goes, are faster, smarter, and stronger than we are. They can outthink us. Outrun us. Surely, you’d have to be mad to give one a gun? Yet current discussions in national security seem to contemplate just that.

If you’ve watched the news recently, you’ve probably seen big stories about AI: the White House has a new AI Executive Order, the Department of Defense has a new AI strategy, and Alexandria Ocasio-Cortez got a mix of heat and praise for pointing out AI bias. There’s a lot going on here, so let’s run through what AI is, what it’s good at, what it’s bad at, and how AI interacts with national security issues.

AI is a collection of techniques for building computers that can make decisions or determinations without a programmer or a human operator telling them how to do so. Traditionally, we think of programmers writing code made of explicit instructions (“if this, do that”) or of users directly making such decisions. In an AI system, no such instructions are given; the system learns how to decide from data.

To understand better, let’s break down the three major ways an AI can be trained: supervised learning, unsupervised learning, and reinforcement learning.

Imagine we want to train an AI to recognize when a picture contains a human face. We do that by giving the AI a large group of pictures, some of which contain human faces, some of which don’t, and telling it which ones have faces. The AI processes this data set and tries to identify features in the images that tell it when a face is present. This is supervised learning, because we gave it training data that we’d already labeled with the answer, and the AI learned how to perform the task by analyzing our answers.
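
If you want to see what that loop looks like in practice, here is a minimal sketch in Python using scikit-learn. Real face detectors train deep networks on millions of labeled photos; in this sketch, synthetic feature vectors stand in for images, so treat it as an illustration of supervised learning rather than an actual face detector.

```python
# Minimal supervised-learning sketch: labeled examples in, learned classifier out.
# Synthetic feature vectors stand in for real pictures of faces and non-faces.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row is a stand-in "image"; each label marks whether a face is present.
# The labels are the supervision a human would normally provide.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# The model learns which feature patterns predict "face present"
# purely from the labeled examples; no hand-written rules are involved.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on pictures the model has never seen:", model.score(X_test, y_test))
```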

Now, imagine instead we’re an advertising platform like Google Ads, and we want to break down our users into categories so advertisers can target their ads to those categories. We could come up with categories on our own, but we might miss categories or make assumptions about users that turn out to not be useful, or to be wrong. So instead we give an AI system our user data, and tell it to group users that appear to be similar. This is unsupervised learning. Instead of us giving the AI patterns to learn to duplicate, we let the AI identify patterns itself.
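
Here is a comparable sketch of unsupervised learning, again in Python with scikit-learn. The “user data” is invented (two made-up behavioral measurements), and k-means clustering stands in for whatever grouping technique a real ad platform might use; the point is only that no labels or categories are supplied up front.

```python
# Minimal unsupervised-learning sketch: no labels, the algorithm finds the groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Made-up user data: hours online per week and purchases per month, drawn from
# three loose behavioral patterns that we never describe to the model.
users = np.vstack([
    rng.normal([5, 1], 1.0, size=(100, 2)),    # occasional users
    rng.normal([20, 3], 2.0, size=(100, 2)),   # regular users
    rng.normal([40, 10], 3.0, size=(100, 2)),  # heavy users
])

# Ask for three groups; the model decides on its own what those groups are.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(users)
print("users per discovered group:", np.bincount(clusters))
```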

Finally, imagine we want to build an AI to be very good at a board game. We give it the rules of the game, and then have it play games, maybe against other opponents, or maybe against itself. It plays a lot of games. Hundreds of thousands of games. Over time, it develops strategies to beat the game, becoming more effective the more it plays, without anyone ever explicitly giving it good strategies to use. This is reinforcement learning, and the example I gave is how AlphaGo, an AI that beat the world’s best Go player in 2016, learned to play the game.
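
AlphaGo’s actual training pipeline is far more elaborate, but the trial-and-error loop can be shown at toy scale. The sketch below (Python, tabular Q-learning) teaches an agent a tiny take-away game: ten stones, take one to three per turn, whoever takes the last stone wins. The agent is given only the rules and the final win/loss signal, plays tens of thousands of games against a random opponent, and tends to discover the winning strategy on its own.

```python
# Minimal reinforcement-learning sketch: tabular Q-learning for a tiny game.
# The agent learns good moves purely from win/loss feedback over many games.
import random

N = 10                                 # stones in the pile; taking the last stone wins
ACTIONS = [1, 2, 3]                    # legal move sizes
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate
Q = {(s, a): 0.0 for s in range(N + 1) for a in ACTIONS}

def choose(state, greedy=False):
    legal = [a for a in ACTIONS if a <= state]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)                   # sometimes explore
    return max(legal, key=lambda a: Q[(state, a)])    # otherwise exploit what it knows

for _ in range(50_000):                               # play a lot of games
    state = N
    while state > 0:
        action = choose(state)
        remaining = state - action
        if remaining == 0:                            # agent took the last stone: win
            Q[(state, action)] += ALPHA * (1.0 - Q[(state, action)])
            break
        opponent = random.choice([a for a in ACTIONS if a <= remaining])
        next_state = remaining - opponent
        if next_state == 0:                           # opponent took the last stone: loss
            Q[(state, action)] += ALPHA * (-1.0 - Q[(state, action)])
            break
        best_next = max(Q[(next_state, a)] for a in ACTIONS if a <= next_state)
        Q[(state, action)] += ALPHA * (GAMMA * best_next - Q[(state, action)])
        state = next_state

# The learned policy tends to leave the opponent a multiple of four stones whenever
# it can, which is the known winning strategy, without ever having been told it.
print({pile: choose(pile, greedy=True) for pile in range(1, N + 1)})
```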

Each of these methods has its strengths and weaknesses. AI can be great at augmenting human capabilities or automating human tasks, enabling us to operate faster and at greater scale. Think of every modern law enforcement show and how they depict computers matching faces against databases to identify a suspect; that’s something that would almost certainly involve AI, and it’s an example of AI’s substantial utility.

However, AI systems have major weaknesses.

First, they are extremely sensitive to the data used to train them. Remember, they’re learning patterns based only on their training data. If that data isn’t carefully curated and managed, they can end up identifying patterns that aren’t there, or identifying patterns so specific that the AI is useless in the real world. As a real-world example of this problem: many facial recognition systems are much worse at identifying black faces than white faces, because many of the publicly available data sets are disproportionately white. Labeled Faces in the Wild, one of the most popular open data sets, was found in 2014 to consist of 83.5% white faces. When you train with a data set that underrepresents black faces, you end up with an AI that struggles to identify them. As a worse example, current research indicates self-driving cars are more likely to fail to recognize black pedestrians than white pedestrians, meaning we should be concerned about self-driving cars disproportionately hitting black pedestrians.
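
The mechanism is easy to demonstrate with a deliberately skewed toy dataset. In the Python sketch below, two invented groups follow different patterns, the training data is 95% one group, and the resulting model scores well for the majority group and barely better than a coin flip for the minority group. The numbers and groups are synthetic; real facial recognition failures involve far messier data, but the underlying dynamic is the same.

```python
# Toy demonstration: a training set that underrepresents a group yields a
# model that performs far worse for that group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, group):
    X = rng.standard_normal((n, 2))
    # The two groups follow different patterns: group A's label depends on
    # the first feature, group B's on the second.
    y = (X[:, 0] > 0) if group == "A" else (X[:, 1] > 0)
    return X, y.astype(int)

# Training data: 95% group A, 5% group B.
XA, yA = make_group(1900, "A")
XB, yB = make_group(100, "B")
model = LogisticRegression().fit(np.vstack([XA, XB]), np.concatenate([yA, yB]))

# Evaluate on fresh data from each group separately.
for group in ("A", "B"):
    Xt, yt = make_group(1000, group)
    print("accuracy for group", group, "=", round(model.score(Xt, yt), 2))
# Typical result: high accuracy for group A, little better than chance for group B.
```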


Second, AI systems can struggle to generalize their knowledge. One of the major challenges for self-driving car companies has been getting their cars to handle suboptimal conditions: country driving, poor weather like rain or snow, road obstacles, pedestrians crossing outside of designated crosswalks, varied times of day and lighting conditions, and the presence of bicyclists, motorcyclists, and other alternative vehicles sharing the road. So while they may have a self-driving car that drives very effectively during the day, on a well-maintained road, with pedestrians respecting normal crossings and no bicycles or motorcycles, they’re a long way off from matching, much less surpassing, human driving skill.

Third, AIs take time and can require substantial computing resources to train effectively, and because they don’t generalize well, it’s not usually feasible to take an AI trained for one task and repurpose it for another (although this is an active area of research with some promising results). AlphaGo, the AI that beat the world’s best Go player, took the computational equivalent of three years of playing Go, running on high-end computing infrastructure, to become good enough to win, and that’s on top of the time needed to determine how to model Go for the AI, how to design the AI to learn Go strategies effectively, and how to build it. While follow-up work has reduced the training time to 220 days on equivalent infrastructure, the investment needed in terms of time and resources remains substantial, particularly for new areas.

Fourth, all AI systems are subject to what are called “adversarial inputs.” These are inputs designed to subvert the AI’s determinations and decision-making processes. For example, an adversarial image may contain a human face but also include differences invisible to the human eye that cause an AI facial recognition system not to see the face at all. A memorable paper on the subject showed that such inputs are easy to create, and gave an example of an adversarial image depicting a cat that fooled an AI into identifying it as a bowl of guacamole. Back on self-driving cars: there was recently a news story on how three carefully placed reflective stickers could trick a Tesla’s self-driving functionality into driving on the wrong side of the road.
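
At small scale, the attack is almost embarrassingly simple. The sketch below (Python, scikit-learn, NumPy) trains an ordinary linear classifier on synthetic data, then nudges each feature of one correctly classified example by a tiny, carefully chosen amount, just enough to flip the model’s answer. Attacks on deep image models, like the guacamole example, are more sophisticated, but they rest on the same idea.

```python
# Minimal adversarial-input sketch: a tiny, targeted nudge flips the prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A simple classifier standing in for a much larger model.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a correctly classified example that sits near the decision boundary.
margins = np.abs(model.decision_function(X))
correct = model.predict(X) == y
idx = int(np.argmin(np.where(correct, margins, np.inf)))
x, label = X[idx], y[idx]

# Nudge every feature slightly in the direction that pushes the model's score
# toward the wrong class (the "fast gradient sign" idea, for a linear model).
w = model.coef_[0]
direction = np.sign(w) if label == 0 else -np.sign(w)
epsilon = 1.1 * margins[idx] / np.abs(w).sum()   # just enough to cross the boundary
x_adversarial = x + epsilon * direction

print("true label:        ", label)
print("prediction before: ", model.predict([x])[0])
print("prediction after:  ", model.predict([x_adversarial])[0])
print("size of each nudge:", round(float(epsilon), 4))
```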

Fifth, AI models can leak information about the data used to train them. For example, say you’re a bank, and you train an AI system to identify fraudulent transactions by providing it a number of real-world transactions, some of them legitimate, some of them fraudulent. It is possible that this system may leak information about the real-world transactions you trained it with, potentially disclosing sensitive customer data in the process.
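
One simple form of this leakage is a so-called membership inference attack: an overfit model is noticeably more confident about records it was trained on than about records it has never seen, and an attacker can use that gap to guess whether a particular customer’s transaction was in the training set. The Python sketch below uses synthetic data and an intentionally overfit model to show the gap; real attacks are more involved, but this is the core intuition.

```python
# Toy membership-inference sketch: an overfit model is more confident on the
# records it memorized during training than on records it has never seen.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic "transactions" with noisy labels, standing in for real customer data.
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.2, random_state=0)
X_member, X_outside, y_member, y_outside = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Deep, unconstrained trees happily memorize their training records.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_member, y_member)

def confidence(samples):
    # The model's probability for whichever class it predicts.
    return model.predict_proba(samples).max(axis=1)

print("average confidence on training records:", round(confidence(X_member).mean(), 3))
print("average confidence on unseen records:  ", round(confidence(X_outside).mean(), 3))
# The gap is the leak: it lets an attacker guess which records were used in training.
```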

The weaknesses of AI give rise to risks in the fields that apply it. These risks can be broken down into three categories, taken from the work of Remco Zwetsloot and Allan Dafoe at the Center for the Governance of AI at the University of Oxford: misuse risks, accident risks, and structural risks.

Misuse risks are risks associated with AI being intentionally misused to create some harmful effect.

People talk about “deepfakes,” a technology used to create fake videos that show individuals doing or saying things they never did or said. People have used it to create pornographic images and videos of popular celebrities or ex-romantic partners, and to create video of President Obama giving a speech he never gave. People also talk about the Chinese government’s use of AI to enable mass government surveillance and oppression, although the reality of Chinese oppression is that much of its system is still run by old-fashioned humans.

Misuse risks are a serious concern. Artificial intelligence techniques, in their ability to obfuscate processes and operate at scale, and in their ability to create credible fake materials, have the potential for abuse by many different operators: authoritarian regimes looking to scale oppressive mechanisms, abusers and extortionists looking to generate harmful or compromising material, charlatans and cheats looking to cast doubt on the truth or evade justice.

Accident risks are risks associated with unintentional errors, biases, or oversights in the design and development of AI systems causing real harm to real people. This is what Alexandria Ocasio-Cortez was talking about in her comments. AI-based bond-setting systems have been found to set higher bonds for defendants of color for equivalent crimes. A research AI trying to learn English unintentionally learned racial biases against black and Asian people. It’s not that these biases wouldn’t exist without AI; in fact, AI systems often replicate our own biases. The problem is that when bias exists in an AI system, it can be more difficult to identify, question, and resolve. AI systems are often assumed to be unbiased, and put into contexts where there is little recourse to second-guess their outputs.

Creating unbiased AI systems is very difficult, and requires conscious effort to identify and account for potential biases, along with testing for bias in the model being created. Accident risks are pernicious because as AI tools are democratized, you are likely to see a number of people and organizations deploying them without due concern for the biases being embedded. This is why so much public attention is being brought to accident risks and AI bias, to make sure people in charge of deciding when, where, and how AI gets used are aware of the questions they should be asking.

Finally, you have the under-appreciated structural risks of AI. As it gets applied to concrete problems, AI is likely to change the dynamics of those problem spaces. In a microcosm, think of your interactions with an automated phone system. This isn’t an AI, but it helps illustrate the point. A company wants to save costs by heavily reducing its phone-answering support staff, so it sets up an automated phone system that walks you through a collection of pre-recorded options, which either provide you with information without human involvement (the company’s ideal outcome) or route you to a human who can quickly handle your issue. In doing this, the company changes your relationship with it, and the options available to you. Instead of being able to appeal to a human, who can use their knowledge and intuition to route you to the right person immediately or answer your question quickly, you must interface with a machine operating on pre-written messages. The timing is predetermined, and often slow. You end up spending more time on the phone than you want, and maybe struggle to find the right set of options to resolve your issue or get you to a person.

At the same time, the company’s relationship to you, and to the issue of providing telephone support, has changed. It doesn’t need as much staff, and it can handle a greater volume of callers at once (so long as the automated system stops many of them from needing to talk to a human). The system cuts costs, but it also has the potential to stop the company from receiving and logging valuable customer feedback. Automation changes the way we relate to problems, and there are risks associated with those changes.

AI is poised to impact a number of national security sectors, including diplomacy, intelligence and counterintelligence, and warfare. Let’s sketch out some samples of how we may expect AI to affect these areas.

First, AI and diplomacy. Diplomacy relies on clear communication; immense amounts of time and effort are spent crafting public statements, and private diplomatic discussions rely on the building of trust and relationships between parties. The ability of AI to create fake materials and subvert those communications or those trusted relationships is a major risk for diplomatic stability.

More troublingly, AI-automated systems have the potential for substantial impact on deterrence and escalation dynamics. The normal functioning of deterrence assumes there is some human in the decision-making process who can be deterred; if a response is automated, who will deter the automation? Similarly, the world has previously averted nuclear disaster thanks to the intervention of individuals who thought better of the readings of their sensors. Think of the 1983 Soviet nuclear false alarm: if not for the good judgment of Soviet Air Defense Officer Stanislav Petrov, the world might have plunged into nuclear war on the basis of a faulty alert system. In that instance, a human de-escalated from the brink of Armageddon based on intuition, an intuition that would be extremely difficult to replicate fully in an AI system. The same dynamics apply when AI enters the picture.

Next, the world of intelligence and counterintelligence. The creation of credible fake materials is concerning here. Faked compromising materials, more credible than could be made before, could contribute to the turning of a human intelligence asset. More troublingly, AI-enabled mass surveillance and tracking could make operating in repressive regimes ever more difficult, and would almost certainly necessitate changes in HUMINT disguise doctrine to incorporate counter-AI principles. In the SIGINT world, what about an AI system that can create credible fake signals to be collected?

Finally, to the topic that gets the greatest attention: AI and the military. This is certainly the flashiest area. If any AI topic most conjures up images of Skynet rising to kill us all, it’s the intersection of AI and military operations. However, the issue is much more complex than a gun that fires itself.

First, a note on the DOD and autonomous weapons. On the one hand, you have those who harbor intense fear of an AI “arms race” in which the DOD builds autonomous weapons out of fear that our adversaries are doing the same. On the other hand, you have those who assert that official DOD policy requires a “human in the loop” for all lethal actions taken. The reality of the DOD’s intentions lies somewhere in the middle.

The DOD does have policy establishing standards for reviewing and approving AI systems, but that policy as written has no requirement for a “human in the loop.” Such a requirement is soft, a matter of the DOD’s general practice for AI systems rather than codified policy. That makes it much more malleable than many expect: if future DOD leadership decides to no longer treat a human in the loop as a requirement for AI systems, there is little to stop them.

At the same time, the norm that a human in the loop is required is a strong one within the DOD. When Army Contracting Command-Aberdeen Proving Ground’s Belvoir Division recently posted a federal business opportunity for a program titled ATLAS (Advanced Targeting and Lethality Autonomous System), many people were concerned (it has “lethality” and “autonomous” right in the name!), but the DOD was quick to clarify that ATLAS was not about building autonomous weapons; it was instead looking at ways to use AI to improve other parts of military tanks, not the weapon itself. In the DOD context, “lethality” means anything that makes a weapon system more effective. Adding sensors to a tank, for example, is a “lethality” upgrade in DOD parlance.

While fully autonomous weapons may not be on the horizon for the DOD, that does not mean adversaries will refrain from creating them, and the introduction of autonomous weapons on the battlefield has major strategic implications. Even if autonomy is only applied to areas such as targeting, that still opens up counter-AI tactics which may be used against a target. Imagine lasers aimed at AI-enabled sensors, designed to induce incorrect decision-making.

Think also about military information operations, where many of the same issues discussed in other areas apply. How do we maintain situational awareness of the battlespace in the context of credible falsified materials, or adversarial attacks against AI situational awareness systems?

AI also has clear implications for cyber warfare, on both offense and defense. On the one hand, an AI system may enable more rapid discovery and exploitation of vulnerabilities, or automated evasion of defensive measures. On the other, it can mean adaptive defenses and faster discovery and response to indicators of compromise. Either way, AI can serve to accelerate the pace of cyber operations.

AI also complicates cyber attribution, as autonomous systems may be credibly denied by foreign actors. The pall of automation has the potential to cast further doubt on attribution in cyberspace.

All of this is merely a survey of the implications of AI in the national security space. Suffice it to say, the field is much wider and more complex than autonomous weapons alone. The final concern in all of this is the question of whether the US will lead in this space. There’s reason to think we may not.

AI, as previously discussed, is data-hungry. To be effective, an AI system needs to train, and it needs to train with a large, diverse, representative data set. It’s no accident that the leaders in AI technology today are tech companies with immense data collection practices. Google collects information to feed into its advertising systems — the bread and butter of the company’s financial success — which can also be used to train AI systems.

While the tide is now turning toward greater scrutiny of how much power tech companies wield, and of the sort of information these companies collect, we in the West are already strongly suspicious of similar actions by governments. The Snowden revelations of NSA surveillance programs, like bulk phone metadata collection under Section 215 of the PATRIOT Act and upstream data collection under Section 702 of the Foreign Intelligence Surveillance Act, shocked the conscience of the country. Put another way: US values do not align well with a government that collects vast swaths of training data for the deployment of AI systems, and as discussed previously, given the risks of AI misuse and accidents, that skepticism is healthy.

The skepticism is also not shared by our adversaries. China has already implemented mass surveillance at scale, exactly the sort that can feed AI-powered systems. As Chinese companies like Huawei continue expanding their reach into foreign countries, there is growing concern that the data they handle could be fed into Chinese government systems, or made available to Chinese government officials or bureaucrats.

As we begin to grapple with the risks and concerns of AI in the national security space, we must keep in mind the asymmetries between us and our adversaries. Human rights should not be up for negotiation; the question is how we will ensure superiority in the ever-increasing asymmetries of modern warfare, where adversaries will collect and exploit information we won’t, and deploy that information in AI systems at scale.

So let’s circle back now and talk about what’s happening currently.

The White House has put out a new Executive Order on AI. The EO calls out AI as “promis[ing] to drive growth of the United States economy, enhance our economic and national security, and improve our quality of life.” It identifies AI as a key national priority, and tasks each executive department with identifying opportunities to increase investment in AI research and development, workforce growth, and the development of sharable datasets and models. Symbolically, it’s a major signal of US attention to, and intention to invest in, AI technology. However, it does not come with funding attached, and the reality will be borne out in future budgets.

The federal government has for several years identified challenges and potential solutions in recruiting and retaining top computing talent; the subfield of AI talent faces an even more vexing set of challenges, particularly given fierce private-sector pay competition. There is additionally a cultural challenge in integrating the modern US tech industry with the military, as seen in the recent successful opposition by Google employees to Project Maven, a project in support of the DOD.

Additionally, the DOD recently released an unclassified summary of its new AI strategy, which lays out in more detail many of the same principles as the AI Executive Order. At the same time, this is a strategy only, and does not identify specific funding or the tradeoffs the department intends to make. As budget season winds up, watch for clues as to how the DOD is looking to accomplish its intended growth in AI capabilities and workforce.

It is also important to note that the DOD has stood up a Joint AI Center (the JAIC, pronounced “Jake”), which provides consultation and expert advice on AI technology and policy for the DOD. It is a well-placed organization with, it appears, the ear of DOD leadership, and its stature is indicative of how seriously the DOD is approaching AI.

The reality of AI and national security today is that we are still very early in the adoption of AI technologies. The benefits and risks are still being understood, and the potential applications of AI technologies are still being developed. Moreover, the collective understanding of AI’s benefits and risks is still spreading, leaving a lot of room for learning, and for mistakes, in adoption.

As we move forward, it is important to keep the fundamentals of AI in mind, and to take a view of AI and national security with a lens wider than the common focus on autonomous weapons. AI is not only a tactical development, but a strategic one, with long-term implications for how we share and validate information, how we maintain situational awareness, and how we fight wars writ large. We may not be building Terminators prepared to overthrow humanity, but we’re not building harmless automatons either. We must tread intelligently and carefully as we apply this new technology to such a challenging and important space.

Andrew Lilley Brinker is a Lead Cyber Security Engineer at The MITRE Corporation who does information security by day and thinks about AI/counter-AI for national security by night. Follow him on Twitter @alilleybrinker.

The author’s affiliation with The MITRE Corporation is provided for identification purposes only, and is not intended to convey or imply MITRE’s concurrence with, or support for, the positions, opinions, or viewpoints expressed by the author.


