
Show Me the Data

Should US foreign policy be more data-driven?

Words: Alex Damianou
Pictures: Alex Wong

“We Didn’t Start the Fire” is a column in collaboration with Foreign Policy for America’s NextGen network, a premier group of next generation foreign policy leaders committed to principled American engagement in the world. This column elevates the voices of diverse young leaders as they establish themselves as authorities in their areas of expertise and expose readers to new ideas and priorities. Here you can read about emergent perspectives, policies, risks, and opportunities that will shape the future of US foreign policy.

Policymaking is forecasting. It predicts that proposed guidelines and actions will yield a preferred outcome, which is no different from a financial analyst making securities trades, a business making strategic decisions, or non-profits choosing where to allocate resources. Yet, these two legs of the social contract — the private sector and civil society — better employ data in their decision-making processes than the government (unless it’s using polling and data analytics for election purposes). And even within government, foreign policy seems to lag in applying quantitative approaches, currently relying more on personalities and qualitative paradigms.

Foreign policymakers can benefit from a more data-driven approach that addresses technology, human capital, processes, and culture. And they should not hesitate to draw from the practices of other industries.


Upon taking over the State Department in 2001, Colin Powell famously insisted on upgrading its IT systems and computers, some of which predated the internet. In 2017, according to some State Department staff, Secretary of State Rex Tillerson asked how many personnel were working in Russia, a question that required an actual on-the-ground headcount because the department’s cache of data sources was too unreliable to produce an accurate number.

The second anecdote underlines a fundamental issue with data-driven approaches: collecting reliable and timely data. Garbage in, garbage out is a common refrain from data scientists or finance bros. In other words, the quality of output is determined by the quality of input, no matter how unique your analytics are. It is, therefore, essential to have a robust knowledge management infrastructure and data collection processes in place. In addition, foreign policymaking requires synthesizing disparate data sources across various disciplines. Some of these sources rely on external stakeholders (e.g., nonprofits, think tanks) conducting research with sometimes unreliable methodologies and addressing questions that are not relevant, while others may not be at the level of granularity that policymakers require.

Currently, data collection efforts at the State Department aren’t systematic. How information is stored, accessed, and shared within the department is ad hoc and vulnerable to employee turnover, while cross-departmental sharing is even harder given incompatible systems. The State Department must figure out how to accurately catalog data (perhaps through regular, scalable bespoke programs), measure diplomatic information gathering, quantify evidence-based decision-making, and put processes in place to ensure data across all sources is accessible, in a timely manner, to all those who need it.

Once data reliability is established, the next step is to determine what indicators you’re looking for and what frameworks and processes you’ll use to make decisions. A more streamlined approach to consistently monitoring indicators underpinning high-level policy questions enables proactive policymaking. Such an approach enhances the ability to robustly answer the inevitable stream of ad hoc, demand-driven policy questions while also reducing time spent answering them in silos, or pre-empting them altogether. All of this, of course, assumes that the technology and domain-expert personnel are already in place. Given recent efforts by the Biden administration, including the launch of the new Center for Analytics and Enterprise Data Strategy and discussions with foreign policy staff, there are encouraging signs that progress is being made toward a more data-driven State Department. Yet, without effective processes and a more accommodating data culture, smart people with impressive technology will be limited in their success by biases and bureaucracy.


Policymakers already employ hypothesis testing, but not empirically enough. They can take a page from finance, startups, corporations, and even poker: consistent testing and iteration, along with relentless updating of probabilistic outcomes, to mitigate uncertainty in decision-making. For example, startups make assumptions about what problems users face and what they’d be willing to pay for. They can ask users during customer discovery, but it isn’t until users try the product or service that startups have actionable data to understand what they need to build rather than what they assume users want. This framework led to successful pivots by well-known startups, including Instagram, Slack, YouTube, and Twitter.

Similarly, financial traders update their algorithms with new information and get almost immediate feedback that allows them to iterate consistently, whether they were right or wrong. Corporations, especially those in extractive resource industries such as mining or oil and gas (Royal Dutch Shell, for example), have been at the forefront of applying scenario planning to make strategic decisions about operations in emerging markets. Governments have used scenario planning too, as in South Africa’s Mont Fleur exercise. Even poker players assign probabilities to specific outcomes based on the information on the table.

The common denominator among these industries is that they have measurable indicators to monitor, apply processes to mitigate uncertainty, and consistently update the data. This compels decision-makers to be less emotional about long-held beliefs, to mitigate biases, and to evaluate decisions objectively. Data augments competency, a core pillar of trust and conviction, and when applied with forecasting frameworks, it can prepare decision-makers for a range of outcomes. Even when outcomes turn out suboptimal, a data-driven process ensures you don’t succumb to “resulting” and conflate a bad outcome with bad decision-making, a discipline former professional poker player and behavioral scientist Annie Duke calls “thinking in bets.”
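At bottom, this kind of probabilistic updating is Bayes’ rule: start with a prior probability on an outcome and revise it as evidence arrives. A minimal sketch, with all numbers invented purely for illustration:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule for a
    binary hypothesis and a single piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Illustrative numbers only: a policy is believed 30% likely to succeed;
# a leading indicator that shows up in 70% of successes but only 20% of
# failures comes in positive.
p = bayes_update(0.30, 0.70, 0.20)
```

With a 30% prior and an indicator three and a half times more common in successes than in failures, the estimate rises to 60%; a second piece of evidence would be fed through the same function with 0.60 as the new prior.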

A few examples closer to geopolitics, employed mainly by macro strategy hedge funds, can provoke more of a mindset change than any technological upgrade. In his book “Superforecasting,” Philip Tetlock uses Brier scores to measure the accuracy of predictions. In “Geopolitical Alpha,” Marko Papic applies measurable, constraints-based frameworks to understand politics and their impact on asset prices. Rather than focusing on stakeholders’ preferences or personalities and succumbing to the fundamental attribution error, you focus on the broader context in which decisions are made: the political economy, the macroeconomy, financial markets, domestic politics, and constitutional and legal constraints, quantified and constantly updated, since constraints don’t change quickly, but events can alter probabilities.
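The Brier score itself is simple arithmetic: the mean squared difference between the probabilities a forecaster assigned and what actually happened. A minimal sketch of the binary version (0.0 is a perfect forecaster; always saying 50/50 earns 0.25; higher is worse):

```python
def brier_score(forecasts, outcomes):
    """Binary Brier score: mean squared error between forecast
    probabilities (0.0-1.0) and realized outcomes (1 if the event
    happened, 0 if it didn't). Lower is better."""
    if len(forecasts) != len(outcomes):
        raise ValueError("need one outcome per forecast")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 80% on two events that happened
# and 20% on one that didn't:
score = brier_score([0.8, 0.8, 0.2], [1, 1, 0])  # roughly 0.04
```

Scoring analysts this way over time is what makes Tetlock’s “superforecasters” identifiable in the first place: accuracy becomes a measured track record rather than a reputation.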

For example, instead of assessing China through President Xi Jinping’s preferences, predicated on his ideology, Papic would look at his constraints through the “median voter.” China’s most significant constraint is a middle class that now makes up more than half of the population. Its total debt is roughly three times the size of its economy, it is the world’s largest agricultural importer and the world’s largest importer of oil and gas, and it has an aging population. These measurable data points underpin its constraints: anything that impedes delivering economic growth to the middle class, including challenging the US as a global hegemon, is constrained by them. These factors likely already inform US policy toward China, but they aren’t being measured and indexed systematically to create signals and deltas that can be acted on proactively and with more certainty.

Political risk indicators can also be quantified. GeoQuant is effectively building a Bloomberg Terminal for geopolitics. By drawing from structured data and scoring high-frequency events against a range of political economy dimensions (e.g., governance risk, social risk, security risk), it attempts to create an index with signals that can measure the direction and magnitude of risk. The point here is not that an algorithm will predict an outcome; rather, it is an effort to develop a culture of data literacy and of leveraging tools that help analyze a set of now-measurable indicators (e.g., government or institutional strength, social polarization, human development, internal and external security) that can influence foreign policy outcomes in a proactive rather than reactive manner.

However, it will take a cultural and mindset shift to employ data-driven technology and processes effectively in foreign policymaking. As mentioned, the Enterprise Data Strategy, the influx of data and analytics professionals, and machine learning applications to topics such as China, diversity, equity, and inclusion, or multilateral institutions (e.g., text mining and predicting voting at the UN) are all encouraging and necessary advancements by the State Department.
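GeoQuant’s actual methodology is proprietary; purely to make the idea concrete, a composite index can be sketched as a weighted average of sub-scores, with the change between readings serving as the actionable signal. The dimensions, weights, and scores below are invented for illustration:

```python
# Hypothetical weights over three risk dimensions (must sum to 1.0);
# these are not GeoQuant's real model.
WEIGHTS = {"governance": 0.4, "social": 0.3, "security": 0.3}

def risk_index(scores):
    """Weighted average of 0-100 sub-scores; higher means riskier."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

last_month = {"governance": 40, "social": 55, "security": 30}
this_month = {"governance": 40, "social": 70, "security": 30}

# The delta, not the level, is the signal: rising social risk
# moves the composite even though the other dimensions are flat.
delta = risk_index(this_month) - risk_index(last_month)
```

Tracked over time, a series of such deltas is what lets analysts spot direction and magnitude of risk early, instead of reacting once an event has already surfaced.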

US policymakers often have a policy position in mind and ask for data supporting that position. Yet policymakers grow skeptical of the data when it doesn’t support their position or indicates that another decision would be better. This has been salient with sanctions, a frequently wielded policy tool despite data suggesting their ineffectiveness and, at times, counterproductivity. If decision-makers do not embrace data as complementary to their policymaking process and hold themselves more accountable with a scorecard-esque, data-driven toolkit, or if they can’t overcome biases, little will improve.

Furthermore, data literacy will have to extend to all stakeholders in the organization, because today’s data team is the equivalent of the webmaster of the 1990s. In hindsight, it seems ridiculous that one person controlled access to the web when everyone needed it to carry out their work; the same will go for data, regardless of the user’s technical ability. So integrating data teams, rather than treating them as a back-office function, will be vital, as will making data accessible and easily shareable across departments and agencies.

You can’t improve what you don’t measure. And data isn’t a silver bullet. But even if foreign policy feedback loops can be decades long, promoting a more data-driven culture will have a material impact on the next generation of American foreign policy professionals.

Alex Damianou is the co-founder & CEO of OpenAxis, a venture-backed data storytelling platform startup. A political economist, he previously served as the National Policy Director & Foreign Policy Advisor for Andrew Yang’s 2020 presidential campaign.
