This analysis was featured in Critical State, a weekly newsletter from Inkstick Media and The World.
When the Syrian civil war came to Aleppo, it was hardly the first conflict to define the ancient city. Aleppo’s Citadel alone holds traces of fortifications built over four millennia, though like the rest of the city it has suffered damage in recent years from the ongoing war. Mapping the damage of war is a simultaneously important and difficult task, and often one at best tertiary to the survival efforts of those living in a besieged city.
Unlike besiegers of Aleppo in centuries past, modern observers can access satellite imagery of the city, taken before and over the course of the war. This imagery, once collected, labeled, and analyzed, can offer insight into the patterns of war in a city, and in turn serve as a starting point for useful research into urban warfare, civilian relief, and the shape of conflict.
In “Monitoring war destruction from space using machine learning,” authors Hannes Mueller, Andre Groeger, Jonathan Hersh, Andrea Matranga, and Joan Serrat outline a method of augmenting human-labeled destruction in satellite imagery with machine learning. The result allows an automated tool to parse out a city, compare points in time, and identify areas of heavy destruction.
At present, such labeling is done by hand and can be combined with reports from people on the ground, but is limited to the speed of human observation, labeling, and detection. “An automated building-damage classifier for use with satellite imagery, which has a low rate of false positives in unbalanced samples and allows tracking on-the-ground destruction in close to real-time,” the authors write, “would therefore be extremely valuable for the international community and academic researchers alike.”
To build the model, the researchers trained a neural network to spot features of destruction from heavy-weapon attacks, like artillery and bombings, which show up as the rubble of collapsed buildings and as craters. The network was also trained on undamaged areas of cities, so that it could distinguish intact buildings from ones hit by heavy attacks.
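The two-class setup described above can be sketched in miniature. The authors train a deep convolutional network on labeled satellite patches; as a stand-in, the hypothetical example below trains a plain logistic regression on synthetic patches, where "destroyed" patches are high-variance noise (a crude proxy for rubble texture) and "intact" patches are smooth. The data, features, and training loop are illustrative assumptions, not the paper's actual method or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch(destroyed: bool, size: int = 8) -> np.ndarray:
    """Synthetic stand-in for a satellite patch: 'destroyed' patches are
    high-variance noise (rubble-like texture); intact ones are smooth."""
    if destroyed:
        return rng.uniform(0.0, 1.0, (size, size))
    return np.full((size, size), 0.5) + rng.normal(0.0, 0.02, (size, size))

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

# Balanced training set: alternate intact (label 0) and destroyed (label 1).
X = np.array([make_patch(i % 2 == 1).ravel() for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)

# Feature map: squared deviation from each patch's mean, so that texture
# variance (the rubble proxy) becomes linearly separable.
F = (X - X.mean(axis=1, keepdims=True)) ** 2

# Train logistic regression by gradient descent on the cross-entropy loss.
w = np.zeros(F.shape[1])
b = 0.0
for _ in range(500):
    p = sigmoid(F @ w + b)
    grad = p - y                       # dLoss/dlogit for each sample
    w -= 0.1 * (F.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

preds = sigmoid(F @ w + b) > 0.5
accuracy = (preds == (y == 1)).mean()
print(accuracy)  # training accuracy; should be near 1.0 given the separation
```

The point of the sketch is the workflow, not the model class: show the classifier both destroyed and intact examples, and let it learn a decision boundary between them. The paper's convolutional network learns its texture features from the imagery itself rather than having them hand-built as here.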
Part of this approach meant classifying images in patches of 1,024 square meters (32 meters by 32 meters). This resolution allowed damage to be precisely mapped, with the destruction of larger buildings covering multiple patches, while a single strike in a neighborhood might register in only one. In a map of Aleppo produced with this model, destruction is plotted in red, and places free from damage are marked green.
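The patch-and-color scheme described here can be illustrated with a short sketch: tile a city image into non-overlapping 32 × 32 tiles (at roughly one meter per pixel, about 1,024 square meters each) and map a per-patch destruction probability onto the red-to-green scale of the paper's Aleppo map. The tiling function and the color thresholds are assumptions for illustration, not the authors' exact parameters.

```python
import numpy as np

PATCH = 32  # patch edge in pixels, assuming ~1 m per pixel => ~1,024 sq m

def tile(image: np.ndarray, patch: int = PATCH):
    """Split an HxW image into non-overlapping patch x patch tiles,
    returning (top-left corner, tile) pairs."""
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append(((y, x), image[y:y + patch, x:x + patch]))
    return tiles

def colorize(prob: float) -> str:
    """Map a destruction probability to the map's color scheme.
    Thresholds are illustrative, not from the paper."""
    if prob < 0.2:
        return "dark green"  # e.g., roads and parks: lowest probability
    if prob < 0.5:
        return "yellow"
    return "red"             # heavy destruction

# Usage: a toy 64x64-pixel "city" splits into four 32x32 patches,
# each of which would receive its own classifier score and color.
city = np.zeros((64, 64))
patches = tile(city)
print(len(patches))   # 4
print(colorize(0.9))  # red
```

Keeping the patches non-overlapping is what makes the output map additive: a large collapsed building simply lights up several adjacent red cells, while an isolated strike colors just one.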
The authors explained that “roads and parks are clearly visible as dark green (lowest destruction probability) or yellow patches. This is not only evidence of the power of our approach in picking up housing destruction, but it also shows how the classifier has learned that roads and parks are never destroyed buildings.”
Ultimately, conclude the authors, “reliable and updated data on destruction from war zones play an important role for humanitarian relief efforts, but also for human-rights monitoring, reconstruction initiatives, and media reporting, as well as the study of violent conflict in academic research. Studying this form of violence quantitatively, beyond specific case studies, is currently impossible due to the absence of systematic data.”