
American AI Arrives on Fortress Europe’s Borders. At What Cost?

Critics say emerging age estimation and surveillance technologies are putting human rights at risk.

Words: Katy Fallon, Giorgos Christides, Florian Schmitz, Deana Mrkaja, Marguerite Meyer
Pictures: Alexandros Avramidis

Shield AI’s headquarters are situated in a tall glass building along a wide boulevard in San Diego, a world away from where its tech, dreamed up in the sun-washed streets of California, has been tested above the Bulgarian forests. For asylum seekers trying to cross from Turkey to Bulgaria — and into the European Union — there is a good chance they will one day encounter one of the company’s surveillance drones.

Shield AI was founded as a startup in 2015 by Andrew Reiter and brothers Ryan and Brandon Tseng. The company is now valued at around $5 billion. It claims that during its participation in a 60-day pilot at the Bulgarian border in August — coordinated with Frontex, the EU border agency — “both irregular border crossings and criminal activity” saw a “significant reduction” thanks to the use of its vertical take-off and landing (V-BAT) drones. How this was achieved remains unclear. The drones include an onboard AI pilot that enables fully autonomous flight even without GPS. Frontex agreed there had been a reduction in crossings, although neither Frontex nor Shield AI would give a precise figure. Shield AI is active across Europe and has sold these drones to other EU states, including Greece and the Netherlands. The company also holds a lucrative $198 million contract with the US Coast Guard, announced in 2024.

A new joint investigation by Inkstick, Solomon, Taz, and SWI Swiss Info reveals how new technology is being rolled out across Europe to deter irregular migration and handle asylum claims, often with little scrutiny and thin safeguards, in ways that can affect even underage refugees. It also shows how the EU is increasingly moving toward “smart” border control solutions, which either use or have the potential to use artificial intelligence, even as officials downplay or hide this trend behind less ominous terms like “algorithms.”

The Bulgarian trial, according to two officials present at a closed-door meeting in Warsaw in September 2025, was part of an “Innovation” presentation. Featured on the screen was a drone-based surveillance network piloted under a Frontex contract. Assets tested included V-BAT drones that hovered along Bulgaria’s border with Turkey, streaming real-time video to a command center and flagging movements to operators. 

Bulgaria is among a number of European countries accused of systematically violating the rights of asylum seekers. Shield AI did not respond to requests for comment on the system, its deployment, costs, or data use. Frontex, in a written response, said that the “preventative” effect of the Bulgarian pilot was an “indirect outcome” and was conducted under a “research and innovation pilot, which includes comprehensive fundamental-rights safeguards.” 

The European Commission has said that research pilots remain outside of the full legal framework. Dr. Niovi Vavoula, an Associate Professor and Chair in Cyber Policy at the University of Luxembourg, told the investigation that such exceptions are merely semantic once these systems are tested on real people: “we are past the purely testing research phase.” She added that any research exceptions “shouldn’t be applicable.”

From the very beginning of people’s attempts to reach Europe, often aboard overpacked, flimsy vessels leaving Libya, aerial assets sold by defense companies and deployed on behalf of European states and agencies buzz above their heads.

Along the voyage, an array of Israeli tech could be monitoring their movements, from Israel Aerospace Industries’ (IAI) Heron surveillance drones leased to Frontex for Mediterranean patrols to Elbit Systems’ Hermes 900 drones (used in the skies over Gaza since 2014) deployed on unarmed EU maritime patrol missions. In 2025, Frontex renewed its contract for the IAI-made Heron 1 drone for another four years.

If people make it to European soil alive — thousands do not — they frequently try to leave countries of first arrival like Greece and reach wealthier nations such as Germany, where living conditions and prospects of employment seem better. In the reeds of northern Greece, according to border officials who spoke to the investigation, the movement of storks often gives away the positions of migrants hoping to do just this; soon, a suite of sophisticated technology will take over from nature’s watchmen.

Public announcements, internal minutes, and technical documents reviewed by this investigation detail plans for the deployment of Mobile Incident-Management Centers (MIMCs) along the country’s northern border, the main exit route for migrants trying to reach Northern Europe. All-terrain vehicles will be equipped with thermal cameras, drones, and encrypted comms, working alongside new fixed surveillance structures that feed live video and alerts into central control hubs and command posts.

Refugees who make it from southern Europe all the way to northern France and the beaches around the city of Calais might come into contact with even more American technology devoted to monitoring their movements.

Anduril Industries (one of several American defense companies borrowing its name from the Lord of the Rings franchise) plays a role in monitoring asylum seekers crammed onto smugglers’ overcrowded boats to cross the English Channel, the stretch of water between France and the southern coast of England and one of the busiest shipping lanes in the world. Founded by Donald Trump supporter Palmer Luckey in 2017, Anduril is now valued at over $30.5 billion.

In a recent investigation, The Bureau of Investigative Journalism laid bare the breadth of the company’s lobbying activities in the UK, including the hiring of at least 11 former defense ministry staff. Anduril has sold a number of “Sentry Towers” to the UK Home Office, similar to those along the US-Mexico border. The towers monitor migrant crossings and use the company’s “Lattice” AI system. They are shrouded in secrecy, with the British government often refusing to release precise details about their number, their positions on the southeast coast, or the contracts behind them. Previous freedom of information requests filed by researchers have revealed that the Home Office paid over £16 million (around $21.2 million) over a three-year period for one of these contracts.

Anduril’s expansion into Britain reflects a broader trend: US companies are increasingly supplying not only border hardware but also the AI tools that shape asylum decisions. This investigation has obtained the names of seven companies involved in a controversial trial run by the UK Home Office, which plans to use AI to estimate the age of newly arrived migrants. The Home Office claims that Facial Age Estimation AI, a process in which an individual’s age is determined by comparing their facial biometrics to a large dataset of faces with known ages, “offers a potentially rapid and simple means” to verify someone’s age.

Molly Buckley from the Electronic Frontier Foundation said the organization has concerns about the technology, including reports questioning its accuracy. “When used for age estimation, facial scanning is often inaccurate. It’s in the name: age estimation,” she said. Buckley explained that every form of face-scanning technology has an error rate, which means some adults will be deemed minors and vice versa, with limited options for recourse. “Age estimation is also discriminatory. Studies show face scans are more likely to err in estimating the age of people of color and women, which means that as a tool of age verification, these face scans will have an unfair disparate impact on the human rights of people seeking asylum.”

Three of the seven companies involved in this trial are American, including Trust Stamp, a Georgia-based company that bills itself as providing “AI-powered identity verification.” Trust Stamp had a controversial contract with ICE, which was terminated in 2022. Although no link was made public, the company had suffered a significant data breach of its migrant-tracking app used by ICE earlier that year. Aware Inc and Paravision AI, both also headquartered in the US, have had their software used by multiple US government bodies, including the FBI and the US Air Force. None of those involved in the trial appear to have ever had their facial age estimation technology used on asylum seekers. Aware Inc, Paravision AI, and Trust Stamp did not respond to requests for comment.

The Home Office did not provide answers to a lengthy list of questions about whether a human-rights impact assessment had been completed, which experts had been consulted, or any further details of the implementation process. In a statement, it insisted that “robust age assessments are a vital tool in maintaining border security,” with a reported plan to integrate the technology into its systems in 2026.

Child protection advocates, meanwhile, say these assurances fall far short of addressing the lived consequences for young people misidentified by automated systems. Maddie Harris from the Humans for Rights Network said that “introducing facial age estimation does nothing to address this significant safeguarding issue of determining children to be adults and placing them in unsafe settings exposed to acute harm.” Harris added, “Our concerns include the ability for a child to consent to an AI age check, inherent racial bias and the data the tool will have access to reach its decision on a person’s age.”

Harris questioned how such technology will adequately take into account “the complexities of human life, including long and traumatic journeys to the UK, and how our experiences manifest in our faces and bodies.”

Age estimation is not the only area where American-made AI is being inserted into asylum decision-making. Foxglove, a nonprofit that litigates and advocates for “fairer tech,” shared its submission to the UK Parliament’s Joint Committee on Human Rights and the Regulation of AI. The submission flags concerns about three AI tools used or tested by the Home Office, including the proposed facial age assessment, as well as one tool that auto-summarizes asylum interview transcripts and another that condenses and summarizes country information for asylum decision-makers.

The nonprofit says it has confirmed that the transcript-summarizing tool is based on San Francisco-based OpenAI’s GPT-4. The tendency of such models to “hallucinate,” or produce false but plausible results, is well documented, Foxglove said. In trials, by the Home Office’s own admission, 9% of summaries were reportedly so inaccurate or incomplete that they were unusable.

“By definition, asylum seekers have an increased need for privacy and protection.” – Molly Buckley

Foxglove’s Donald Campbell said it is “surprising” the Home Office calls that “a small proportion.” With tens of thousands of asylum claims a year, he explained, such an error rate could affect thousands of cases if scaled up. 

“It is hard to see how unleashing these dangerously unreliable tools on some of the most vulnerable people — including child refugees — is anything other than a recipe for disaster,” Campbell added. “At the very least, the Home Office should be providing a higher level of transparency and accountability, to ensure that errors and harms are rapidly identified. Instead it is refusing to release even minimal information on the risk assessments it has undertaken. This secrecy suggests the Home Office knows the tools come with significant, known risks, but is determined to push on regardless.”

The EFF’s Buckley agreed. “By definition, asylum seekers have an increased need for privacy and protection. To force them to hand over their most sensitive data to UK authorities and AI companies as soon as they reach UK shores raises significant concerns. Where does that biometric data go, and who is it shared with? Can it be purchased, used by law enforcement, or stolen by bad actors — either in the UK or in the country they’re fleeing? Deploying this unreliable, discriminatory, and dangerous surveillance technology at the border just to save a few pounds creates massive risks to the human rights, safety, and well-being of asylum seekers.”

The tendency to downplay or reframe AI use is mirrored in how authorities describe the role of the technology within decision-making processes. The UK Home Office, for example, has said that the trial of AI on age-contested children at the border will only be “assistive,” with the final decision-making remaining in human hands.

Estelle Pannatier from Algorithm Watch explained that “assistive” AI nonetheless comes with its own pitfalls. “We have observed that people who seek advice from algorithms can, over time, become more dependent on the system and less critical of it,” Pannatier explained. “Thus, a tool developed to assist in decision-making can, in practice, be used as an instrument for making decisions. In theory a human oversees the system … but that person relinquishes their supervisory responsibilities and decision-making authority to the algorithm. It is therefore essential that appropriate processes be put in place to reduce these ‘automation biases.’”

Meanwhile, there is an apparent lack of transparency surrounding the AI technology being deployed at Europe’s borders. This investigation’s requests for precise details on the Bulgaria pilot were denied. Requests for UK performance reviews regarding Anduril’s Sentry Towers were denied, while the Home Office did not answer questions about whether a human rights impact assessment or a data protection impact assessment had been completed on the facial age estimation technology. 

These examples merely underpin a bigger problem, according to Bram Vranken, a researcher at the Brussels-based watchdog Corporate Europe Observatory. “Border management is being seen as crucial to national security and has become securitized. So, everything to do with borders is being more and more exempt from democratic oversight, accountability, [and] transparency.”

As countries from Bulgaria to the United Kingdom increasingly rely on AI and other technology for border enforcement, rights groups have grown more alarmed. In a 2024 report, Amnesty International warned that “digitized interventions” at borders contribute to “weakening human rights protections for migrants and asylum seekers.” In 2023, the EuroMed Rights watchdog said the “use of new technologies is now firmly cemented in the EU’s border policies,” noting that they “underpin invasions of privacy, brutal violations of human rights” across Europe.

Criticism or not, lucrative contracts for tech companies have continued to pile up. To Dr. Vavoula, the risks these multimillion-dollar technologies pose are clear: “These tools are not trained to show compassion.”

*This investigation is a joint collaboration between Inkstick Media, Solomon, Taz, and SWI Swiss Info. This investigation was supported by grants from the Investigative Journalism for Europe Fund and the Pulitzer Center.

