Technology can be an asset for advocates, too, Turner Lee points out, at least until its use is curbed. “We have seen citizens take advantage of this technology capacity to be a public square and create these tools to alert people to ICE in their community. But just recently, they were told to take those down.”
She was referring to apps like ICEBlock and Red Dot, which helped people map the location of ICE agents in their communities. ICEBlock was recently removed from Apple’s App Store after the Department of Justice pressured the company, and Red Dot was pulled shortly afterward.
“This is an interesting time where technology, which has always been the lowest barrier to engagement and entry, is really being stifled on the part of communities that are trying to bring some order to the chaos,” Turner Lee says.
Of the roughly $170 billion the “One Big, Beautiful Bill” directs toward immigration enforcement, several billion dollars are explicitly earmarked for surveillance and screening technologies, including border surveillance systems and AI-powered tools that analyze travel patterns. Hundreds of millions more are set aside for biometric entry-exit systems, meaning the law will expand the government’s capacity to track, identify, and monitor immigrants through technologies that rely on automation rather than human judgment.
Meanwhile, AI investment is also flourishing at the local level.
Ian Adams, a criminology professor at the University of South Carolina, says there simply isn’t enough evidence to know the extent of the harm or benefits created by this technology, but that hasn’t stopped cities and police departments from making major investments.
“My biggest critique right now is essentially that there is an unregulated buying frenzy going on, which is sucking out public dollars for a bunch of products that the best evidence thus far shows don’t actually accomplish even their most simple goal,” Adams says.
One of those goals is the writing of police reports. Studies show AI does not help officers write reports any faster (the technology’s main selling point for that purpose) and, worse yet, it could diminish the accuracy of those reports.
“The drum I keep beating is we need independent careful experimentation to prove that these technologies can achieve even their first stated goals, let alone the unintended consequences and costs,” Adams says. “We need both, and we don’t have either.”
The proliferation of these tools affects everyone, multiple experts point out, but immigrants are especially vulnerable given the Trump administration’s unrealistic deportation goals.
Further, as shown by the data behind “Midway Blitz,” easy access to high-powered technology makes it even easier to find and detain people who’ve committed no crime, and immigration agents — and the government at large — have demonstrated a willingness to persecute people with little or no evidence of any wrongdoing.
In March, Mahmoud Khalil, a lawful permanent resident and student at Columbia University, was detained by ICE and had his visa revoked because of his participation in protests for the Palestinian people. His case drew attention because AI-powered surveillance and data analytics systems — including social media monitoring and pattern-recognition tools from companies like Palantir and Babel Street — were reportedly used to identify and flag him for enforcement.
More recently, the University of Washington’s Center for Human Rights published a study detailing how Washington state’s network of Flock cameras — powered by AI — enables federal immigration authorities to access residents’ vehicle data, often without their knowledge or consent.
Marianna Poyares, a postdoctoral fellow at the Georgetown Center on Privacy and Technology, is gravely concerned about the ways AI tools are costing immigrants their privacy and dignity. Her recent work has largely focused on SmartLINK, ICE’s smartphone monitoring app, deployed as part of the “Alternatives to Detention” program. In addition to the app, other technologies such as ankle monitors are offered as “alternatives.”
“One of the very first experiences that I had with folks under surveillance was very heartbreaking,” Poyares tells Inkstick.
She was in contact with a family of asylum seekers from Guatemala, and during a meeting, she noticed the husband, a shy, quiet man, kept adjusting the hem of his pants.
“At some point his wife looked at him and said, ‘It’s okay, you can show it to her,’” Poyares recalls. “And he was very embarrassed and he lifted the hem of his pants and I saw the ankle monitor. Before I could say anything, he looked at me and he said, ‘Please don’t think I’m a criminal. I’m not.’”
Poyares later learned the man was suffering from depression, and she arranged for him to get mental health support. Even so, she says the SmartLINK app is not the improvement it might seem to be.
Some immigrants are required to use the app as a digital check-in tool in place of in-person reporting. Poyares was recently in contact with an asylum seeker who found work as a truck driver, but he was often unable to travel where he needed to go because his routes took him beyond the geographic limits the app allowed.
“He would give notice, and even then the app would alert his officer and he would get called and have to return,” Poyares says. “So eventually, he lost his job.”
Poyares notes that asylum seekers are “true survivors,” people who managed to escape terrible conditions and are now working toward a better, more hopeful life.
“They feel like they should be able to breathe once they come to the United States, and they can’t,” she says. “They come here and they’re marked with these devices as if they are coming from a criminal legal environment. That’s very difficult, very heavy for them.”
Emily Tucker, the executive director of the Georgetown center where Poyares works, takes issue with the way the word “AI” is used. The way she sees it, it’s become a misleading catchall for massive data processing, and as a result, much of our discussions around this technology obscure the central problem: the unregulated capture and use of data.
The most promising path for meaningful change, she says, lies in grassroots work. So-called “AI governance” initiatives are ultimately unlikely to check the vastly powerful tech executives whom she likens to oligarchs.
“To push back against the power structures that use data to maintain power and a kind of authoritarian police state, which is in cahoots with the tech industry, is going to be a hard road,” she says.
Some people are paving that road, but Tucker points out that a national opposition movement has yet to coalesce around data capture. That’s partly why she and her colleagues recently launched the Library of Babel Group, an international working group of teachers and educators who want to resist AI products and surveillance tech in their schools.