Government

ICE Using ChatGPT To Write Use-Of-Force Reports, As Fascism Meets Laziness

History is written by the victors' chosen AI bot.

Sure, becoming an ICE agent sounds fun, but in between all the tear-gassing of clergy and shooting pepper balls at journalists, the job involves a lot of pesky paperwork. I mean, the government simply doesn’t pay enough with its [checks notes] $50,000 signing bonus, 25 percent premium pay, and $60,000 in student loan repayment to justify taking 20 minutes to write a book report about breaking someone’s car window! After a long day of pulling guns on combat veterans and telling them, “you’re dead, liberal,” who has the patience to sit down and chronicle these events just because it’s the quote-unquote “law”?

Fear not! Just fire up ChatGPT and tell it to turn its statistically significant word salad powers toward turning “picked up someone, idk, they looked vaguely Mexicanish” into an official, if probably hallucinated, report.

Because this administration isn’t just about breaking the law, it’s about breaking the fundamental concept of “effort.”

The latest installment in Judge Sara Ellis’s seemingly never-ending mission of reading the riot act to the actual riot police arrived as a 233-page opinion that reads like the tutorial level for a role-reversed Wolfenstein game. Judge Ellis’s account of the Trump administration’s ongoing experiment with turning paramilitary thugs loose on Chicago includes body-cam footage contradicting official narratives, false testimony, and the aforementioned “agent rolled down his window, pointed a handgun out of it, and said ‘bang bang’ followed by something like ‘you’re dead, liberal.'” Agents claimed protesters threw bikes at them (footage showed agents grabbing and throwing the bikes). They said shields had nails in them (footage showed cardboard). They identified “Latin Kings” by their “maroon hoodies” (maroon isn’t a Latin King color, and one person in maroon was an alderman).

And so on, and so on.

But nestled among the higher voltage abuses is this gem of a footnote (flagged by the Chicago Tribune’s Jason Meisner):

The Court also notes that, in at least one instance, an agent asked ChatGPT to compile a narrative for a report based off of a brief sentence about an encounter and several images. 

Whatever qualms one might harbor about AI-assisted drafting, there’s a difference between asking a language model to “help me polish this memo” and “here’s a picture and six words, please brainstorm why that grandmother shouldn’t have mouthed off like that if she didn’t want a billy club to the solar plexus.” A use-of-force report isn’t a diary entry from the front to be read like a “My Dearest Emily…” letter in some future Ken Burns rip-off documentary about the Great Siege of Michigan Avenue. It’s evidence! And this turns it all into constitutional slop.

While the justice system gnashes its teeth over a hallucinated case citation, Trump’s immigration goons have urged us all to hold their figurative beer.

To the extent that agents use ChatGPT to create their use of force reports, this further undermines their credibility and may explain the inaccuracy of these reports when viewed in light of the BWC [body-worn camera] footage.

Judge Ellis stakes her claim to the 2025 understatement of the year trophy.

A cornerstone of America’s looming AI crisis is everyone’s unswerving belief that AI should be used for tasks that it absolutely cannot perform. At the top of the tech world, this fixation drives the cash-hemorrhaging effort to build “general intelligence,” a genuine artificial person that they can pretend would’ve dated them in high school. While researchers in China are building smaller models capable of handling the mundane writing and code clean-up tasks that AI can reliably handle, American AI companies are throwing exponentially increasing resources toward diminishing linear gains to build a bot that could achieve the private equity investor wet dream of an economy with zero actual workers.

But selling this vision to the masses requires messianic messaging about AI’s “potential” to shoulder burdens that it’s incapable of shouldering. AI is great at cleaning up a run-on sentence. Not so good at coming up with your whole motion to dismiss from scratch.

And alarmingly, unconstitutionally terrible at producing an accurate account of a law enforcement incident that it didn’t see based off a one-sentence prompt!

The second Trump administration thrives upon “weaponized laziness.” The appointments are half-assed, the foreign policy is half-assed, and the transportation policy is so half-assed, it’s devolved into complaining that people are dressing half-assed. But unlike the passengers strolling the terminal in pajamas, the Trump administration’s half-assery is focused on the most mendacious, cruel, and dangerous shortcuts. Into this cretinous brew, “ChatGPT use-of-force reports” are just another dog-bites-man story.

And in this metaphor, the dog is a German Shepherd K-9 and the man is an American citizen who happened to be standing outside Home Depot at the wrong time.

Like most AI errors, the fault isn’t with the technology, but with the professional lapses involved in misusing it. ChatGPT wasn’t the one brake-checking civilians to cause accidents as an excuse to justify force or calling neighborhood residents in Halloween costumes “professional agitators.” These yahoos ran a shoot-first-ask-questions-never operation before ChatGPT arrived on the scene.

The irony that ICE is harassing people working for a living (hat tip to Brett Kavanaugh for the working prong of the new racial profiling test) and then outsourcing its own actual work to a stochastic parrot is appropriately dystopian. But it’s certainly lost on the government driving this policy.

Maybe ChatGPT can explain the joke to them.


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
