“Is this your client?” Those four words in conjunction with an easily swayed jury are the stuff of nightmares for a trial attorney. As often as it may be the case that some dude off the street was unfairly charged with a crime by some power-drunk cop, it is a lot harder to get traction on that argument when there’s a picture or video of your client committing the act(s) they were accused of — you can go from zealous defender of the downtrodden to a real-time rendition of Shaggy’s “It Wasn’t Me” that way. But lo, the reality of that worry is approaching quickly. From Isha Marathe at Law.com:
[D]eepfakes, which have become more sophisticated and easier to create given the democratization of generative AI tools like Midjourney and DALL-E, are inevitably poised to permeate the legal process.
…
To be sure, it’s likely more AI-generated images will come into court as evidence. While some deepfakes will be caught by the first line of defense against inauthenticity—e-discovery professionals well-versed in the Federal Rules of Evidence (FRE)—others may be gatekept by a tech-savvy trial judge. Some deepfakes, however, will end up causing protracted “battles of experts,” or leaving unwanted impressions on a jury.
This bit of apparent sci-fi is worth taking seriously. Let’s remember that AI-generated photos went from looking like real-time stroke symptoms:
Name one thing in this photo pic.twitter.com/zgyE9rL2XP
— 𝒅𝒖𝒎𝒃𝒂𝒔𝒔 𝒂𝒔𝒔 𝒊𝒅𝒊𝒐𝒕 (@melip0ne) April 23, 2019
To this in a very short amount of time:
Were you fooled by these AI-generated images of Pope Francis looking stylish in a puffer jacket? And should what some are calling “the first real mass-level AI misinformation case” be a cause for concern? https://t.co/UJViDIntAj
— New Scientist (@newscientist) March 27, 2023
In passing, I’m sure we’ve all seen photos of politicians like this made by people who think referring to someone as Drump or Byeden constitutes a colorable argument:
https://twitter.com/cedmundwright/status/1650606021771927556?s=46&t=Y36Fcpj8aTBlTBMnZlc6bA
But do you want to be the attorney in a divorce proceeding a few years from now when tech even better than this produces a picture of your client French kissing their not-spouse? I’ll admit that unless your client has a lot of game, the judge is not likely to believe that your client was actually caught kissing Mila Kunis or something, but what if the deepfake is something much more plausible, like one of your client’s coworkers? And as easy as it is to believe that if people really paid attention they’d be able to distinguish the real from the fakes…
"I applied as a cheeky monkey, to find out, if the competitions are prepared for AI images to enter."
A German artist has rejected an award from a prestigious international photography competition after revealing that his submission was generated by AI. https://t.co/4qGjDVbrGb pic.twitter.com/JFXE7dfkgL
— CNN (@CNN) April 19, 2023
The deepfake threat isn’t limited to just photographs either:
Hackers can mimic people you know by using AI to copy their voice and an app to change the caller ID.
“When I do that type of attack, every single time, the person falls for it,” said Rachel Tobac, an ethical hacker trying to raise awareness about scams. https://t.co/1cbZIDUUXj pic.twitter.com/xdTZ0sArTk
— 60 Minutes (@60Minutes) May 21, 2023
If you want to stake your win/loss record on the notion that the average jury is better at determining the veracity of photographs or audio than the people who judge prestigious international photography competitions, or than people with their hard-earned cash on the line, be my guest.
Fortunately, the penchant for analogical and adaptive thinking that runs through lawyers like a pox has already led to some potent defenses against fabricated evidence. From Law.com:
Ron Hedges, a former magistrate judge for the District of New Jersey and the principal at Ronald J. Hedges, told Legaltech News that he believes the main issue around deepfakes in court is going to end up being about authentication, and then about admissibility.
…
“Number one: we’ve got existing rules that courts are going to have to use, because I don’t see any new rules coming down,” Hedges said, referring specifically to Federal Rules of Evidence Rule 901. “That’s a whole series of rules about authentication.”
Until the courts figure this out, try not to piss off anybody who’s really good with Photoshop or Midjourney.
Deepfakes Are Coming to Courts. Are Judges, Juries and Lawyers Ready? [Law.com]
Chris Williams became a social media manager and assistant editor for Above the Law in June 2021. Prior to joining the staff, he moonlighted as a minor Memelord™ in the Facebook group Law School Memes for Edgy T14s. He endured Missouri long enough to graduate from Washington University in St. Louis School of Law. He is a former boatbuilder who cannot swim, a published author on critical race theory, philosophy, and humor, and has a love for cycling that occasionally annoys his peers. You can reach him by email at [email protected] and by tweet at @WritesForRent.