Jonathan Turley Finds A Way To Make ChatGPT All About Him

It seems the GPT harassed Jonathan Turley.

(Photo by Chip Somodevilla/Getty Images)

Unsatisfied with merely offering lengthy rants on the Trump indictment and hyperventilating about law school students using social media to voice their opinions, George Washington Law’s Jonathan Turley managed to finagle himself into the ongoing ChatGPT discourse. After all, there was one section of the newspaper that had no reason to mention Turley by name, and this injustice could not be allowed to stand.

Thus it came to pass that Jonathan Turley inserted himself into the ChatGPT discourse. From the Washington Post:

The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.

This story was obviously nonsense from the start. Turley would have to take time out of his busy schedule writing ideological thirst traps for Fox News producers and actually interact with students to harass them. And while the Alaskan legal community is genuinely fascinating — consider the truly epic logistical hurdles they have to overcome! — it boggles the mind why a George Washington Law constitutional law course would see fit to jet off to Anchorage for a few days.

Or, for that matter, a Georgetown Law course, as the bot incorrectly identified Turley’s school as well.

It’s doubtful that the Washington Post would cover this but for the fact that the initial search implicated the Post as the source of the false allegation. Without that detail, this story could easily have lived and died with Turley’s column in USA Today, the “McNewspaper” you vaguely remember hotels giving out for free in the ’90s and were pretty sure had gone out of business by now.

As it stands, the Washington Post delivers a deep dive into the hallucination problems with language algorithms bookended by a couple of paragraphs about Turley.

Newsjacking complete!

We’ve already discussed the growing defamation risks surrounding AI in the case of an Australian mayor incorrectly pegged as being on the wrong side of a bribery scheme in which he actually acted as the whistleblower. In that instance, the AI screwed up the critical details of a real event in someone’s bio. In the Turley incident, however, the bot made stuff up from whole cloth.

Which really makes you wonder: what prompted ChatGPT to even start down this road? The WaPo article begins by identifying “a fellow lawyer in California.” Who could that be?

Eugene Volokh, a law professor at the University of California at Los Angeles, conducted the study that named Turley.

Of. F**king. Course.

Last week, Volokh asked ChatGPT whether sexual harassment by professors has been a problem at American law schools. “Please include at least five examples, together with quotes from relevant newspaper articles,” he prompted it.

No one is disputing that the algorithm produced false results here, but what is up with this prompt engineering? As Volokh describes the queries he flung at ChatGPT, they all seem designed for the express purpose of triggering a hallucination.

These language models are splashy but absolutely not ready for primetime. They function, fundamentally, by guessing what the user wants to hear in response to the query. Some of Volokh’s queries, according to his initial article, were open-ended, while others sought a set number of examples. When he asks for five examples, the model is going to come up with five, even if one of them is made up.
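For a sense of what that kind of query looks like in practice, here is a minimal sketch using the OpenAI Python SDK. The model name, client setup, and everything outside the quoted prompt are assumptions for illustration, not Volokh’s actual setup; the point is that nothing in the request or the response verifies whether the “relevant newspaper articles” the model quotes actually exist.

```python
# Minimal sketch, not Volokh's actual code: sending a "give me five examples
# with quotes" prompt through the OpenAI Python SDK. The model name is an
# assumption; the prompt text paraphrases the query described in the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model for illustration
    messages=[
        {
            "role": "user",
            "content": (
                "Has sexual harassment by professors been a problem at "
                "American law schools? Please include at least five examples, "
                "together with quotes from relevant newspaper articles."
            ),
        }
    ],
)

# The model will try to satisfy the request for five sourced examples whether
# or not five verifiable examples exist; nothing here checks the citations.
print(response.choices[0].message.content)
```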

So… how many searches did he perform before he got the wrong answer he apparently was hoping to find? If this one hadn’t fingered Turley, would he have asked for 10 examples in the hopes of tripping it up? 20? At a certain point, is the system’s defamation bug located between the chair and the keyboard?

This isn’t meant to downplay the risks inherent in a large language model spouting off without carefully constructed guardrails — these tools will only work well when they’re built to know what they don’t know — but when you enter the experiment intending to find false results, it’s not necessarily shocking when you end up finding false results.
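What “built to know what they don’t know” might look like is an open design question, but one crude sketch: before a product surfaces a claim that cites a source, check whether the cited source can actually be reached. The function names and data shape below are hypothetical, and a real guardrail would need far more than this (paywalls, archives, matching quotes to article text), but it illustrates the kind of verification layer these tools currently lack.

```python
# Hypothetical sketch of a citation guardrail: drop any model-generated claim
# whose cited URL can't be reached. Illustrative only; it would not catch a
# real article being quoted inaccurately, and many legitimate sources sit
# behind paywalls or have moved.
import requests


def source_appears_to_exist(url: str, timeout: float = 5.0) -> bool:
    """Return True only if the cited URL responds with a non-error status."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False


def filter_claims(claims: list[dict]) -> list[dict]:
    """Keep only claims whose cited source can actually be reached."""
    return [c for c in claims if source_appears_to_exist(c.get("source_url", ""))]


# A fabricated citation to a nonexistent article gets filtered out instead of
# being presented to the reader as fact.
claims = [
    {
        "text": "Professor accused of harassment on a class trip.",
        "source_url": "https://www.washingtonpost.com/example-nonexistent-2018-article",
    },
]
print(filter_claims(claims))
```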

Back to the Washington Post:

Volokh said it’s easy to imagine a world in which chatbot-fueled search engines cause chaos in people’s private lives.

It would be harmful, he said, if people searched for others in an enhanced search engine before a job interview or date and it generated false information that was backed up by believable, but falsely created, evidence.

“This is going to be the new search engine,” Volokh said. “The danger is people see something, supposedly a quote from a reputable source … [and] people believe it.”

That… happens now. Long before anything as advanced as GPT-4, the combination of internet trolls and social media algorithms was already elevating false and misleading claims. Volokh has criticized past efforts to regulate Facebook and Google for elevating nonsense, claiming that libel law is the sufficient and only cure. And while the result he managed to produce regarding Turley would certainly be libel, if the harm is “people see something, supposedly a quote from a reputable source … [and] people believe it,” then that isn’t cured by limiting society to proving libel. You can claim that libelous statements are uniquely worse, but people deal with a whole lot of private-life chaos created by privileging mischaracterization and innuendo that falls short of libel.

Is ChatGPT — if left to its own devices and not potentially prodded to produce bad results — really worse than a social media company’s “dumb” algorithm convincing your parents to harbor outrage toward a specific poll worker because Fox News was “just asking questions” and all their friends liked it?

Again, this is a serious problem with these models that will need to be hashed out by those integrating the tool into viable products. But before getting all doom and gloom about the “unique dangers of AI,” take a second to consider that some of this turgid negative coverage might be coming from less-than-good-faith actors working overtime to make sure they secure their share of the GPT spotlight.

ChatGPT invented a sexual harassment scandal and named a real law prof as the accused [Washington Post]
ChatGPT falsely accused me of sexually harassing my students. Can we really trust AI? [USA Today]
Large Libel Models: ChatGPT-3.5 Erroneously Reporting Supposed Felony Pleas, Complete with Made-Up Media Quotes? [Volokh Conspiracy]

Earlier: ChatGPT Accused Mayor Of Bribery Conviction, Faces Potential Defamation Claim


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
