Do NOT, I Repeat, Do NOT Use ChatGPT For Legal Research
These chatbots are bald-faced liars that pull facts out of thin air.
In light of recent events, I feel obliged to write this post, if only to create an internet breadcrumb upon which over-caffeinated lawyers hellbent on cutting legal research corners might stumble.
Are you one of those lawyers? Are you pressed for time with a looming deadline for a responsive motion? Did you hear about this thing called ChatGPT, and now you’re wondering if it will conduct legal research and write a complex brief for you? Did you subsequently sign up for a free OpenAI ChatGPT account, and after testing it out with a few simple legal questions, are you ready to submit a query asking it to draft a lengthy brief?
If you’re guilty as charged, and you’ve caught the ChatGPT fever, listen closely. No, I mean really listen. Move closer to your screen. Even closer. Closer still. Perfect.
ChatGPT Is Not A Legal Research Substitute
Now that I’ve got your absolute attention, hear what I say: You must — and I cannot emphasize this enough — RESIST following through with that plan of action. ChatGPT, Google Bard, and Bing Chat are not substitutes for legal research tools. These generative AI tools can do a lot of things fairly well (more on that later) but legal research is not one of them.
Here’s why: In their current iteration, generative AI tools make shit up. These chatbots are bald-faced liars that pull facts out of thin air (called “hallucinations”), including legal cases. They’ll provide false case citations and, when asked, will provide the full text of nonexistent cases to support the fictional citation.
ChatGPT is not a reliable legal research tool. Do not use it for this purpose.
Otherwise, you’ll end up like the lawyers in the highly publicized New York case who submitted motion papers to the court that were based on fake cases generated by ChatGPT. Or more recently, the Colorado attorney who was guilty of the same transgression. And I know you don’t want that.
ChatGPT Can Do A Lot Of Things
That being said, you shouldn’t be afraid to learn about, and selectively use, generative AI tools. This technology is groundbreaking and is improving at an exponential rate.
There are many ways to use ChatGPT that sidestep the hallucination issue while still providing significant value. If you use it in limited, targeted ways, you’ll find it to be an incredibly helpful tool.
A good rule of thumb is that the more precise your query (also known as a “prompt”) and the higher the quality of data provided for analysis, the more useful the output will be.
There are many ways lawyers can use ChatGPT, including brainstorming an issue, drafting a document template, or summarizing provided documents. These types of tasks don’t rely on the chatbot for legal expertise or analysis; instead, they use it as an assistant that can handle mundane work or provide creative inspiration.
Here are a few ways lawyers can responsibly use ChatGPT:
- Summarize or define a legal concept
- Summarize a case
- Summarize transcripts
- Create initial drafts of sample agreements like NDAs
- Provide ideas for topics to cover during direct or cross-examination
- Brainstorm voir dire questions
- Draft client intake forms
- Draft a retainer agreement
- Draft letters to clients or opposing counsel
Mind Your Ethical Obligations
Of course, whenever you use generative AI tools, your ethical duty of competence requires you to review the output for clarity and errors. As is the case whenever you rely on the services of a third party, a thorough review of the work provided is always necessary. After all, the buck stops with you.
Speaking of ethics and best practices, you can learn all about your compliance obligations when using tools like ChatGPT in the post I wrote on that very topic.
As more legal software companies begin to incorporate GPT-powered technology into their platforms, the reliability of its output will increase, and it’s only a matter of time before generative AI becomes commonplace in law firms. That’s why there’s no better time than now to learn about it, take it for a test drive, carefully incorporate it into your daily workflows, and ensure that you fully understand its strengths and limitations.
The future is here, and the advantages of generative AI are many — as long as you can resist the siren song of irresponsible use.
Nicole Black is a Rochester, New York attorney and Director of Business and Community Relations at MyCase, web-based law practice management software. She’s been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, and is the co-author of Social Media for Lawyers: the Next Frontier and Criminal Law in New York. She’s easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter at @nikiblack and she can be reached at [email protected].