Should Law Firms Ban Or Embrace Generative AI?

Eliminating generative AI from your firm’s toolkit increases your firm’s risk of falling behind in efficiency, cost-effectiveness, and innovation.

“We completely outlawed the use of ChatGPT in our firm,” a law firm partner recently told me.

“How’s that working for you?” I asked. 

“Our lawyers are probably among the most adept generative AI users around,” he shrugged. 

Firms may monitor online traffic and prohibit lawyers from visiting specific domains on office computers. But lawyers can, and will, still experiment with generative AI on their personal devices and at home. 

Some may even use tools like ChatGPT for legal work without telling anyone. They could divulge confidential client information and make important decisions relying solely on AI analysis without you ever knowing.

That unenforceability alone highlights the short-sightedness of a total ban on generative AI. Instead, law firms can benefit by carefully crafting a use policy to monitor and guide lawyers. Here’s why, and how it works.

Bans Overlook The Many Benefits AI Offers

Banning generative AI outright may seem like a straightforward way to prevent the attendant ethical and legal issues. But generative AI can create content, draft and summarize documents, and perform tasks with stunning speed and impressive accuracy. Leveraging generative AI can save lawyers significant time and resources while helping to produce high-quality legal work at lower costs.

Eliminating generative AI from your firm’s toolkit increases your firm’s risk of falling behind in efficiency, cost-effectiveness, and innovation. A ban also ignores the reality that clients increasingly expect lawyers to use technology to better serve their needs. Seven in 10 in-house counsel recently surveyed expect their law firms to use cutting-edge technology, including generative AI tools. Waiting too long or failing to do so could harm your firm’s ability to remain relevant. 

Plus, bans drive AI use underground, where it can be misused without oversight. It’s better to foster a culture that values and promotes the responsible use of AI. A good AI use policy helps lawyers understand how and when to use generative AI to minimize risk and maximize value.

What Makes A Good AI Use Policy?

A comprehensive AI use policy can help provide much-needed guidance to lawyers and staff. It ensures legal professionals deploy and use AI solutions ethically and transparently and fosters an open environment for discussing ethical quandaries, expressing concerns, and reporting difficulties with AI systems. Such a policy may:

  • Clearly define the scope of use of generative AI applications in the firm. 
  • Establish requirements for data privacy and client confidentiality. 
  • Encourage discussions with clients about the use of AI in their cases. 
  • Require lawyer and staff training on generative AI best practices and ethical considerations.

As the AI field moves quickly, it makes sense to designate a “policy owner” to keep abreast of technological advancements and regularly update the policy as needed. Many resources are available to help lawyers and firms understand AI and how to use it ethically and responsibly.

A Well-Crafted AI Policy Is An Empowerment Tool

A well-crafted generative AI use policy ensures your firm’s legal professionals know the ethical guidelines surrounding AI usage, including the potential risks and consequences. Its guidance encourages legal professionals to develop and sharpen their technical skills and readily adapt to change — the type of skills and abilities needed to compete in the future.

Your policy will encourage compliance with legal and ethical requirements and foster a culture of competence and integrity. Trained and knowledgeable AI users are better equipped to make conscientious decisions grounded in innovative thinking. Embrace a policy-driven approach to harness the benefits of generative AI while maintaining the highest level of professionalism and adherence to ethical standards.


Olga V. Mack is the VP at LexisNexis and CEO of Parley Pro, a next-generation contract management company that has pioneered online negotiation technology. Olga embraces legal innovation and has dedicated her career to improving and shaping the future of law. She is convinced that the legal profession will emerge even stronger, more resilient, and more inclusive than before by embracing technology. Olga is also an award-winning general counsel, operations professional, startup advisor, public speaker, adjunct professor, and entrepreneur. She founded the Women Serve on Boards movement that advocates for women to participate on corporate boards of Fortune 500 companies. She authored Get on Board: Earning Your Ticket to a Corporate Board Seat, Fundamentals of Smart Contract Security, and Blockchain Value: Transforming Business Models, Society, and Communities. She is working on Visual IQ for Lawyers, her next book (ABA 2023). You can follow Olga on Twitter @olgavmack.