Cognifying Legal Education

If we can help students understand that technology, and specifically AI, can create a much more streamlined, efficacious means of connecting lawyers to consumers of legal services, and reorient or recalibrate what it means to provide legal services by lawyers, then that’s an enormous benefit for us as legal educators in educating our students to the value and capacity of law to provide access to justice.

Daniel B. Rodriguez, Dean, Northwestern Pritzker School of Law

This is an age of dizzying acceleration in the development of artificial intelligence. As AI becomes both more commonplace and increasingly powerful, the legal profession and the law itself must learn to interact with and adapt to these technologies: autonomous vehicles, smart contracts, the internet of things, robot policing and warfare; the list is seemingly endless. The proliferation of AI demands updated and new regulations as well as new modes of legal practice.

With the explosion of artificial intelligence and related cutting-edge technologies, law schools face huge opportunities to create graduates who confidently rely on technology to better serve their clients and run more efficient practices. Law schools are looking for innovators to help chart the path forward for implementing these technologies and AI capabilities into the curriculum, creating practice-ready graduates and improving access to justice.

AI and other emerging technologies have already transformed process work for practicing lawyers by automating repetitive or commodified tasks, thereby, in the most sanguine view, giving the humans more time to “think like lawyers.” There is not much in the way of analogous process work in the law school curriculum that similarly awaits automation. But law schools are under increasing pressure to meet the demands of a rapidly evolving profession and employment market. Students entering the current and future legal employment market must understand the implications and impact of AI and related technologies on the practice of law and be prepared to oversee their implementation and the resulting processes.

It has been widely recognized both inside and outside the academy that law schools often struggle to find ways to produce graduates who are prepared for the technology-driven 21st-century legal market. Richard Susskind has expressed the fear that “we are training young lawyers to become 20th-century lawyers and not 21st-century lawyers.” Specifically, law schools are continuing to turn out graduates prepared to become “bespoke, face-to-face, consultative advisers who specialize in the black-letter law of individual jurisdictions and who charge by the hour” rather than the “flexible, team-based, technologically-sophisticated, commercially astute, hybrid professionals” the market demands.

Schools cannot help but recognize that change is underway. A recent survey of attendees at the Association of American Law Schools conference by Thomson Reuters found a significant disconnect between the status quo and schools’ aspirations regarding implementing new legal solutions and technology into their curricula. Whereas barely one-fifth of surveyed schools reported that they were “already incorporating” new legal solutions and technology into their course workflow, two-thirds of schools were “inclined” to do so. A plurality (40%) characterized themselves as “very inclined.” (Less than 2% were not inclined.) When asked to identify their motivations for adding new components or technology to classes, the most commonly cited reason—by far—was “a drive to expose students to the same tools practicing attorneys use.” Innovation in legal education is driven, above all, by the recognition that law schools must prepare their graduates for a legal employment market and a legal services delivery model undergoing a radical reconfiguration.

“Who knows what the lawyer world will look like in five to 10 years?” says Kenton Brice, Director of Technology Innovation at the University of Oklahoma College of Law. Brice sees his role and that of his peers as “already thinking through what is the day-to-day going to look like when lawyers have bots in the office.” Drawing on the ideas expressed in Kevin Kelly’s The Inevitable, Brice notes that “where we once electrified things, now we’re cognifying things.”

So if–for example–you have a form data bank, how do we cognify that data bank to start thinking for itself? It’s going to happen. It’s this momentum we have going. As the momentum builds and builds, we’ll be teaching it.

“Yes, we need ‘law and [tech]’ classes that explore the law of emerging technologies. But where we’re really lacking is classes that train students in the business of law and operations, get them to think like entrepreneurs, and have them improve processes, gather data, and use technology to transform the delivery of legal services.”

Dan Linna, MSU Professor of Law in Residence
Director of LegalRnD

AI as Subject and Tool

As of the Spring 2018 semester, only about 10% of ABA-accredited law schools offer at least one course explicitly concerning artificial intelligence. It is a safe bet that this proportion will grow; the only question is how rapidly.

The graphic below depicts the publicly available text of the catalogue descriptions of all these current accredited law school courses (excluding obvious terms such as “law,” “AI,” and the like):

According to Daniel Linna, Director of LegalRnD – The Center for Legal Services Innovation and Professor of Law in Residence at Michigan State University College of Law, it is crucial to distinguish between courses that aim to transform the delivery of legal services at the intersection of AI and legal reasoning, and classes that teach the law of artificial intelligence and robotics. Professor Linna, who will teach both Artificial Intelligence and Legal Reasoning and The Law of Artificial Intelligence and Robotics at Northwestern Pritzker School of Law in the 2018-2019 academic year, believes that “we need ‘law and [tech]’ classes that explore the law of emerging technologies. But where we’re really lacking is classes that train students in the business of law and operations, get them to think like entrepreneurs, and have them improve processes, gather data, and use technology.”

In addition to the emergence of AI as a topic, there is the matter of AI as a pedagogical tool. We asked an array of legal academics for their impressions of how AI is currently being used in law school classrooms to teach the doctrinal curriculum.

They shared a consensus that AI had yet to meaningfully arrive as a teaching tool. Professor Jonathan Askin of Brooklyn Law School spoke for all of his peers when he noted, “Meaningful implementations, by and large, still elude us.”

“AI is so new to most students that for them to even understand the different levels of AI is a starting point. [I]t’s my job to honestly investigate what tools are out there, find them, and then provide them to our students.”

KENTON BRICE, OU LAW, Director of Technology Innovation

Pioneering Law School Programs

Changing the law school model is hard, both culturally and because of institutional guild mentality and inertia. While the skills law grads need to thrive demand something beyond the conventional law school curriculum, the deeply entrenched cultural constraints of legal academia thwart much in the way of internal systemic reform. (In the words of digital technology writer and consultant Euan Semple, “The biggest challenge is that the transition to a new world is in the hands of the old. Those who can bring themselves to use the phrase ‘Digital Transformation’ are invariably those who least understand, or would like, its implications.”)

As William Gibson famously observed, “The future is already here, it’s just not evenly distributed.” The role of AI and other emerging technologies in law schools is still nascent, but a few pioneering institutions have embraced innovation–including AI–as a practical subject matter and, in limited ways, as a pedagogical tool, transforming their students’ career options in the process. One indicator of institutional commitment to integrating emerging technologies–certainly not dispositive, but telling nonetheless–is the wave of “innovation centers” being launched by law schools around the country.

At Michigan State University Law’s LegalRnD, Process is Paramount

MSU Law’s LegalRnD is a mission-based research center focused on innovation in legal services. The mission? “[I]nnovation through legal research and development will bring the law to everyone.” Vital to this mission is, of course, leveraging AI and related cutting-edge technologies, although at LegalRnD everything flows from a commitment to a set of “core disciplines.” These disciplines include project management, process improvement, data analytics, and an understanding of the business of law. According to Jordan Galvin, an MSU Law graduate and current LegalRnD Innovation Counsel, “embracing these disciplines allows students in the LegalRnD program to work with outside partners, including law firms, to build solutions using expert systems or workflow automation.” (Expert systems are an example of rules-based AI that convert the knowledge of an expert in a specific subject to software code.)
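To make the idea concrete, here is a minimal sketch of what a rules-based expert system can look like: an expert’s judgment is captured as explicit if-then rules that a non-expert (or a workflow tool) can consult. The scenario, thresholds, and function name below are hypothetical illustrations, not drawn from any actual LegalRnD project.

```python
# Minimal sketch of a rules-based expert system. The scenario, rules, and
# thresholds are hypothetical illustrations, not legal advice and not taken
# from any actual LegalRnD project.

def eviction_filing_advisor(facts: dict) -> str:
    """Apply hand-coded expert rules to a dictionary of client facts."""
    if not facts.get("written_lease"):
        return "Gather documentation of the tenancy before proceeding."
    if facts.get("days_since_notice", 0) < 30:
        return "The notice period has not run; a filing is premature."
    if facts.get("rent_current"):
        return "Rent is current; review other alleged lease violations instead."
    return "Notice period complete and rent unpaid; a filing may proceed."

# The expert's knowledge lives entirely in the rules above, so a non-expert
# user only has to answer factual questions.
print(eviction_filing_advisor(
    {"written_lease": True, "days_since_notice": 45, "rent_current": False}
))
```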

While there is tremendous buzz and hype around data-driven AI, Professor Linna insists it would be a mistake to overlook the importance of rules-based AI, especially in the legal industry.

In any event, Linna cautions, “we cannot jump directly to technology ‘solutions.’ We need to take a ‘people, process, data, technology’ approach. We need to create learning organizations and empower everyone, especially those closest to the customers, to continuously improve and innovate.”

In this view, there is still a gross excess of bespoke legal work. According to Linna, “We’re reinventing the wheel time and time again. Not only is this inefficient, it results in quality that is far less than what it could be.” If sufficient attention is paid to best practices and standards, as well as to what clients (or as LegalRnD folks put it, ‘customers’) truly value, that sets the stage for actually leveraging AI:

Once process improvement has begun and continuous improvement is part of an organization’s culture, we find many opportunities to exponentially increase efficiency and quality and improve outcomes through document automation, workflow automation, rules-based expert systems, and other readily available, basic technologies. Done properly, getting control of processes also means creating legal-services delivery metrics and capturing data from these processes. All of this not only produces tremendous returns, it also helps position organizations to identify opportunities for data analytics and data-driven artificial intelligence.

Of course, MSU Law and LegalRnD also teach students about data-driven artificial intelligence, which is having a tremendous impact with technology-assisted review in the context of eDiscovery and diligence. For example, MSU Law 2L Justin Evans extols his professors’ foresight in incorporating AI and other cutting-edge technologies within the traditional doctrinal curriculum: “My contracts professor did not just talk about contracts, but also about smart contracts in his classes because he sees in the future this is going to be something that clients will want to explore. [My corporations professor] is talking about using artificial intelligence [for] due diligence because there’s a number of people who would love to be M&A attorneys. So now I’m taking an M&A class, and we’re talking about using artificial intelligence to help speed up the due diligence process and catch any potential risk we’re missing.”

Justin Evans has taken his interest in the intersection of AI and transactional law a step farther than most of his peers. In a forthcoming paper in Pepperdine Law’s Journal of Business, Entrepreneurship and the Law, Evans explores such questions as “whether the use of AI in contracting can reduce risk by comparing the current smarter contract with existing smarter contracts on the blockchain?” and “Could AI help retain business relationships by allowing user inputs that track performance, fluctuations, and extending the time for performance?”

Another key element of LegalRnD’s approach is teaming with outside partners. In Professor Linna’s capstone classes, students have collaborated with top-tier law firms such as Perkins Coie, Akerman, and Davis Wright Tremaine, as well as organizations like Michigan Legal Help, in building rules-based expert systems. According to Innovation Counsel Jordan Galvin, these partnerships are a win-win:

Instead of hiring a development team or an alternative legal service provider, LegalRnD’s partners can use law schools as labs for innovation. They tap into the capital that we have here for free, and they benefit beyond just doing something good–giving back to the law school community and helping the next generation of lawyers–they get actual value added to their operations.

“AI is not your enemy. If you’re a solo or small or medium firm, AI is leveling the playing field.”

EMILEE CROWTHER, OU LAW 3L

Cardozo Law Tech Startup Clinic: Doing Well By Doing Good (But With Lawyerly Skepticism)

Since 2014, Yeshiva University’s Benjamin N. Cardozo School of Law has operated its Tech Startup Clinic to provide a range of services to new technology-based companies in New York City. In the words of Aaron Wright, Cardozo Law professor and founder/director of the Clinic, “We know that software is eating the world, and we know that there’s a lot of activity in New York around startups. New York is now clearly the number two hub for start-up activity outside of the Bay Area, and so we want to make sure that our students are appropriately trained to work with technology clients the moment that they walk out of our doors.” The Clinic aims to create an experience that takes students through the life cycle of an actual area technology startup, from formation through IPO, and beyond.

According to Professor Wright, “You really can’t be an exceptional lawyer at this point in time unless you really understand, not only the law, but also technology. If students want to work on the cutting edge of things like AI, privacy, blockchain technology et cetera, you really have to begin to really understand how these things work. So the Clinic really gives students kind of a front row seat to that.” Since it was founded, the Clinic has represented about 160 clients, selected using an approach modeled on venture capital. “We’re really picky,” Professor Wright notes.

For the Spring 2018 semester, the Clinic has one AI-based client as well as a former student now interning at Clarifai, one of the premier AI companies in New York. Based on the experiences of his students, as well as his own deep and extensive technological background (Wright has helped build an open source search engine and a smart contracting platform, and has a forthcoming book on blockchain technology), he has a well-developed idea of the types of questions AI companies are seeking to solve for the legal profession. As it turns out, Professor Wright characterizes himself as something of an AI skeptic: “I think that, outside of maybe document discovery, I just don’t see how AI can really fulfill [the lawyers’ role], because there’s just not enough data. So you just don’t actually have enough to feed into the machine.”

From Professor Wright’s perspective, in order to automate legal functions in a transactional setting, where one is dealing with “the most arcane and context-specific language that humans have ever created,” two things are required: “A.) a whole bunch of folks that are competent enough to understand these agreements to train these systems to make them effective, and B.) you can’t have 98% accuracy, you need 100% accuracy for lawyers to really adopt it in any type of way so, I think it’s going to be a long slog, personally.”

Professor Wright believes a better approach would be to develop a structured, semantic domain-specific language that lawyers can apply to transactional agreements, so that one can actually define the parameters of what the machine needs to know and provide context. In other words, the premise of some current AI systems is flawed: instead of the algorithms trying to make sense of the data, the data should be created with the algorithms in mind.
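As a rough illustration of that premise (the schema below is a hypothetical sketch, not Professor Wright’s actual domain-specific language), a contract term could be captured as explicit, machine-readable parameters rather than left for an algorithm to infer from prose:

```python
# Hypothetical sketch of a structured, machine-readable contract clause.
# The field names are illustrative only; the point is that the parameters
# are declared up front rather than inferred from free text.
from dataclasses import dataclass
from datetime import date

@dataclass
class ChangeOfControlClause:
    requires_consent: bool   # does a change of control require counterparty consent?
    notice_days: int         # days of advance written notice required
    termination_right: bool  # may the counterparty terminate on a change of control?
    effective_from: date

clause = ChangeOfControlClause(
    requires_consent=True,
    notice_days=30,
    termination_right=False,
    effective_from=date(2018, 9, 1),
)

# Because the parameters are explicit, "review" is a lookup, not an inference.
if clause.requires_consent:
    print(f"Consent required; give at least {clause.notice_days} days' notice.")
```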

Nevertheless, alongside the challenges, Professor Wright sees huge opportunities for his students to do well by doing good:

In the medical profession we’ve seen a wealth of medical devices that are created by doctors, in combination with engineers. Those companies improve the lives of their patients and they also do pretty well for themselves. We think that there’s going to be a similar inflection point when it comes to legal technology, with lawyers working with technologists to decrease the cost, improve the quality, and increase basic access to legal services.

“The AI coming our way, that is coming mainstream, is not going to replace lawyers. It is going to enhance the lawyers playing the game, and the lawyers not willing to learn and grow are the ones who are going to be left behind.”

RYAN DOBBS, OU LAW 1L

Oklahoma: Leveling The Playing Field

Nearly three-quarters of University of Oklahoma College of Law graduates who enter private practice join firms with 25 or fewer attorneys. So the OU Law Center for Technology and Innovation is the perfect place to turn for a sense of how AI and other emerging technologies are being deployed specifically to meet the challenges of small firm practice. The OU Center is built around three core elements: (1) the common platform of the iPad, given free to incoming students; (2) a digital training curriculum; and (3) a state-of-the-art space (the Inasmuch Foundation Collaborative Learning Center).

Technologists–in the legal space and elsewhere–are prone to observe that “When AI works, you stop thinking of it as AI. It’s just software.” Kenton Brice, the Director of Technology Innovation at OU Law, shares this view and points to the law school library as one venue where AI tools have been integrated for some time: “In our law library we use–like most law schools–Westlaw and other research tools. Every single one of these platforms to some degree or another is using a form of AI [in] legal research. And so the law library, through our curriculum, we’re teaching that. And that’s in our 1L legal research courses. Everyone will have to take legal research as part of the legal research and writing program. And they’re getting shown how Westlaw’s research recommendations work.” In fact, Brice sees librarians as crucial to the successful integration of AI and other cutting-edge technologies into law schools: “Librarians need to be advocates of the technology, like they would be anything else, and really use it as an education and teaching tool.”

Beyond the context of legal research, Brice sees himself as an advocate of technologies that will help shape more practice-ready graduates generally, particularly in the smaller-scale practices that are the typical destinations for OU Law grads: “Showing them the tools that are out there that exist that they can start leveraging in their firms. [W]e do focus a lot on solo and small firm practitioners with this technology. We want solo and small firms to use technology to level the playing field in the competitive market out there.” The goal is a graduate with the technological competence and savvy to provide immediate value to her future employer. In the words of Emilee Crowther, an OU Law 3L, “Some [employers] are very technologically advanced and really like to be on the cutting edge of things. But what’s really great is that for mid-sized to small firms, having graduates like us coming out and having all this information on technology is going to be really great. We’re going to be a great asset to our companies.”

Skills-based classes at OU Law present a particularly rich opportunity to integrate cutting-edge technologies: “We integrate with our skills-based courses. So pre-trial litigation techniques, trial techniques courses, negotiation; we had a transaction law practicum where we’re showing them the current smart contract software and contract analytic software out there. These are just places we already plug in to faculties’ own courses, and they let us come and augment the curricula with how lawyers should be or could be working in these areas with technology.”

One concrete example of an AI-fueled technology being mastered by OU Law students is the chatbot, which is increasingly becoming a viable client service channel. According to Zach Williams, an OU Law 2L, “We’re being taught a lot about chatbots. Especially with [our] focus on folks who then go and hang their own shingle or go and work in a small firm, it’s a great way to manage your client intake and all of these questions and flowcharts. There’s a number of different services we’ve been shown how to use. But it puts you at a leg up because you’re able to process those clients, and see and screen them, or maybe pass them on to somebody else at the practice area that you don’t do. So I can maximize my time efficiently [in] intaking clients instead of taking hours to do it.”
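The underlying mechanics are simple enough to sketch. The questions and routing below are hypothetical examples of the “questions and flowcharts” Williams describes, not a reproduction of any specific service taught at OU Law:

```python
# Minimal sketch of the question-and-flowchart logic behind a client-intake
# chatbot. The questions and routing rules are hypothetical examples.

INTAKE_FLOW = {
    "start": {
        "question": "What kind of matter do you need help with?",
        "options": {"family": "family_law", "business": "business_law"},
    },
    "family_law": {
        "question": "Is there an upcoming court date within 14 days?",
        "options": {"yes": "refer_out", "no": "schedule_consult"},
    },
    "business_law": {
        "question": "Is this about forming a new company?",
        "options": {"yes": "schedule_consult", "no": "refer_out"},
    },
}

def run_intake(answers: dict) -> str:
    """Walk the flowchart using the client's answers; return a disposition."""
    node = "start"
    while node in INTAKE_FLOW:
        step = INTAKE_FLOW[node]
        node = step["options"][answers[node]]
    return node  # "schedule_consult" or "refer_out"

print(run_intake({"start": "family", "family_law": "no"}))  # schedule_consult
```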

OU Law’s emphasis on the practical, concrete benefits of leveraging this category of applied AI is, according to Williams, a decisive factor in increasing adoption: “You can’t browbeat people into using technology because they’re not going to use it. … But if you show them how it can help their life literally right now, not three years in practice, not on a far away trial advocacy thing, then I think you reach people.”

“When AI succeeds, you stop thinking about it as AI. You just call it technology. It’s not AI. It’s just my device.”

HUU NGUYEN, Partner, Squire Patton Boggs

AI & Education For Practicing Lawyers

Mandatory Continuing Legal Education is how the profession acknowledges that the law does not stand still; CLE is how a self-regulating profession keeps pace with a rapidly evolving practice. What implications does the explosion of AI and related technologies have for CLE? Huu Nguyen, a corporate partner at Squire Patton Boggs, is a pioneer in teaching AI-themed CLE courses. Nguyen’s courses include one introducing the legal and technology issues in the use of AI for legal research and knowledge management, and another concerning AI, cybersecurity, and the protection of personally identifiable information.

When asked his views on emerging ‘hot topics’ in AI-related CLE content, Nguyen identifies two broad themes: the protection of personally identifiable information and the pitfalls of data bias. Concerning the former, the issues are not always “obvious.” Nguyen notes that much of the data used to train AI is public data, so trade secret protection is unavailable and lawyers are forced to think of novel ways of protecting it. An example scenario explored by Nguyen in his class might run along the lines of, “I can’t tell you hey, I got this data set. Well, it’s not public, but it’s public information because it’s just from the phone book or something. But, what you do with it may be special. And then, also that’s between you and I contractually and you keep it a secret. So it doesn’t have to be an actual trade secret. It can be a covenant.”

The issue of “data bias” is another one of growing interest to AI-themed CLE programs nationwide. Nguyen notes that with “a lot of data, you might get bias. So, gender bias, racial bias. For example, I’m going to give you a loan or not based on all these data points. What happens when it’s the zip code that’s mostly African Americans? In that zip code, you don’t get this loan.” However, the lender in this example cannot simply abdicate responsibility for the loan decision to the algorithm; in the case of biased outcomes or disparate impact, one cannot blame the AI. As Nguyen puts it, “It’s not the AI. It’s like saying my phone made the phone call. No, you made the phone call. But you just happen to push a button and tell the phone to make the call. [If the algorithm produces biased results], well, you probably should figure out what it’s doing.”
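A simple numerical check makes the point concrete. The records, groups, and the 80% rule-of-thumb threshold below are hypothetical illustrations, not drawn from Nguyen’s CLE materials:

```python
# Hypothetical sketch of a disparate-impact check on a loan model's outputs.
# The records and the 80% rule-of-thumb threshold are illustrative only.

decisions = [
    {"zip_code": "10001", "group": "A", "approved": True},
    {"zip_code": "10001", "group": "A", "approved": True},
    {"zip_code": "60628", "group": "B", "approved": False},
    {"zip_code": "60628", "group": "B", "approved": True},
]

def approval_rate(records, group):
    """Share of applicants in the given group whose loans were approved."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(decisions, "A")
rate_b = approval_rate(decisions, "B")

# If one group's approval rate falls well below another's, the deploying
# lender, not the algorithm, owns the outcome and should investigate.
if rate_b / rate_a < 0.8:
    print(f"Potential disparate impact: {rate_b:.0%} vs {rate_a:.0%}")
```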

It’s been over five years since the ABA created an ethical duty of technology competence. In the years since, a majority of states have adopted this duty, yet most lawyers still have little understanding of the scope of this duty and what it means for the day-to-day practice of law. Courts and ethics panels have provided little guidance, and many lawyers still lack basic competence in legal technology. At some point, AI will inevitably manifest within the defined parameters of the duty of competence, but Nguyen believes that day is still far off. For now, attorneys’ use (or understanding) of AI remains largely a business matter. “In the transaction space, we’re looking into e-diligence. So, given a bunch of documents, I need to know if all the change of control provisions says what they are meant to say. I could hire an associate to spend the next week reading a thousand documents, or I could push a button. So, I don’t think it’s an ethical duty, but I think it’s a business decision. Would you want to charge your clients x dollars or y dollars? But of course, you might be getting a different result so you have to have somebody oversee it.”
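The “push a button” workflow Nguyen describes can be sketched in a few lines. The keyword pass below is a hypothetical illustration assuming the agreements are available as plain text; production diligence platforms use trained models rather than a simple regular expression, and a reviewing attorney still oversees the output:

```python
# Hypothetical sketch of screening agreements for change-of-control language.
# A real e-diligence platform would use trained classifiers; this keyword
# pass only illustrates the workflow an associate would oversee.
import re
from pathlib import Path

PATTERN = re.compile(r"change\s+of\s+control", re.IGNORECASE)

def flag_change_of_control(folder: str) -> list:
    """Return the names of text files that mention a change-of-control provision."""
    flagged = []
    for doc in sorted(Path(folder).glob("*.txt")):
        if PATTERN.search(doc.read_text(errors="ignore")):
            flagged.append(doc.name)
    return flagged

# The flagged documents still go to a human reviewer before anything
# reaches the client.
print(flag_change_of_control("contracts"))
```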

If Nguyen had one overarching message for the legal community, it might be “there is nothing new under the sun.” Nguyen counsels a sort of optimistic pragmatism in reminding his current and future fellow lawyers that “we just have to apply analogies most of the time. For example, with the advent of the internet, there were some new concepts, such as cyber intrusion, but that was based on the old common law ‘intrusion upon property.’ Technology is not changing the laws. We just have to apply what we know. This is still a precedent-based profession. Society’s not going to implode or be so disrupted. We have the resources and intellectual infrastructure to adapt to the evolution of technology.”
