
A Tech Adoption Guide for Lawyers

in partnership with Legal Tech Publishing

Artificial Intelligence

Artificial Intelligence Law is Here, Part One

Artificial Intelligence Law is being developed now to set the rules of the road for the use of AI, and we as lawyers should recognize it as a distinct discipline.

The author’s daughter & her robot

In the early-to-mid '90s, while my friends were getting into indie rock, I was hacking away at robots, teaching them to map a room. As a computer science graduate student, I wrote LISP algorithms for parsing nursing records to predict intervention codes. I was no less a nerd (or, to put it more charitably, a technology enthusiast) in law school, where I wrote about how natural language processing could improve legal research tools. I didn’t give much thought, either as a computer scientist or as a law student, to whether artificial intelligence (AI) should be regulated. Frankly, the technology was in such early days that AI regulation seemed like science fiction, à la Isaac Asimov’s three laws of robotics.

Now, however, more than two decades later, we are at the point where regulators are actually regulating AI. I won’t try to define AI, because there are probably as many definitions as there are types of AI; but I think the reader knows AI when she sees it. We’ve used Google’s search engine for decades and interacted with Alexa daily. Often, we use AI without even knowing it, such as when texting with customer-service chatbots. My young daughter codes her robot Coji for fun, and I think nothing of it. The recent surge of interest in AI among lawyers stems not so much from a lack of knowledge about what AI is, but perhaps from anxiety about how it may affect us as lawyers and society in general. I don’t think lawyers will lose their jobs in droves to robots, but the practice of law will nonetheless change. Below is the first part of a three-part discussion of AI Law.

What is Artificial Intelligence Law?

Artificial Intelligence Law is the field of law dealing with the rights and liabilities that arise from the use of AI, and from the AI itself. As lawyers, we know the power of words and labels. At the dawn of the internet age, cyber law brought together many strands of the law, such as intellectual property law, the law of trespass upon property, privacy- and speech-based torts, and commercial law, into one discipline, and that discipline has evolved over time to help businesses and consumers navigate the rules of cyberspace. Similarly, Artificial Intelligence Law is being developed now to set the rules of the road for the use of AI, and we as lawyers should recognize it as a distinct discipline.

While an AI lawyer need not master the technical intricacies of AI, he or she should understand the potential impact AI can have on businesses, consumers and society. One aspect of modern AI, especially the use of deep neural networks, is the black-box nature of the technology. A neural network may be stored as nothing more than large matrices of numbers: input is fed into an algorithm, and the AI is trained to produce certain output. What rules or correlations the AI relies on are often a mystery. The correlations it discovers may be based on impermissible categories, such as race or gender, or on relationships that have disparate impacts. Another aspect of the technology is that AIs are becoming more autonomous. Products using AI have been around for a while, but they are increasingly given the ability to drive, fly, or speak in ways their designers did not specifically anticipate. For example, the National Highway Traffic Safety Administration (NHTSA)’s updated guidance[1] on autonomous vehicles recognizes six levels of vehicle automation, with increasing autonomy from level 0 through level 5. This lack of transparency, together with the shifting of intention to devices, will trigger for the issue spotter in us the questions of how to prove whether a decision was made reasonably or was based on impermissible factors, and whether an actual person had knowledge, or was reckless or negligent, in using the AI.
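
To make the black-box point concrete, below is a minimal sketch in Python (using NumPy). It is illustrative only: the weights are random stand-ins for values a real network would learn from training data, and the applicant features and their names are entirely hypothetical.

    import numpy as np

    # A toy two-layer neural network. The entire "model" is just these
    # matrices of numbers; no human-readable rules are stored anywhere.
    rng = np.random.default_rng(seed=0)
    W1 = rng.normal(size=(4, 8))  # 4 input features -> 8 hidden units
    W2 = rng.normal(size=8)       # 8 hidden units -> a single score

    def predict(x):
        """Feed features through the network; return a score in (0, 1)."""
        hidden = np.maximum(0, x @ W1)                  # ReLU activation
        return float(1 / (1 + np.exp(-(hidden @ W2))))  # sigmoid

    # Hypothetical, already-normalized applicant features:
    # [income, years_employed, zip_code_index, age]
    applicant = np.array([0.55, 0.40, 0.31, 0.29])
    print(f"score: {predict(applicant):.2f}")

    # Inspecting W1 and W2 yields only raw numbers -- there is no rule
    # like "income above X implies approval." If zip_code_index happens
    # to correlate with race in the training data, the network can
    # encode that proxy without anyone ever writing it down.

The point of the sketch is that inspecting the model yields numbers, not reasons, which is precisely what makes it hard to prove how a decision was reached.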

These are all issues first-year law students grapple with, and regulators, judges and practitioners are grappling with them now under the legal framework governing AI as it currently exists. In this article, I discuss several areas of law affected by AI usage, including torts, workplace issues, and bias, with a focus on the themes of transparency and liability, robot speech issues, and existing and proposed AI laws and regulatory guidance. This is not a comprehensive list of U.S. law affecting AI[2], but it should give the reader a sense of the landscape of AI laws affecting U.S. consumers and businesses.

The AI Tortfeasor

Under the traditional framework, when someone is injured using a product like a robot, product liability law determines when and to whom liability attaches. Product liability is based on state common and statutory law, including negligence (e.g., duty, breach, causation, damages), breach of warranty (e.g., merchantability or fitness for a particular purpose) and strict liability. We are all familiar with the Restatement (Second) of Torts § 402A (e.g., design defect, manufacturing defect, failure to warn). Courts are already seeing cases involving traditional product liability and negligence claims arising from AI use, including by and in vehicles and by workplace robots.

In Cruz v. Talmadge, Calvary Coach, et al., plaintiffs were injured when the bus in which they were riding struck an overpass. They alleged that the driver of the bus was following directions provided by a Garmin and/or TomTom GPS system (arguably semi-autonomous AI systems), sought to impose liability on the GPS manufacturers under traditional theories of negligence, breach of warranty, and strict liability, and asserted facts to prove foreseeability and a feasible alternative design.

In Nilsson v. General Motors, LLC, the plaintiff claims that an autonomous vehicle, with its back-up driver aboard, veered into his lane and knocked him and his motorcycle to the ground. The plaintiff alleges that General Motors, LLC, acting through its vehicle rather than the driver, was negligent, but does not claim that the product was defective. The defendant admits that the vehicle is required to use reasonable care in driving. Here the more fully autonomous vehicle (and its owner) is alleged to have been negligent, which raises the question of what the standard of care of a reasonable person is in such a case.

AI is also hurting people in the workplace. In Holbrook v. Prodomax Automation Ltd., et al., an auto parts worker was killed by a robot in her workplace. The plaintiff claims the robot should never have entered the section in which the decedent was working, and alleges defective or negligent design, defective manufacture, breach of implied warranty, failure to warn, negligence, and res ipsa loquitur. Absent from the list of defendants in this case (as in some other workplace tort cases) is the injured party’s employer, likely because state workers’ compensation law bars such claims in certain circumstances. Such liability-shifting or liability-shielding laws promote a public policy under which recovery is provided elsewhere. We may see insurance requirements imposed on manufacturers or users of AI that similarly shift liabilities, in order to protect and promote innovation.

Traditional tort theories apply well to semi-autonomous AI, but as AI becomes more autonomous, courts may try to account for tort-based elements by imputing a duty onto the AI itself, or by imputing the AI’s intention and knowledge onto its owners.

State Regulation of Autonomous Vehicles

In addition to courts grappling with AI torts, as of this writing twenty-three states have enacted autonomous vehicle (AV) legislation to account for the liabilities of self-driving cars. The legislation varies but shares common themes; the National Conference of State Legislatures maintains a helpful summary of the current state of AV laws.[3] Some AV statutes allow testing of AVs under safety standards identified in the legislation, limit certain liabilities (e.g., limiting a manufacturer’s liability for damages arising from third-party modifications of the AV), or change insurance requirements. Governors in ten states have also issued executive orders on AVs covering similar topics. In essence, these laws and actions seek to strike the right balance between liability for more autonomous AI and the promotion of innovation, while adopting some of the concepts of the common law of torts.


Part Two…

Many open questions remain about how to regulate AI, such as how we might craft laws and policies that protect the public while promoting innovation. Tune in for part two of this series, in which I will discuss robot speech issues, AI bias and transparency, the SEC’s regulatory guidance on the use of robo-advisors, proposed AI legislation before Congress, and more.

Squire Patton Boggs partner Huu Nguyen is a deal lawyer with a strong technical background. He focuses his practice on commercial and corporate transactions in the technology and venture space. This article is based on talks Huu gave with colleagues at Squire Patton Boggs, including Zachary Adams, Corrine Irish, Michael Kelly, Franklin G. Monsour Jr. and Stephanie Niehaus, and he thanks them for all their support.

[1] https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety
[2] For examples of other laws that touch AI, see the UETA and the E-sign Act.
[3] http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx; see also the National Highway Traffic Safety Administration (NHTSA)’s updated guidance, https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety