
A Tech Adoption Guide for Lawyers

in partnership with Legal Tech Publishing

Artificial Intelligence

Artificial Intelligence Law is Here, Part Two

How might our existing legal framework adapt to the ever-increasing usage of AI?

Previously In Part One…

This is the second article in a three-part series on AI law. In part one, I discussed how AI is everywhere in the news and, soon, will be in the (legal) office. AI issues are not just for the tech attorneys: robot cars are running over people, and people are suing. I also discussed why AI Law deserves to be its own area of law. Our discussion of AI Law now turns to AI bias and transparency, AI in the workplace, and the protection of consumers. How might our existing legal framework adapt to the ever-increasing usage of AI?

Data In, Bias Out

In New York City, computer algorithms have been used to assign children to public schools, rate teachers, target buildings for fire inspections, and make policing decisions. A recently enacted city law requires the city to create a task force that provides recommendations on how information on agency automated decision systems may be shared with the public and how agencies may address instances where people are harmed by such systems. The law defines an “automated decision system” as computerized implementations of algorithms, including those derived from machine learning or other data processing or artificial intelligence techniques, which are used to make or assist in making decisions. An “agency automated decision system” is an automated decision system used by an agency to make or assist in making decisions concerning rules, policies or actions implemented that impact the public. This is not without precedent: the use of algorithms by agencies has already been challenged in court. In Matter of Lederman v. King, the New York Supreme Court found that the use of the Value Added Modeling (VAM) algorithm to evaluate Lederman was arbitrary and capricious, and therefore impermissible.
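To make the statutory definition concrete, here is a minimal, purely hypothetical sketch of what an “agency automated decision system” could look like: a scoring algorithm that assists in prioritizing buildings for fire inspection, one of the uses mentioned above. The risk factors and weights are invented for illustration and do not reflect any actual city system.

```python
# Hypothetical "agency automated decision system": an algorithm that assists
# a decision affecting the public. All factors and weights are invented.

def fire_inspection_priority(building: dict) -> float:
    """Score a building for inspection priority from simple risk factors."""
    weights = {
        "age_years": 0.02,       # older buildings score somewhat higher
        "past_violations": 0.5,  # prior violations weigh heavily
        "units": 0.01,           # larger buildings put more people at risk
    }
    return sum(w * building.get(k, 0) for k, w in weights.items())

buildings = [
    {"id": "A", "age_years": 90, "past_violations": 3, "units": 40},
    {"id": "B", "age_years": 15, "past_violations": 0, "units": 200},
]

# Rank buildings so inspectors visit the highest-scoring ones first.
for b in sorted(buildings, key=fire_inspection_priority, reverse=True):
    print(b["id"], round(fire_inspection_priority(b), 2))
```

Even this toy example shows where the transparency questions arise: the weights encode policy judgments, and anyone harmed by the ranking would need access to them to contest it.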

While this article focuses on AI Law in the U.S., the EU General Data Protection Regulation (GDPR) arguably will affect U.S. multinational companies with certain nexus in the EU. As such, the GDPR’s regulation of AI is instructive. Article 22 of the GDPR states that “[t]he data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her” unless certain conditions are present. One way to lawfully perform such algorithmic decision-making under the GDPR is if the processing “is based on the data subject’s explicit consent.” Article 15 of the GDPR requires that “[t]he data subject shall have the right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, and, where that is the case, access to the personal data and the following information: the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” As EU regulators begin to enforce the GDPR, U.S. companies may treat these requirements, and the regulators’ interpretation of them, as de facto U.S. practice with respect to AI transparency and algorithmic bias.
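As a purely illustrative sketch of the two provisions quoted above, the hypothetical routine below declines to make a solely automated decision absent explicit consent (Article 22) and returns “meaningful information about the logic involved” alongside the result (Article 15). The feature names, weights, and decision rule are all invented; actual GDPR compliance is a legal question, not a coding pattern.

```python
# Hypothetical Article 22/15 pattern: gate the automated decision on explicit
# consent, and disclose the logic and significance along with the result.

WEIGHTS = {"income": 0.6, "debt": -0.4, "tenure_years": 0.2}  # invented

def automated_decision(subject: dict) -> dict:
    if not subject.get("explicit_consent"):
        # Art. 22: without consent (or another lawful basis), do not decide
        # solely by machine; route the case to a human reviewer instead.
        return {"decision": "referred_to_human_review"}
    score = sum(w * subject["features"][k] for k, w in WEIGHTS.items())
    return {
        "decision": "approved" if score > 0 else "declined",
        # Art. 15: "meaningful information about the logic involved."
        "logic": {k: f"weight {w:+.1f}" for k, w in WEIGHTS.items()},
        "significance": "This score alone determines the outcome.",
    }

print(automated_decision({
    "explicit_consent": True,
    "features": {"income": 1.2, "debt": 0.8, "tenure_years": 3.0},
}))
```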

AI in the Workplace

Among other things, the Fair Credit Reporting Act (FCRA) applies to background screening companies and data brokers. The FTC has entered into consent decrees with several data brokers that provided consumer profiles for employment screening purposes. While these are not AI usage cases per se, the FTC has expressed its intention to regulate AI (as discussed further below), and the prior FCRA enforcement actions could be analogized to misuse of consumer profiles by AI. The FCRA permits providing a consumer report for employment purposes and requires, among other things, that consumer reporting agencies implement reasonable procedures to ensure maximum possible accuracy of reports; that consumers have access to review and dispute the accuracy of information; and that consumers are provided notice if an adverse decision is made based on a consumer report.

In addition to workplace discrimination using AI, there have also been workplace tort cases where people have been injured by robots, as discussed earlier. Moreover, workplace privacy may be affected by the increasing usage of AI: the law is well developed in this area, but AI monitoring may increase risks due to its automated nature. A key challenge for black-box AI, in the employment context and elsewhere, is how to implement procedures to ensure accuracy and to provide a review and dispute process under the FCRA, even though the AI’s data may be incomprehensible to a person.

It should also be noted that, in addition to substantive employment law and the FCRA, in about half the states an attorney may not engage in conduct that the attorney knows or reasonably should know is harassment or discrimination, under a version of the ABA’s Model Rule 8.4(g). This arguably could extend to discriminatory usage of AI in hiring or other employment decisions if the attorney reasonably should have known that the AI’s operation was biased.
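To illustrate the FCRA safeguards described above, here is a minimal hypothetical sketch of an adverse-action workflow for an employment screening score: when the outcome is adverse, the system surfaces the factors that drove it together with a dispute path. This is precisely the step that a black-box model makes hard to implement. All feature names and weights are invented.

```python
# Hypothetical FCRA-style workflow: an adverse decision must come with notice,
# the key factors behind it, and a way to dispute them. Everything is invented.

def top_negative_factors(features: dict, weights: dict, n: int = 2) -> list:
    """Return the n features that pushed the score down the most."""
    contributions = {k: w * features.get(k, 0) for k, w in weights.items()}
    return sorted(contributions, key=contributions.get)[:n]

def screening_decision(features: dict) -> dict:
    weights = {"eviction_records": -0.7, "years_employed": 0.3, "liens": -0.5}
    score = sum(w * features.get(k, 0) for k, w in weights.items())
    if score >= 0:
        return {"decision": "clear"}
    return {
        "decision": "adverse",
        "notice": {
            "key_factors": top_negative_factors(features, weights),
            "dispute": "Contact the reporting agency to correct inaccuracies.",
        },
    }

print(screening_decision({"eviction_records": 2, "years_employed": 1, "liens": 1}))
```

With an interpretable score like this, the key factors fall out directly; with an opaque model, producing that same notice, and keeping it accurate, is the open problem described above.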

The FTC’s Protection of Consumers in the AI Age

Big data often goes hand in hand with AI, because modern AI requires large training sets to learn to make its decisions. The FTC has commented on big data: “companies should be mindful of the law when using big data analytics to make FCRA-covered eligibility determinations.” The FTC has also advised on potential AI collusion and antitrust concerns, having “identified for future discussion the risks associated with the use of data and computer algorithms in enabling new forms of collusion…” The FTC noted that “[t]his discussion assumes that the pricing algorithm is a program written by humans. Although computers equipped with artificial intelligence (AI) or machine learning could, in theory, make decisions that were not dictated or allowed for in the programming, these scenarios seem too speculative to consider at this time. Computer-determined pricing may be susceptible to coordination, just as human determined pricing can be.” However, the question is not whether it is possible for machines to make such decisions automatically, but when, and we may very well see FTC enforcement actions against AI collusion; the sketch below illustrates how such coordination could arise.

Acting FTC Chairwoman Maureen Ohlhausen has also said the agency hopes to take a closer look at artificial intelligence “because it has a consumer protection element to it but also has a competition element to it.” On AI algorithmic bias, she stated: “[w]e do enforce laws that are to protect consumers from discrimination, and I think that’s appropriate for us to continue to think about and to continue to be vigilant…”
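Below is a minimal, purely hypothetical sketch of the coordination concern: two sellers independently run the same simple “never undercut, follow the highest price” rule, and prices lock in at the higher level without any human agreement. The repricing rule and the numbers are invented for illustration.

```python
# Hypothetical pricing bots: each follows the higher of the two prices.
# No communication, no agreement, yet prices converge upward and stay there.

def reprice(own: float, rival: float) -> float:
    """Never undercut; move up to whichever price is currently higher."""
    return max(own, rival)

a, b = 9.00, 10.00
for day in range(3):
    a, b = reprice(a, b), reprice(b, a)  # both sellers reprice simultaneously
    print(f"day {day}: seller A ${a:.2f}, seller B ${b:.2f}")
# Both sellers settle at $10.00: coordinated pricing with no explicit pact.
```

Whether this kind of parallel conduct would support an antitrust claim is exactly the question the FTC flagged for future discussion.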

FDA and AI Medical Devices

In addition to the FTC, other regulatory bodies have opined on AI. Scott Gottlieb, M.D., Commissioner of the FDA, discussed these issues in his recent remarks, “Transforming FDA’s Approach to Digital Health.” To grapple with the large number of digital health-related apps, rather than trying to certify each app, the FDA developed a pilot Software Pre-Cert Program that “aims to look first at the software developer and/or digital health technology developer, rather than primarily at the product, which is what we currently do for traditional medical devices.” Dr. Gottlieb notes that “[i]n time, AI might even be taught to explain itself to clinicians,” and this may open a new dimension for the medical malpractice bar and the protection of patients and consumers.

Part Three…

As we can see, state and foreign regulators are taking AI bias and the protection of consumers seriously, and federal regulators are beginning to grapple with these issues as well. Tune in for part three of this series, in which I will discuss robot and AI speech issues, the SEC’s regulatory guidance on the use of robo-advisors, proposed AI laws before Congress, and, finally, general principles and policies for the use of AI being adopted by industries.


Squire Patton Boggs partner Huu Nguyen is a deal lawyer with a strong technical background. He focuses his practice on commercial and corporate transactions in the technology and venture space. This article is based on talks Huu gave with colleagues at Squire Patton Boggs, including Zachary Adams, Corrine Irish, Michael Kelly, Franklin G. Monsour Jr. and Stephanie Niehaus, and he thanks them for all their support.