Towards an AI Policy and AI Incident Response Plan for Businesses (Part III)

The final article in a three-part series on AI policies and incident response plans for businesses.

Previously, we discussed the motivation for a business to adopt an AI policy and incident response plan as a way to mitigate risks. Then, in the second part of our series, we discussed common themes and issues that might be covered by an AI policy. Now, in our third article, we discuss an AI incident response plan.

Scope of the Plan

Certain industries – such as aviation and manufacturing – have historically used robotics and automated systems, and therefore may already have established corporate and legal policies in place (whether required by law or as sound business practice). For most industries, however, AI is a relatively new player in the business equation. Below are suggestions for an AI policy that provide general principles for adopting AI, along with suggested principles for an AI incident response plan so that a company can take action when an AI system goes wrong.

The incident response plan should identify what types of incidents will be handled, who is responsible based on the type and severity of the incident, and the extent of incident management. It may also include a crisis communications plan where appropriate. Additionally, the plan should define the team members, their roles and responsibilities for each type and severity of incident, and how responses should be escalated.

The types of AI incidents covered by the plan will naturally depend on the industry in which the company operates; the plan can anticipate, as appropriate, scenarios such as:

  • Violations of the AI policy;
  • Accidents caused by AI;
  • An AI that witnesses a crime;
  • An automated system that is “pranked” or hacked by third parties to operate in an embarrassing way;
  • An AI that is hacked by a malicious intruder.

Team

As with teams responsible for cyber incidents or other corporate crises, the AI team may include senior leadership and subject matter experts. Subject matter experts may include IT security, human resources, the product team, legal counsel, a communications team for internal and external communications, and possibly an outside public relations (“PR”) team, each of whom would be involved depending on the circumstances. A person should also be designated to record the AI incident, assessments made relating to the incident, and resolution of the incident.

Severity

Not every AI incident is a crisis, and each incident should be handled based on its severity. Below are suggested severity levels, which should be adapted to the company’s circumstances and evaluated on a case-by-case basis:

Low Severity Incidents. Some AI incidents may merely be bugs in technology or newly discovered defects that do not otherwise cause harm. Oftentimes, for non-severe AI incidents, the IT or product teams are able to respond and should own these incidents. When AI incidents are embarrassing but not otherwise unlawful or damaging to the company, the PR team may respond.

Medium Severity Incidents. Moderate AI incidents – such as allegations of routine violations of law (e.g., failures to send notices or obtain consent under applicable laws), or a systematic bug that does not threaten the core operations, health, or safety of the business – might be dealt with by teams assigned specifically to that type of AI incident, or by the core AI team in the ordinary course.

Medium severity incidents are AI incidents that are neither routine matters nor of the severe types discussed below. For example, there is a fine line between bugs in an AI system that are low severity incidents, more appropriately handled by a product team, and systematic problems with the design and data of the system that could pose greater risk to the company. The latter would likely need to involve a more complete AI incident team. A product team would want to fix bugs as it finds them; however, if there are repeated problems with bias or with data quality from vendors, the issue should be escalated to the broader AI incident team and to legal.

High Severity Incidents. Severe incidents are non-routine legal claims, such as a claim arising from a major accident or a class action suit (e.g., a “bet the company” claim). They may also threaten health and safety or the core business functions of the company. In such instances, the full AI team should be involved and should work in coordination with other experts to escalate the incident as soon as practicable. Examples include when a physical AI causes physical injuries, is suspected of having a defect that could cause physical injuries, or malfunctions and fails to process transactions or denies customers services that are core business functions.

A severe AI incident could become a corporate crisis, and the AI team should be part of the company’s crisis management team. Even for high severity incidents, however, the AI team may not “own” the response and might instead support other business functions that do “own” the response. For example, one such crisis is a cyber incident that severely affects the company’s operations; in that case, the AI team should work closely with the cyber response team.

At all levels of severity, if the incident involves injury to persons, data, or property, or a possible violation of law, the legal team should work closely as part of the AI incident response team. Whether the incident is handled by other parts of the business or by the core AI team, any takeaways should be used to improve the company’s overall AI policy and incident response plan.

Detecting an AI Incident

Similar to cyber incidents or other crises, AI incidents can occur suddenly and affect many parts of the business, so early detection is crucial. Various parties can identify an incident, including:

  • The legal department (e.g., a court filing by plaintiffs alleging that they have been injured by the AI);
  • Management (e.g., a third-party partner informs the company about the AI incident);
  • Employees (e.g., an employee reports an AI accident or defect);
  • IT or product development (e.g., an unusually high number of defects caused by the AI);
  • Monitoring of social media or the press (e.g., an article alleges that an AI is biased against a certain group); or
  • Automated or semi-automated monitoring of the AI through logs and tests (e.g., keyword detection shows that customer responses to chatbots are unusually angry, logs show an alarmingly high rate of decisions to deny services, or human employees perform randomized screening of test results); a minimal sketch of such a check appears after this list.
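For illustration only, the sketch below shows one way such automated or semi-automated monitoring might be wired up. The log fields, keyword list, and thresholds are assumptions invented for the example and would need to be tailored to the company’s own systems; any alert would still be routed to a human member of the incident team for assessment.

```python
# Illustrative sketch only: field names, keyword list, and thresholds are
# assumptions for this example, not recommendations or any vendor's API.

from dataclasses import dataclass
from typing import Iterable, List

ANGRY_KEYWORDS = {"furious", "lawsuit", "unacceptable", "scam"}  # assumed word list
DENIAL_RATE_THRESHOLD = 0.30   # assumed: flag if more than 30% of decisions are denials
ANGRY_RATE_THRESHOLD = 0.10    # assumed: flag if more than 10% of chats sound angry


@dataclass
class DecisionLogEntry:
    customer_id: str
    outcome: str  # e.g., "approved" or "denied"


def denial_rate(entries: Iterable[DecisionLogEntry]) -> float:
    """Share of logged decisions that denied service to a customer."""
    entries = list(entries)
    if not entries:
        return 0.0
    denied = sum(1 for e in entries if e.outcome == "denied")
    return denied / len(entries)


def angry_chat_rate(transcripts: Iterable[str]) -> float:
    """Share of chatbot transcripts containing any of the assumed angry keywords."""
    transcripts = list(transcripts)
    if not transcripts:
        return 0.0
    angry = sum(
        1 for t in transcripts
        if any(word in t.lower() for word in ANGRY_KEYWORDS)
    )
    return angry / len(transcripts)


def check_for_incident(entries, transcripts) -> List[str]:
    """Return alert messages for a human reviewer; an empty list means nothing was flagged."""
    alerts = []
    d_rate = denial_rate(entries)
    a_rate = angry_chat_rate(transcripts)
    if d_rate > DENIAL_RATE_THRESHOLD:
        alerts.append(f"Denial rate {d_rate:.0%} exceeds threshold; escalate for review.")
    if a_rate > ANGRY_RATE_THRESHOLD:
        alerts.append(f"{a_rate:.0%} of chat transcripts contain angry keywords; escalate for review.")
    return alerts
```

Note that the monitoring in this sketch only flags and escalates; deciding whether a flagged pattern is an actual AI incident remains with the people designated in the plan.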

Assessing and Managing the AI Incident

When responding to an AI incident, the dedicated team of experts must continuously assess (a) the severity of the incident, (b) the cause of the incident, (c) how to reassert control over the function provided by the AI (whether through a properly functioning AI, a different vendor’s system, or human intervention), and (d) how to minimize the risk caused by the incident.

An AI incident can be caused by a number of factors, such as:

  • a malicious actor (e.g., a disgruntled current or former employee);
  • an external threat (e.g., a hack);
  • a vendor’s failure;
  • poor training data; or
  • simply a design flaw of the AI.

An important aspect of determining the cause is separating fact from suspicion. For example, a malfunction of the AI may be assumed to be a bug but may in fact be the result of a hack. Data forensics may be needed to establish this, as appropriate. Fact gathering is crucial.

Asserting control over the situation can range from a straightforward fix and testing of known bugs to, more radically, reverting to humans performing the function. Depending on the severity level and circumstances, external communications may need to be issued to the press and to those affected. Compensation may also be offered to third parties for any harm or inconvenience they experienced.

Executing the AI incident response plan and managing the AI incident are cross-disciplinary activities. While a core team of experts familiar with how the AI operates within the company will act as the “first responders,” other disciplines, such as those with deep knowledge in cybersecurity, employment and HR, litigation, and potentially, criminal prosecution, may round out the team.

The assessment begins when the AI incident first becomes known and should continue until the incident is resolved. A low severity incident may escalate to a higher severity incident over time. As part of the assessment, the company should convene the appropriate team to manage the particular level and type of AI incident. The incident should also be recorded, and lessons learned from it should be used to improve the existing AI policy and incident response plan.

Engaging Lawyers

The company should engage its in-house attorneys, and may also consider engaging outside counsel, on issues involving AI.  Attorney-client communications with counsel should be protected by privilege, and lawyers should be mindful of privilege issues when engaging third party IT consultants to analyze AI incidents. Only lawyers should state legal conclusions to law enforcement or third parties, such as whether there was an AI defect or a violation of law. Outside law firms should appoint a point person to coordinate with the client, law enforcement, AI technical consultants, and PR firms appropriately, as well as manage the project of responding to the AI incident.

Lawyers may need to issue a litigation hold for some AI incidents, and may need to begin preparing for litigation. In the instance where an AI goes awry due to a human agent, it might be prudent for a lawyer to send a cease-and-desist letter to that person. Lawyers may also need to notify law enforcement and/or regulatory agencies of an accident or crime involving the company’s AI. Moreover, to the extent required by law, lawyers may need to notify persons or businesses affected by the AI misconduct.

Engaging Outside AI Consultants

While a company may have internal experts on AI, retaining an outside AI expert may be helpful for various reasons, including obtaining a second, objective opinion on the AI misconduct, helping preserve attorney-client privilege when the AI consultant works with and at the direction of attorneys, and handling situations where the AI incident was caused by internal company personnel.

Law Enforcement

Based upon the determination of the company’s legal team, law enforcement (such as police, state attorneys general, or federal law enforcement) may need to be involved if the AI incident arises from a crime committed by others. Contact information for the varying levels of law enforcement should be included in the AI incident response plan.

Conversely, law enforcement may be called by third parties who have been harmed by the company’s AI. Law enforcement may also be called if the AI is a witness in an investigation, such as when the AI records potential evidence. In such cases, the legal group should respond to law enforcement in order to protect the company’s rights. Its representation of the company may include communicating with law enforcement and witnesses, reviewing documents, and responding to requests from law enforcement.

Public Relations

The company’s public relations group should manage all interactions – through one spokesperson – with the press and the general public relating to the AI incident, and act in direct coordination with the other appointed AI team members, including legal. While it is advisable to have a crisis communications plan and pre-approved messages in place before one needs it, the appropriate level of response should be assessed and addressed on a case-by-case basis. The PR team may also monitor social media for information related to the incident.

Third Party Services, Contact List and Resources

Resources related to the AI, such as specifications, lists of stakeholders, and contact information for those with know-how about the AI, should be maintained and updated so that they can be accessed easily during an incident response. The technology is typically interconnected with many third-party services and data sources, which could include one or a combination of payment vendors, outsource providers, cloud providers, machine-learning-as-a-service providers, messaging providers, domain name registrars, and data providers, among many more. The AI incident team must be able to contact the various third-party vendors and reassert control over any of those services that have been compromised. Having proper credentials, such as user names, passwords, and answers to challenge questions, is crucial. As in a cyber incident, the credentials themselves may be compromised.

Backups, Restorations, Data Recording and Auditing

Since AI is often embodied in software, it is recommended to back up various iterations, not only to be able to revert to older working versions, but also to enable forensics to determine how the AI misbehaved (or did not). Furthermore, much of an AI may be embodied in data, such as neural networks, rules, and statistical data, which should also be backed up. As part of an IT or product function, the company should audit whether these backups are properly performed and whether data and AI software can be restored. During an AI incident response, such restoration and forensics may be crucial to minimizing harm to the company.
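As an illustration only, the sketch below shows one simple way timestamped backups and a basic integrity audit might be implemented. The backup location, file naming, and hash-based check are assumptions invented for the example; a real backup program would also cover training data, databases, and periodic restoration testing.

```python
# Illustrative sketch only: paths, naming, and the checksum audit are assumptions
# for this example, not a statement of how any particular company's backups work.

import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

BACKUP_ROOT = Path("/backups/ai-models")  # assumed backup location


def sha256_of(path: Path) -> str:
    """Checksum recorded at backup time and re-checked during audits."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def back_up_model(model_path: Path) -> Path:
    """Copy a model (or data snapshot) into a timestamped backup folder."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest_dir = BACKUP_ROOT / stamp
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / model_path.name
    shutil.copy2(model_path, dest)
    # Record a checksum alongside the copy so a later audit can verify integrity.
    (dest_dir / (model_path.name + ".sha256")).write_text(sha256_of(dest))
    return dest


def audit_backup(backup_path: Path) -> bool:
    """Audit step: confirm the stored checksum still matches the backed-up file."""
    recorded = (backup_path.parent / (backup_path.name + ".sha256")).read_text().strip()
    return recorded == sha256_of(backup_path)
```

Keeping each backup in its own timestamped folder, as in this sketch, preserves the older iterations that forensics may need to compare against the misbehaving version.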

Conclusion

AI brings great promise to business but should be adopted with prudence. In this series, we discussed general considerations for an AI policy and AI incident response plan, but each company is different, and its policies and plans should be created and maintained on a case-by-case basis. The policies and plans should evolve with changing legal and ethical standards. Together with a company’s existing policies and processes, an AI policy and incident response plan will hopefully help the company thrive in the Fourth Industrial Revolution.