Towards an AI Policy and AI Incident Response Plan for Businesses (Part II)

Part two of a three-part series on AI policies and AI incident response plans for businesses.

Previously, we discussed the motivation for a business to adopt an AI policy, including models from industry groups and governments. Now, we turn to common themes and issues that should be covered by an AI policy.  

Scope of the AI Policy

While many policies have been proposed by industry groups, such as the IEEE, and by industry technology leaders, such as Google and Microsoft, there are to date no clear-cut AI industry standards. As the industry moves toward standardization, companies should proactively use the broad principles suggested in this article when developing their own internal- and external-facing AI policies. External AI policies should be aspirational and simpler, while internal AI policies can be more prescriptive, especially when they are meant for vendor management. The AI policy should apply to the use of AI by employees, contractors, and customers, as well as to the AI product itself. The AI policy should be integrated with the company’s other policies, including vendor policies, ethical codes, cyber incident response plans, and crisis management plans. The company should also consider whether the AI policy should be narrowly focused on certain types of AI, such as chatbots, or broadly focused on all automated systems, and should carefully define the scope of the policy.

Safety

Safety has long been a concern with the use of AI, as highlighted by OSHA’s Guidelines for Robotics Safety, for example. Under an AI policy, the AI and its use should be safe:

  • The AI product should be designed, produced, developed, and/or manufactured according to specifications that ensure safety.
  • Safety includes not only minimizing risks of torts to people and property, but also minimizing risks of cyber or privacy intrusion, data loss, and other violations of law.
  • As with other products, an AI manufacturer should provide clear instructions for safely using the AI and adequate warnings of the dangers of using its products. This includes, as applicable, post-sale warnings.
  • People should choose how and whether to delegate decisions to AI systems, and be able to monitor that decision-making as with tasks delegated to humans. They should also have the ability to take over the AI’s operations in emergencies (a minimal sketch appears at the end of this section).
  • If an AI system causes harm, there should be recorded data and mechanisms to discover why.
  • The principles of safety should apply to physical AI (e.g., self-flying drones) and to non-physical AI (e.g., chatbots that operate in cyberspace).

Questions to ask:

  • Does the product meet legal safety requirements?
  • Is the AI product legal and operated legally?
  • Does the manufacturer properly recall or issue fixes for AI products, where obligated?
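To make the delegation, override, and record-keeping principles above concrete, here is a minimal Python sketch of a wrapper that lets a human operator monitor an AI’s suggestions and take over in an emergency. The names (SupervisedAI, emergency_stop, the "halt" fallback) are illustrative assumptions, not any particular product’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SupervisedAI:
    """Illustrative wrapper: a human can monitor the AI and take over in emergencies."""
    override_engaged: bool = False
    log: list = field(default_factory=list)

    def decide(self, situation: str, ai_suggestion: str,
               human_choice: Optional[str] = None) -> str:
        # A human choice always wins; after an emergency stop, the AI never decides.
        decision = human_choice or ("halt" if self.override_engaged else ai_suggestion)
        # Record enough data to reconstruct later why a decision was made.
        self.log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "situation": situation,
            "ai_suggestion": ai_suggestion,
            "human_choice": human_choice,
            "decision": decision,
        })
        return decision

    def emergency_stop(self) -> None:
        """Engage the human override for all further operations."""
        self.override_engaged = True

drone = SupervisedAI()
drone.decide("clear airspace", ai_suggestion="continue route")     # AI decides
drone.emergency_stop()
drone.decide("obstacle detected", ai_suggestion="continue route")  # returns "halt"
```

The design choice worth noting is that, once engaged, the override is the default: the AI must be affirmatively re-enabled rather than the human having to intervene each time.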

Ethical and Beneficial

  • The AI, its production, and deployment should be beneficial (or at least not detrimental) to the company and its customers, and to society in general.
  • Deployment of AI should take into account the needs and viewpoints of the company’s various stakeholders.
  • The usage of AI should take into account accessibility for those with disabilities, both enhancing access where possible and minimizing adverse impacts (for example, an online chatbot might also have a voice interface, or vice versa).
  • The usage of the AI should align with the company’s ethical codes and principles.

Questions to ask:

  • Does the AI promote civil activities, where appropriate (e.g., AI tools that do not hinder freedom of speech or assembly)?
  • Depending on the industry, does the AI accommodate diverse populations?

AI Bias, Explainability and Transparency

  • Before an AI is used, the company should determine that the technology does not have built-in bias arising from its programming or its data.
  • The company should ensure that the AI vendors providing the tool to the company are aware of and take into account the potential for bias, including disparate impact.

Questions to ask:

  • Can the result of the AI’s decision be explained in a meaningful and lawful way to affected stakeholders, where appropriate?
  • Is the training set examined to minimize the potential for data bias?
  • Do the AI’s data and machine-learning operations reinforce bias? Do the operations fail, or perform poorly, for certain segments of the population due to age, gender, ethnicity, etc.? (A minimal check is sketched after these questions.)
  • Does an AI identify itself as an AI where appropriate or required by law?
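As one concrete way to examine outcomes for disparate impact, the following Python sketch computes selection rates by group and flags any group falling below the “four-fifths” ratio that U.S. enforcement agencies have used as a rule of thumb for adverse impact. The data, group labels, and 0.8 threshold are illustrative; an actual review should be designed with counsel and statisticians.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose rate is below `threshold` of the best-off group's rate."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Illustrative data: (demographic group, loan approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates)                     # {'A': 0.666..., 'B': 0.333...}
print(four_fifths_flags(rates))  # {'B': 0.5} -> potential disparate impact to investigate
```

A flag is a trigger for investigation, not a legal conclusion; the same check can be run on training data before deployment and on live outputs afterward.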

Monitoring, Accountability, Controls, and Oversight

The company should have control and oversight over what the AI does and how it operates. This should be developed not just from a product standpoint, but also from a legal and corporate policy standpoint.

  • The use of AI should be monitored for potential legal and ethical issues.
  • The AI should be designed to retain records and to allow for the re-creation of decision-making steps or processes, especially in case accidents occur (a logging sketch follows the questions below).
  • Legal counsel should be part of the process of accountability, controls, and oversight in order to protect the attorney-client privilege as well as to ensure legal compliance.
  • The AI and its usage should be audited and auditable.

Questions to ask:

  • Is there a single officer, director, or manager, such as a Chief Artificial Intelligence Officer, who oversees the company’s AI program?
  • Does the company understand the AI and its risks?
    • Is the AI semi-autonomous or fully autonomous?
    • Does the AI incorporate machine learning or is it static?
    • Are people interacting directly with the AI, and how?
  • How does the company know if the AI is operating properly?
  • Is the keeping of AI information part of the company’s records retention policy?
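As a minimal sketch of the kind of record-keeping these questions assume, the following Python function appends one decision record per line to a JSON-lines log so that the decision can later be re-created and audited. The field names, file path, and model identifier are hypothetical.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(path, *, model_version, inputs, output, confidence=None, operator=None):
    """Append one AI decision record to an append-only JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model/code produced the decision
        "inputs": inputs,                # data the model saw (mind privacy rules)
        "output": output,                # what the model decided
        "confidence": confidence,        # the model's own score, if available
        "operator": operator,            # human in the loop, if any
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision("ai_audit.jsonl",
                model_version="credit-scorer-1.4.2",
                inputs={"income": 52000, "region": "NE"},
                output="refer to human underwriter",
                confidence=0.62,
                operator="j.smith")
```

How long such logs are kept, and what they may contain, should be governed by the records retention policy and reviewed by counsel for privilege and privacy concerns.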

Third Party Usage

A company should consider how its vendors, customers, or other third parties use its AI, and what restrictions it should put in place.

  • Limits on the use of the company’s AI should be included in applicable terms of use.
  • As with other company policies, such as ethical procurement, a company should require AI vendors to comply with the company’s policies as if the company itself were developing the AI provided by those vendors.
  • The AI policy should be part of general supply chain policies that enforce an ethical code of conduct.

Questions to ask:

  • Do the vendors understand the risk posed by their AI?
  • Do the vendors for AI services and/or products used by the company have processes in place to comply with the company’s AI principles and applicable law?
  • Are users adequately informed of the risks and responsibilities in their use of company AI?

Intellectual Property in AI

  • A company’s AI data and software should be protected by agreements and reasonable security processes and procedures.

Workplace AI

  • The AI policy should be consistent with HR policies, including privacy notices.
  • Employees should acknowledge that AI may be used to monitor them.
  • Employees should acknowledge their responsibility to properly operate company AI.
  • To the extent required by law, employees should be notified that employment decisions may be based on AI or its output.

GDPR

To the extent a company is subject to the GDPR, an AI policy may need to align with the company’s overall privacy and GDPR policies. Specifically, the AI policy should require compliance with GDPR Article 22, which states:

  1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
  2. Paragraph 1 shall not apply if the decision:
    (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;
    (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
    (c) is based on the data subject’s explicit consent.
  3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.
  4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.

The AI policy should also incorporate Article 15(1)(h):

The data subject shall have the right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, and, where that is the case, access to the personal data and the following information:

… (h) the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.

Questions to ask:

  • Do the AI policy and incident response plan safeguard the data subject’s rights under the GDPR?
  • How do data subjects exercise their rights under the AI policy and the GDPR?
  • Is there a contact email address, and is an AI team member responsible for responding?
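To illustrate how Article 22 might be operationalized in software, here is a hedged Python sketch that routes any solely automated decision with legal or similarly significant effect to human review unless one of the Article 22(2) exceptions applies. The exception labels track Article 22(2); the function and class names are hypothetical, and real routing logic would be specified with counsel.

```python
from dataclasses import dataclass
from typing import Optional

# Grounds under GDPR Article 22(2) that permit a solely automated decision.
ARTICLE_22_EXCEPTIONS = {"necessary_for_contract",  # Art. 22(2)(a)
                         "authorised_by_law",       # Art. 22(2)(b)
                         "explicit_consent"}        # Art. 22(2)(c)

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str
    legal_or_significant_effect: bool
    exception: Optional[str] = None  # which Article 22(2) ground applies, if any

def route_decision(decision: AutomatedDecision) -> str:
    """Decide how an automated decision should be handled under an Article 22-aware policy."""
    if not decision.legal_or_significant_effect:
        return "automated_ok"  # Article 22(1) is not triggered
    if decision.exception in ARTICLE_22_EXCEPTIONS:
        # Under Article 22(3), grounds (a) and (c) still require safeguards:
        # human intervention on request, and the right to contest the decision.
        return "automated_with_safeguards"
    return "human_review_required"

d = AutomatedDecision("subject-42", "deny credit", legal_or_significant_effect=True)
print(route_decision(d))  # human_review_required
```

The sketch also suggests where Article 15(1)(h) disclosures hook in: any decision reaching "automated_with_safeguards" should be accompanied by meaningful information about the logic involved.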

Preparedness

While some of the above principles can be part of an external-facing AI policy, an internal policy should also place an emphasis on preparedness for when an AI may go wrong . . . enter the AI incident response plan.

  • As part of the AI policy, the company should test and refine its AI incident response plan regularly, as it would a security incident response or other crisis management plan.
  • Team members involved in the AI incident response should be trained to proactively spot issues, know their responsibilities, and rehearse how to respond.

Questions to ask:

  • Are all necessary preventive steps required by the AI incident response plan being taken?
  • Are regular backups performed?
  • Are credentials, including passwords and network credentials, properly recorded?

Conclusion

By being proactive about shaping industry standards in AI policies, companies will be able to minimize their risk and create business opportunities. As the UK’s House of Lords Select Committee on Artificial Intelligence noted, quoting various industry experts:

… “use of ethics boards and ethics reviews committees and processes within a self-regulatory framework will be important tools” … “the companies employing AI technology, to the extent they demonstrate they have ethics boards, review their policies and understand their principles, will be the ones to attract the clients, the customers, the partners and the consumers more readily than others that do not or are not as transparent about that”.

In the next part of this series, we turn to the AI incident response plan. Stay tuned…