
A Tech Adoption Guide for Lawyers

in partnership with Legal Tech Publishing


Towards an AI Policy and AI Incident Response Plan for Businesses (Part 1)

Part one of a three-part series on the motivation for AI policies and incident response plans for businesses.

Business Opportunities and Business Risks Posed by AI

Artificial Intelligence (“AI”) is becoming a ubiquitous part of products and services, and with that pervasiveness comes the risk of accidents or incidents involving AI that may cause financial, legal, or reputational harm to a company. There are various definitions of AI, but, generally, AI is a form of autonomous or semi-autonomous technology that performs actions people usually consider intelligent, such as speaking with customers, driving cars, recognizing faces, or granting or denying applications for services based on various factors. Indeed, some commentators, such as Professor Klaus Schwab, founder and executive chairman of the World Economic Forum, consider the adoption of AI to be a major part of the current Fourth Industrial Revolution. However, regardless of the definition of AI, the risks and opportunities posed by such technologies are real.

In this three-part series, we discuss the motivation for an AI policy and incident response plan for businesses. In part one, we discuss the current legal and business motivations for establishing an AI policy. Part two will examine themes for a business AI policy, and in part three, we discuss an AI incident response plan. Not only should a company have an affirmative and aspirational set of AI policies, it should also put in place concrete plans to respond to AI-related incidents when things go wrong.

Why Have an AI Policy and Incident Response Plan?

Currently, AI is most often used in high-profile areas of business, and not just for back-office data analysis. We are seeing it integrated into products, such as self-driving cars, and employed to make decisions on behalf of companies regarding hiring or promoting employees or to direct interactions with customers. As this trend continues, there is real social anxiety as to whether AI is being used responsibly by companies and whether people will be harmed by it. In this article, we will discuss general considerations for an AI policy and AI incident plan that could be used to mitigate the risk posed by adopting AI in businesses.

It should be noted that an AI policy and incident plan should not supplant existing policies and processes. Instead, it should improve on areas where appropriate and integrate with existing and working processes and procedures, such as software development processes, cybersecurity incident response plans, HR policies, and crisis management plans. Throughout our article, we will pose hypotheticals of AI usage to tease out the issues.

Current Laws and Rules Affecting AI Usage

While an AI policy and incident plan may not generally be required by law, there is some regulatory guidance in certain industries. For example, OSHA long ago issued its Guidelines for Robotics Safety, which require, among other things:

The proper selection of an effective robotics safety system must be based on hazard analysis of the operation involving a particular robot. Among the factors to be considered in such an analysis are the task a robot is programmed to perform, the start-up and the programming procedures, environmental conditions and location of the robot, requirements for corrective tasks to sustain normal operations, human errors, and possible robot malfunctions. 

Industries operating non-physical AI, such as banking, may be required to have cyber incident response plans (e.g., the New York State Cybersecurity Regulation (23 NYCRR 500) requires covered entities to have written security policies), and those cyber incidents may encompass AI incidents. To the extent the GDPR applies to a company, the company’s policy should account for the requirements of Article 22 of the GDPR, which controls automated processing, and Article 15 of the GDPR, which permits data subjects to receive meaningful information about the logic involved in automated decision-making related to them.
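
To make Article 15’s “meaningful information about the logic involved” concrete for technical teams, the following is a minimal, hypothetical sketch in Python of a decision record that captures the inputs, model version, outcome, and key adverse factors behind a single automated decision. The names (DecisionRecord, score_applicant) and the toy linear scoring model are illustrative assumptions, not a prescribed implementation.

# Hypothetical sketch: recording the information needed to explain an automated
# decision to a data subject or regulator. Names and the toy model are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List


@dataclass
class DecisionRecord:
    """Audit entry for a single automated decision."""
    subject_id: str
    model_version: str
    inputs: Dict[str, float]
    outcome: str
    key_factors: List[str]  # human-readable reasons behind the outcome
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


def score_applicant(subject_id: str, features: Dict[str, float]) -> DecisionRecord:
    # Toy linear "model" for illustration; a real system would call its trained model here.
    weights = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
    score = sum(weights.get(name, 0.0) * value for name, value in features.items())
    outcome = "approved" if score >= 1.0 else "denied"
    # Rank the factors that pulled the score down, so a meaningful explanation can be given.
    contributions = {name: weights.get(name, 0.0) * value for name, value in features.items()}
    adverse = sorted(contributions, key=contributions.get)[:2]
    return DecisionRecord(subject_id, "credit-model-v1", features, outcome, adverse)


record = score_applicant("applicant-123", {"income": 1.2, "debt_ratio": 1.5, "years_employed": 2.0})
print(record.outcome, record.key_factors)  # e.g., denied ['debt_ratio', 'income']

A record of this kind, kept for each automated decision, gives a company something substantive to surface when a data subject asks why a decision was made about them.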

Moreover, AI researchers and regulators, especially in the financial industry, have highlighted the danger of AI bias and the need for explainable AI. Existing law prohibits bias in certain financial decisions and requires explanations for adverse decisions that may be made by AI systems. Under the Fair Credit Reporting Act (“FCRA”), 15 U.S.C. § 1681 et seq., among other requirements, any financial institution that uses a credit report or another type of consumer report to deny a consumer’s application for credit, insurance, or employment – or to take another adverse action against the consumer – must tell the consumer and must give the consumer the name, address, and phone number of the agency that provided the information. Upon a consumer’s request for a credit score, a consumer reporting agency must supply a statement and notice that includes “all of the key factors that adversely affected the credit score of the consumer in the model used,” and any consumer reporting agency must provide trained personnel to explain to the consumer any information required to be furnished under the Act (15 U.S.C. § 1681g(f) and (g); see also 15 U.S.C. § 1681m for the requirements of adverse action notices). And the Equal Credit Opportunity Act (“ECOA”), 15 U.S.C. § 1691 et seq., states:

(a) ACTIVITIES CONSTITUTING DISCRIMINATION It shall be unlawful for any creditor to discriminate against any applicant, with respect to any aspect of a credit transaction—

(1) on the basis of race, color, religion, national origin, sex or marital status, or age (provided the applicant has the capacity to contract);

(2) because all or part of the applicant’s income derives from any public assistance program; or

(3) because the applicant has in good faith exercised any right under this chapter.

Accordingly, not only would it be prudent for a company to enact AI policies to minimize impermissible bias and promote explainability; in certain industries it may be required. Certain professions may also require their members not to engage in discriminatory acts, and these anti-bias concerns may also be part of a professional’s policies. For example, the American Bar Association’s Model Rules of Professional Conduct Rule 8.4(g), which has been adopted by one state and is being considered by numerous others, states that:

It is professional misconduct for a lawyer to:

… 

(g) engage in conduct that the lawyer knows or reasonably should know is harassment or discrimination on the basis of race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status in conduct related to the practice of law.

Given the above context and the increasingly high-profile incorporation of AI into daily business operations, as well as the public’s heightened anxiety around its use, it is important for companies to have a comprehensive AI policy to guide safe and responsible adoption of AI as well as an AI incident response plan to respond if things go astray.
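
As a concrete illustration of what minimizing impermissible bias can involve in practice, below is a minimal, hypothetical sketch in Python of a disparate-impact check that compares approval rates across groups against the commonly cited four-fifths (80%) threshold. The sample data, group labels, and threshold are assumptions for illustration only; a real fairness audit would be considerably more involved and should be designed with counsel.

# Hypothetical sketch: a simple disparate-impact check of the kind an AI policy
# might require before deploying a model that grants or denies applications.
from collections import defaultdict
from typing import Dict, Iterable, Tuple


def selection_rates(decisions: Iterable[Tuple[str, bool]]) -> Dict[str, float]:
    """Compute the approval rate per group from (group, approved) pairs."""
    totals: Dict[str, int] = defaultdict(int)
    approvals: Dict[str, int] = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {group: approvals[group] / totals[group] for group in totals}


def adverse_impact_ratios(decisions: Iterable[Tuple[str, bool]], threshold: float = 0.8) -> Dict[str, Tuple[float, bool]]:
    """Flag groups whose approval rate falls below `threshold` times the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {group: (rate / best, rate / best < threshold) for group, rate in rates.items()}


# Illustrative sample: group_a approved 60 of 100 applications, group_b approved 35 of 100.
sample = ([("group_a", True)] * 60 + [("group_a", False)] * 40
          + [("group_b", True)] * 35 + [("group_b", False)] * 65)
print(adverse_impact_ratios(sample))
# group_a ratio 1.0 (not flagged); group_b ratio ~0.58 (flagged for review)

An AI policy might, for example, require a check of this kind before a decision-making model is deployed and at regular intervals thereafter.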

Industry Standards For AI Usage

There are many policies proposed by specific companies and industry groups that can inform a company’s own AI policies, but to date there are no clear-cut AI industry standards similar to the accessibility standards recommended by the W3C. However, technology firms are quickly moving to put ethical guardrails around AI. The various current policies do share some common themes: fairness, transparency, anti-bias, and accountability to people.

Under Microsoft’s “FATE: Fairness, Accountability, Transparency, and Ethics in AI,” the company focuses on those four themes. In addition, a search of Microsoft’s publications on ethics shows the company’s concerns about human-computer interaction, compliance, big data usage, and more.

Google has set forth its “AI at Google: our principles,” which among other things aspires to:

  1. Be socially beneficial. 
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.  

Google is also actively engaging stakeholders on the issue of AI governance.

IBM’s “Everyday Ethics for Artificial Intelligence: A practical guide for designers & developers” sets forth five Areas of Ethical Focus: Accountability, Value Alignment, Explainability, Fairness, and User Data Rights.

IEEE’s Ethically Aligned Design states:

The ethical design, development, and implementation of these technologies should be guided by the following General Principles:

  • Human Rights: Ensure they do not infringe on internationally recognized human rights
  • Well-being: Prioritize metrics of well-being in their design and use
  • Accountability: Ensure that their designers and operators are responsible and accountable
  • Transparency: Ensure they operate in a transparent manner
  • Awareness of misuse: Minimize the risks of their misuse

The AAAI and the ACM have held a Conference on AI, Ethics, and Society to discuss some of the above issues.

The non-profit organization OpenAI was established with the core mission “to build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible.” Recently, OpenAI decided to withhold publishing the source code for its recent text generator, deeming it too dangerous for publication because it could be used to, among other things, create fake news.

Wired reported that:

Google, too, has decided that it’s no longer appropriate to innocently publish new AI research findings and code. Last month, the search company disclosed in a policy paper on AI that it has put constraints on research software it has shared because of fears of misuse. The company recently joined Microsoft in adding language to its financial filings warning investors that its AI software could raise ethical concerns and harm the business.

The technology industry should not be the only industry concerned with the ethical usage of AI. While particular industries may require different ethical standards (because, for example, their companies more directly affect consumers, make financial decisions, impact the health and safety of the public, or enhance access to resources by those with disabilities), all companies that use or produce AI should take notice of this issue.

AI Ethical Standards Being Examined by Governments

It should be noted that some governments are grappling with AI policies, and those developments can and should inform a company’s AI policy. Companies that perform services for governments may in the future be contractually required to follow these standards. Moreover, as regulators increasingly examine companies’ usage of AI, they will look to the standards those companies have themselves adopted when considering laws and regulatory actions. The themes of these governmental AI policies include safety, transparency and explainability of AI decisions, human-centered decision making, fairness and anti-bias, and protection of personal rights against AI acts.

As an example, California has adopted ACR-215, endorsing the 23 Asilomar AI Principles (2017-2018):

Section II: Ethics and Values

(6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

(7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

(8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

(9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

(10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

(11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

(12) Personal Privacy: People should have the right to access, manage, and control the data they generate, given AI systems’ power to analyze and utilize that data.

(13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

(14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

(15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

(16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

(17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

(18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

In another example, New York City has passed its law on automated decision systems used by agencies (2018), which establishes a task force to make recommendations on the following:

(a) Criteria for identifying which agency automated decision systems should be subject to one or more of the procedures recommended by such task force pursuant to this paragraph;

(b) Development and implementation of a procedure through which a person affected by a decision concerning a rule, policy or action implemented by the city, where such decision was made by or with the assistance of an agency automated decision system, may request and receive an explanation of such decision and the basis therefor;

(c) Development and implementation of a procedure that may be used by the city to determine whether an agency automated decision system disproportionately impacts persons based upon age, race, creed, color, religion, national origin, gender, disability, marital status, partnership status, caregiver status, sexual orientation, alienage or citizenship status;

(d) Development and implementation of a procedure for addressing instances in which a person is harmed by an agency automated decision system if any such system is found to disproportionately impact persons based upon a category described in subparagraph (c);                      

(e) Development and implementation of a process for making information publicly available that, for each agency automated decision system, will allow the public to meaningfully assess how such system functions and is used by the city, including making technical information about such system publicly available where appropriate; and

(f) The feasibility of the development and implementation of a procedure for archiving agency automated decision systems, data used to determine predictive relationships among data for such systems and input data for such systems, provided that this need not include agency automated decision systems that ceased being used by the city before the effective date of this local law.

Across the Atlantic, the UK House of Lords Select Committee on Artificial Intelligence, in its Report of Session 2017–19, issued “AI in the UK: ready, willing and able?”, which advises:

[W]e suggest five overarching principles for an AI Code:

(1) Artificial intelligence should be developed for the common good and benefit of humanity.

(2) Artificial intelligence should operate on principles of intelligibility and fairness.

(3) Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

(4) All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

(5) The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Similarly, but in more detail, the European Commission’s High-Level Expert Group on Artificial Intelligence, in its Draft Ethics Guidelines for Trustworthy AI (Executive Summary, Working Document for stakeholders’ consultation, Brussels, 18 December 2018), set forth the following guidance:

Chapter I: Key Guidance for Ensuring Ethical Purpose:

– Ensure that AI is human-centric: AI should be developed, deployed and used with an “ethical purpose”, grounded in, and reflective of, fundamental rights, societal values and the ethical principles of Beneficence (do good), Non-Maleficence (do no harm), Autonomy of humans, Justice, and Explicability. This is crucial to work towards Trustworthy AI.

– Rely on fundamental rights, ethical principles and values to prospectively evaluate possible effects of AI on human beings and the common good. Pay particular attention to situations involving more vulnerable groups such as children, persons with disabilities or minorities, or to situations with asymmetries of power or information, such as between employers and employees, or businesses and consumers.

– Acknowledge and be aware of the fact that, while bringing substantive benefits to individuals and society, AI can also have a negative impact. Remain vigilant for areas of critical concern.

Chapter II: Key Guidance for Realising Trustworthy AI:

– Incorporate the requirements for Trustworthy AI from the earliest design phase: Accountability, Data Governance, Design for all, Governance of AI Autonomy (Human oversight), Non-Discrimination, Respect for Human Autonomy, Respect for Privacy, Robustness, Safety, Transparency.

– Consider technical and non-technical methods to ensure the implementation of those requirements into the AI system. Moreover, keep those requirements in mind when building the team to work on the system, the system itself, the testing environment and the potential applications of the system.

– Provide, in a clear and proactive manner, information to stakeholders (customers, employees, etc.) about the AI system’s capabilities and limitations, allowing them to set realistic expectations. Ensuring Traceability of the AI system is key in this regard.

– Make Trustworthy AI part of the organisation’s culture, and provide information to stakeholders on how Trustworthy AI is implemented into the design and use of AI systems. Trustworthy AI can also be included in organisations’ deontology charters or codes of conduct.

– Ensure participation and inclusion of stakeholders in the design and development of the AI system. Moreover, ensure diversity when setting up the teams developing, implementing and testing the system.

– Strive to facilitate the auditability of AI systems, particularly in critical contexts or situations. To the extent possible, design your system to enable tracing individual decisions to your various inputs; data, pre-trained models, etc. Moreover, define explanation methods of the AI system.

– Ensure a specific process for accountability governance.

– Foresee training and education, and ensure that managers, developers, users and employers are aware of and are trained in Trustworthy AI.

– Be mindful that there might be fundamental tensions between different objectives (transparency can open the door to misuse; identifying and correcting bias might contrast with privacy protections). Communicate and document these trade-offs.

– Foster research and innovation to further the achievement of the requirements for Trustworthy AI.

Chapter III: Key Guidance for Assessing Trustworthy AI

– Adopt an assessment list for Trustworthy AI when developing, deploying or using AI, and adapt it to the specific use case in which the system is being used.

– Keep in mind that an assessment list will never be exhaustive, and that ensuring Trustworthy AI is not about ticking boxes, but about a continuous process of identifying requirements, evaluating solutions and ensuring improved outcomes throughout the entire lifecycle of the AI system.

This guidance forms part of a vision embracing a human-centric approach to Artificial Intelligence, which will enable Europe to become a globally leading innovator in ethical, secure and cutting-edge AI. It strives to facilitate and enable “Trustworthy AI made in Europe” which will enhance the well-being of European citizens.
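
For readers on the technical side, the traceability and auditability guidance above can be made concrete with something as simple as a decision lineage log. The following hypothetical sketch in Python fingerprints the artifacts behind each decision (training data, pre-trained model) and appends a record to an audit log; the file paths, field names, and log format are assumptions for illustration only.

# Hypothetical sketch: making individual AI decisions traceable to their inputs.
# Fingerprinting the training data and model artifact lets an auditor later confirm
# exactly which versions were in use when a contested decision was made.
import hashlib
import json
from datetime import datetime, timezone


def fingerprint(path: str) -> str:
    """Return a SHA-256 digest of an artifact (dataset, pre-trained model, config)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def trace_decision(decision_id: str, outcome: str, artifacts: dict, log_path: str = "decision_trace.jsonl") -> None:
    """Append a lineage record linking one decision to the artifacts that produced it."""
    entry = {
        "decision_id": decision_id,
        "outcome": outcome,
        "artifact_digests": {name: fingerprint(path) for name, path in artifacts.items()},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")


# Example usage (paths are placeholders):
# trace_decision("loan-4711", "denied",
#                {"training_data": "data/train_2019q1.csv", "model": "models/credit_v3.pkl"})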

Conclusion

As the reader will see in parts two and three of this series, some of the above legal, governmental, and industry proposals and concerns are echoed in our AI policy and incident response plan suggestions, and a company may wish to be guided by them. In the next two parts of our series, we will discuss specific themes that could be covered in a business AI policy and what an AI incident response plan might look like. Stay tuned…