The U.K.’s data regulator, the Information Commissioner’s Office (ICO), has issued draft guidance to help organizations explain their use of, and reliance on, artificial intelligence (AI) in decision making and how such technology might affect the public.

Working alongside The Alan Turing Institute, the U.K.’s national institute for data science and AI, the ICO has launched a consultation on the joint draft guidance, called “Explaining decisions made with AI,” which aims to give organizations practical advice to help explain to individuals the processes, services, and decisions delivered or assisted by AI.

In its interim report, released in June, the ICO found that “context” is key to the explainability of AI decisions: most respondents said that in contexts where a human would usually provide an explanation, explanations of AI decisions should be similar to human explanations. Separate ICO research released in July found that more than 50 percent of respondents were concerned about machines making complex automated decisions about them.

The ICO guidance consists of three parts:

Part 1: The basics of explaining AI defines the key concepts and outlines a number of different types of explanations about the use of AI in decision making, as well as the importance of enabling people to challenge the decisions that have been made.

Part 2: Explaining AI in practice covers the practicalities of producing explanations of these decisions and delivering them to individuals. It is aimed primarily at an organization’s technical teams, although data protection officers and compliance teams will also find it useful.

Part 3: What explaining AI means for your organization goes into the various roles, policies, procedures, and documentation organizations can put in place to ensure they are set up to provide meaningful explanations to affected individuals. This is primarily targeted at senior management teams, although data protection officers and compliance teams will also find it useful.

The draft guidance goes into detail about the different types of explanation, how to extract an explanation of the logic a system used to reach a decision, and how to deliver explanations to the people they concern. It also emphasizes the importance of using inherently explainable AI systems.
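
The guidance itself is technology-neutral and contains no code, but the idea of extracting a rationale from an inherently explainable model can be sketched briefly. The example below is illustrative only and is not part of the ICO guidance; it assumes scikit-learn and a shallow decision tree, whose decision path for a single case can be read back as plain-language reasons.

```python
# Minimal sketch (not from the ICO guidance): one way a team might extract a
# plain-language "rationale explanation" from an inherently explainable model.
# Assumes scikit-learn; the dataset and wording are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

def rationale(sample):
    """Walk the decision path for one case and describe each split in plain terms."""
    tree = model.tree_
    path = model.decision_path(sample.reshape(1, -1)).indices  # node ids visited
    reasons = []
    for node in path:
        if tree.children_left[node] == -1:  # leaf node: no further rule to report
            continue
        name = data.feature_names[tree.feature[node]]
        threshold = tree.threshold[node]
        value = sample[tree.feature[node]]
        comparison = "at or below" if value <= threshold else "above"
        reasons.append(f"{name} ({value:.2f}) is {comparison} {threshold:.2f}")
    return reasons

for reason in rationale(data.data[0]):
    print("-", reason)
```

Whatever technique is used, the resulting rules still have to be translated into language the affected person can understand, which is the point the guidance stresses.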

Six types of explanation

The ICO has identified six main types of explanation about the use of AI in decision making:

  1. Rationale explanation—the reasons that led to a decision, delivered in an accessible and non-technical way.
  2. Responsibility explanation—who is involved in the development, management, and implementation of an AI system, and whom to contact for a human review of a decision.
  3. Data explanation—what data has been used in a particular decision and how; and what data has been used to train and test the AI model (and how).
  4. Fairness explanation—the steps taken across the design and implementation of an AI system to ensure the decisions it supports are generally unbiased and fair, and whether or not an individual has been treated equitably.
  5. Safety and performance explanation—the steps taken across the design and implementation of an AI system to maximize the accuracy, reliability, security, and robustness of its decisions and behaviors.
  6. Impact explanation—the impact the use of an AI system and its decisions has or may have on an individual, as well as on wider society.

Source: Information Commissioner’s Office 

Additionally, the draft guidance lays out four key principles, rooted in the EU’s General Data Protection Regulation (GDPR), that the ICO says organizations “must consider” when developing AI decision-making systems. They are:

  1. Be transparent: Make your use of AI for decision making obvious, and appropriately explain the decisions you make to individuals in a meaningful way.
  2. Be accountable: Ensure appropriate oversight of your AI decision systems and be answerable to others.
  3. Consider context: There is no one-size-fits-all approach to explaining AI-assisted decisions.
  4. Reflect on impacts: Ask and answer questions about the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome.

The ICO’s guidance highlights the risks organizations face if they fail to inform the public about how technology-assisted decision making may affect them, such as regulatory action, reputational damage, and public distrust. But it also notes the risks for organizations that do explain how AI is being used. Providing too much information about AI-assisted decisions may increase public distrust, given the complex, and sometimes opaque, nature of the process. Too much disclosure may also expose commercially sensitive information, while sharing personal data with third parties may violate the GDPR and other data protection laws. Organizations may also need to guard against the risk that people will “game” or exploit their AI models if they know too much about the reasons underlying their decisions.

“The decisions made using AI need to be properly understood by the people they impact. This is no easy feat and involves navigating the ethical and legal pitfalls around the decision-making process built in to AI systems,” says Simon McDougall, the ICO’s executive director for technology policy and innovation.

The consultation runs until Jan. 24, 2020, and the ICO is accepting comments until then. The final version of the guidance is due to be published in 2020.