The U.K.’s Information Commissioner’s Office (ICO) this week released guidance to help organizations explain how artificial intelligence is used in decision-making and how the technology uses personal data to form judgments.

The 122-page publication, called “Explaining decisions made with AI” and written in conjunction with The Alan Turing Institute, the U.K.’s national center for AI, aims to ensure organizations can be transparent about how AI-generated decisions are made, as well as establish clear accountability for who can be held responsible for them, so that affected individuals can ask for an explanation.

The guidance consists of three parts:

  • Part 1 on “The basics of explaining AI” is aimed at organizations’ designated data protection officers (DPOs) and compliance teams and defines the key concepts.
  • Part 2 on “Explaining AI in practice,” which helps organizations with the practicalities of explaining these decisions and providing explanations to individuals, is aimed at technical teams, though the ICO says DPOs and compliance teams will also find it useful.
  • Part 3 on “What explaining AI means for your organization” is primarily aimed at senior management and covers the roles, policies, procedures, and documentation organizations can put in place to ensure they are set up to provide meaningful explanations to affected individuals. However, compliance functions will also find it useful.

Below are six key takeaways from the guidance:

1. Data protection law is technology neutral. It does not directly reference AI or any associated technologies such as machine learning. However, the General Data Protection Regulation (and the U.K.’s 2018 Data Protection Act) does have a significant focus on large-scale automated processing of personal data, and several provisions specifically refer to the use of profiling and automated decision-making. This means data protection law applies to the use of AI to provide a prediction or recommendation about someone.

For example, the GDPR has specific requirements around the provision of information about, and an explanation of, an AI-assisted decision where: 

  • It is made by a process without any human involvement; and
  • It produces legal or similarly significant effects on an individual (something affecting an individual’s legal status/rights, or that has equivalent impact on an individual’s circumstances, behavior, or opportunities, such as a decision about welfare or a loan).

2. The guidance says any explanation of how AI is used in decision-making needs to address both the process by which a decision is made and the outcome that is reached. The ICO has also identified six main types of explanation (an illustrative sketch follows the list):

  • Explanation regarding the rationale behind the decision;
  • Explanation regarding who is responsible for making the decision;
  • Explanation regarding what data has been used to make the decision and how;
  • Explanation of the steps taken to ensure the decision was made fairly;
  • Explanation providing reassurance that the AI system is performing safely; and
  • Explanation of how the AI system is being monitored for its impact on individuals and society.
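
To keep these six types in view during development, one option (an illustration here, not something the guidance prescribes) is to encode them as a simple enumeration that documentation, logging, and review checklists can reference. A minimal Python sketch, with member summaries paraphrasing the list above:

```python
from enum import Enum

class ExplanationType(Enum):
    """The ICO's six explanation types, encoded for reference.

    The enum structure itself is an illustrative assumption; the
    member summaries paraphrase the guidance's list.
    """
    RATIONALE = "the reasons behind the decision"
    RESPONSIBILITY = "who is responsible for making the decision"
    DATA = "what data was used to make the decision, and how"
    FAIRNESS = "the steps taken to ensure the decision was made fairly"
    SAFETY = "reassurance that the AI system is performing safely"
    IMPACT = "how the system is monitored for its impact on individuals and society"
```

A decision record could then log which explanation types have been prepared for a given system, making any gaps visible well before an audit.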

3. To ensure the decisions you make using AI are explainable, the guidance says organizations should follow four principles: be transparent; be accountable; consider the context you are operating in; and reflect on the impact of your AI system on the individuals affected, as well as wider society.

4. To help design and deploy appropriately explainable AI systems, the ICO guidance outlines six tasks organizations should carry out to meet customer and regulatory expectations about how personal data is gathered, processed, and used in decision-making. They are:

  • Select priority explanations by considering the domain, use case, and impact on the individual;
  • Collect and pre-process your data in an explanation-aware manner;
  • Build your system to ensure you are able to extract relevant information for a range of explanation types (see the sketch after this list);
  • Translate the rationale of your system’s results into usable and easily understandable reasons;
  • Prepare implementers to deploy your AI system; and
  • Consider how to build and present your explanation.
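
To make the third and fourth tasks concrete, here is a minimal, hypothetical Python sketch of extracting rationale information from a model and translating it into plain-language reasons. The model, feature names, and wording are assumptions for illustration, not anything the ICO prescribes; a production system would use a per-decision attribution method (such as SHAP values) rather than the global feature importances used here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature names for a loan-decision example.
FEATURE_NAMES = ["income", "existing_debt", "years_at_address", "missed_payments"]

# Stand-in model trained on synthetic data; a real system would use its own data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def rationale_explanation(model, applicant, feature_names, top_n=2):
    """Sketch a plain-language rationale for a single decision.

    Uses the model's global feature importances as a crude proxy for
    what drove the decision; a per-decision attribution method would
    be more faithful in practice.
    """
    decision = "approved" if model.predict([applicant])[0] == 1 else "declined"
    ranked = sorted(zip(feature_names, model.feature_importances_),
                    key=lambda pair: pair[1], reverse=True)
    top = ", ".join(name for name, _ in ranked[:top_n])
    return f"The application was {decision}. The most influential factors were: {top}."

print(rationale_explanation(model, X[0], FEATURE_NAMES))
```

The separation matters: extracting evidence from the model (task three) is a distinct step from translating that evidence into reasons a decision recipient can understand (task four), and both need to be designed in from the start rather than bolted on.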

5. At the core of the guidance is the need for organizations to ensure transparency about how decisions are made and accountability about who is responsible for them—including the product manager, implementer, AI development team, compliance function, DPO, and senior management.

The ICO suggests compliance teams (including the DPO) and senior management should expect assurances from the product manager that the system the organization is using provides the appropriate level of explanation to decision recipients. Furthermore, compliance and senior management should ensure they have a “high level” understanding of the systems and types of explanations these AI systems should and do produce.

Additionally, says the ICO, there may be occasions when the DPO and/or compliance functions need to interact directly with decision recipients—for example, if a complaint has been made. In these cases, compliance teams will need a more detailed understanding of how a decision has been reached, and they will need to be trained on how to convey this information appropriately to affected individuals.

6. Compliance functions will need to be aware that their organization’s AI systems may be subject to external audit (perhaps even by the ICO) to assess whether they comply with data protection law. During such an audit, organizations will need to produce all the documentation they have prepared, as well as evidence of the testing they have undertaken, to show the AI system can provide the required types of explanation in a form that can be suitably understood by those overseeing and monitoring the system, by regulators, and by those affected by its decisions (decision recipients).