People may only want to know how an AI system has reached a decision in certain “contexts,” rather than seeking transparency on every AI-generated decision, according to research carried out by U.K. data regulator the Information Commissioner’s Office.

Factors such as the urgency of the decision, its impact, and its significance might outweigh a data subject’s wish to know more about the decision-making process, suggesting that a “one size fits all” approach to explaining AI-generated results is unworkable.

Such an attitude might benefit the AI industry, which said that while it “felt confident” it could technically explain the decisions made by AI, such “explainability” can be hampered by cost, “commercial sensitivities,” and the potential for “gaming” or abuse of systems.

The lack of a standard approach to establishing internal accountability for explainable AI decision systems also emerged as a challenge for developers, said the Information Commissioner’s Office (ICO).

What does the GDPR say about AI decision making?

 

The EU’s General Data Protection Regulation (GDPR) is “technology neutral,” so it does not directly reference AI. However, it has a significant focus on large-scale automated processing of personal data, specifically addressing the use of automated decision making. As such, several provisions within the regulation are highly relevant to the use of AI for decision making.

 

For instance, Article 5(1)(a) requires personal data processing to be lawful, fair, and transparent, while Articles 13-15 give individuals the right to be informed of the existence of solely automated decision making, to receive meaningful information about the logic involved, and to learn the significance and envisaged consequences of such processing for the individual.

 

Meanwhile, Article 35 requires organizations to carry out data protection impact assessments (DPIAs) when their processing of personal data, particularly where new technologies are used, is likely to result in a high risk to individuals.

 

But it is Article 22 that might prove more problematic, both for developers and the organizations that use the AI systems.

 

Article 22 gives individuals the right not to be subject to a solely automated decision producing legal or similarly significant effects, and Article 22(3) obliges organizations to adopt suitable measures to safeguard individuals when making solely automated decisions, including the right to obtain human intervention, to express their point of view, and to contest the decision.

 

Recital 71 provides interpretative guidance on Article 22. It says individuals should have the right to obtain an explanation of a solely automated decision after it has been made.

 

Source: Information Commissioner’s Office

The findings appear to be at odds with the concerns of data regulators, particularly in Europe, who want to ensure that transparency, fairness, and accountability “remain core” to how personal data is used in AI decision-making systems.

European Data Protection Supervisor Giovanni Buttarelli told attendees at the 40th International Conference of Data Protection and Privacy Commissioners in Brussels last October about the need to understand the ethics behind increased AI usage and how technologies use data to inform decision making. “We are fast approaching a period where design, deployment, and control of new technologies and technological processes are delegated to machines,” he warned.

Last year the U.K. government asked the ICO and The Alan Turing Institute, the country’s national institute for data science and artificial intelligence, to produce practical guidance for organizations to assist them with explaining AI decisions to the individuals affected. The initiative is called “Project ExplAIn.”

In an interim report released June 3, the ICO found that the importance of providing explanations to individuals—and the reasons for wanting them—changes dramatically depending on what the decision is about.

Feedback from “citizen juries” (effectively, members of the public who heard evidence from AI experts) showed that jurors felt explaining an AI decision to the individual affected was more important in areas such as recruitment and criminal justice than in healthcare, for example.

This was because jurors wanted more detailed explanations about decisions made in a recruitment or criminal justice context so they could challenge them, learn from them, and check they had been treated fairly.

In healthcare settings, however, jurors preferred to know a decision was accurate rather than why it was made.

The ICO said the findings showed there is a “need for improved education and awareness around the use of AI for decision making.” It added that board-level buy-in on explaining AI decisions is also needed, as is a “standardized approach to internal accountability to help assign responsibility for explainable AI decision systems.”

In the interim report, the ICO outlined a number of areas likely to be in the first draft of the Project ExplAIn guidance, which it aims to put out for consultation this summer before releasing a final version in autumn.

These include setting out the legal requirements around AI decisions, covering data protection and other relevant regimes, as well as enshrining corporate responsibility in the three key principles of transparency, context, and accountability.

The guidance will also likely include sections on organizational controls (such as the reporting lines around AI decision making, as well as policies on training and risk management) and technical controls on data collection and use.

The first draft will also feature a section on “explanation delivery,” outlining the “contexts” in which data subjects will be told how decisions are made.