People may want to know how an AI system has reached a decision only in certain contexts, rather than seeking transparency on every AI-generated decision, according to research carried out by U.K. data regulator the Information Commissioner’s Office.

Factors such as the urgency of the decision, its impact, and its significance might outweigh a data subject’s wish to know more about the decision-making process, suggesting that a “one size fits all” approach to explaining AI-generated results is unworkable.

Neil Hodge is a freelance business journalist and photographer based in Nottingham, United Kingdom. He writes on insurance and risk management, corporate governance, internal audit, compliance, and legal...