BRUSSELS—EU data regulators are increasingly concerned about the potential impact that organisations’ growing reliance on artificial intelligence and algorithms could have on business decision making.

Regulators are concerned that management accountability will be impaired if companies delegate too much of their responsibility for decision making to machines. They also admit that this “grey area” presents problems for them about how best to hold companies to account if algorithms, AI, and machine-learning technologies are largely responsible for how personal data is used (or misused).

Speaking at the 40th International Conference of Data Protection and Privacy Commissioners in October, U.K. Information Commissioner Elizabeth Denham said that companies’ ability to ensure transparency, fairness, and accountability “remain core” to how personal data is used—and protected—in the digital world, adding that “regulators can’t tackle these issues on our own.”

She also said that organisations need to “think seriously” about how they explain to customers and stakeholders that machines are in charge of how people’s data is used and for what purpose.

“Companies need to explain to customers, their boards, investors, and regulators why algorithmic solutions are being used to make important decisions, and to what extent,” said Denham. “They also need to explain what controls are in place to ensure that the decisions made due to these algorithms are in the company’s best interests and that personal data is not being misused in the process.”

She added that “few companies think about this, currently.”

Giovanni Buttarelli, European Data Protection Supervisor, had earlier told attendees at the conference of the need to understand the ethics around increased AI use and how technologies use data to inform decision making. “We are fast approaching a period where design, deployment, and control of new technologies and technological processes are delegated to machines,” he warned.

Buttarelli flagged several areas in which algorithmic decision making has been left to machines, such as in killer drones and criminal sentencing, and by social media companies “whose unaccountable algorithmic decision making has been weaponised by bad actors in ethnic conflict zones, with at times appalling human consequences, notably in Myanmar.”

Regulators have already agreed a response. On 23 October, data commissioners from several EU member states, as well as Canada, Hong Kong, Argentina, and the Philippines, agreed to a set of guiding principles on ethics, monitoring, and enforcement for AI. The principles underline that the development and increased use of the technology must ensure fairness, transparency, and accountability, and that users must retain control over their own data.

Leading tech companies are beginning to question their use of personal data, as well as how technology uses it. Apple CEO Tim Cook told attendees that “advancing AI by collecting huge personal profiles is laziness, not efficiency” and said that “platforms and algorithms that promised to improve our lives can actually magnify our worst human tendencies.”

In May, Facebook—following criticism surrounding the Cambridge Analytica furore—announced that it is testing a tool called “Fairness Flow” that it hopes can determine whether a machine learning algorithm is biased against certain groups of people based on race, gender, or age.
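The internals of Fairness Flow have not been made public, but a minimal sketch can illustrate the kind of check such a tool might run: comparing a model's positive-prediction rate across demographic groups, a metric often called the demographic parity gap. Everything below (the group labels, the toy predictions, the helper names) is hypothetical and for illustration only.

```python
# Hypothetical sketch of a group-fairness check; not Facebook's actual tool.
# It measures the "demographic parity gap": the largest difference in
# positive-prediction rate between any two demographic groups.

def positive_rate(predictions):
    """Fraction of binary predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Toy example: a model's binary decisions for two (made-up) age groups.
preds = {
    "under_40": [1, 1, 0, 1],  # 75% positive decisions
    "over_40":  [1, 0, 0, 0],  # 25% positive decisions
}

gap = demographic_parity_gap(preds)
print(f"Demographic parity gap: {gap:.2f}")
```

A large gap between groups is a signal worth investigating, not proof of bias on its own; real audits also weigh base rates, error rates per group, and the context of the decision.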

However, Pascale Fung, professor at the Department of Electronic & Computer Engineering, Hong Kong University of Science and Technology, urged caution when considering regulating algorithms and their development and use.

“When people talk about regulating algorithms, I don’t know what they mean,” she said. “In terms of design, the same algorithms that have influenced politics on social media are technically similar in many ways to those that are being used to develop breakthroughs in medicine and research. We must not regulate for the sake of regulating. In the long run that can be just as harmful.”