It has been four years since the European Union’s flagship data privacy legislation came into force, but concerns are already being raised about whether the General Data Protection Regulation (GDPR) is being outpaced by new technologies and the ways they use data.

Data protection authorities (DPAs) broadly believe the regulation’s underlying principles of lawfulness, fairness, and transparency make it “future-proof” enough to cover developments in artificial intelligence (AI), machine learning, cloud computing, and big data in a way its predecessor, the 1995 EU Data Protection Directive, failed to do.

Many legal experts also believe the GDPR is flexible enough to cope with emerging technologies. Will Richmond-Coggan, director and a specialist in data protection and new technology at law firm Freeths, said, “Although it is often presented as a conflict, the reality is there is very little which technology might make possible that the (U.K. or EU) GDPR would outright prohibit.”

James Castro-Edwards, privacy and cyber counsel at law firm Arnold & Porter, said, “While the GDPR may not have been drafted with these new technologies specifically in mind, the broad principles of lawfulness, fairness, and transparency still apply, along with a number of additional requirements for higher risk processing.”

Others are less sure. Some experts believe the GDPR is stifling data innovation and technology adoption. A report released earlier this month by the National Bureau of Economic Research, a nonprofit research organization based in Cambridge, Mass., claimed the legislation has killed off just under a third (32.4 percent) of the apps available on the Google Play Store, while new Android app development has since halved because of compliance concerns and the risk of large fines.

Part of the problem might be that both developers and users of new technologies are uncertain which practices would be tolerated, as different DPAs have so far taken different enforcement approaches and/or prioritized specific violations as more harmful than others. An acknowledged lack of consistency in cross-border investigations and fines also raises more questions than answers about what the safe limits of data-driven technologies should be.

As a result, said Camilla Winlo, head of data privacy at consultant Gemserv, organizations are still grappling with how to implement AI in practice.

“It can be operationally difficult to collect informed consent, and it can also be difficult to fully understand the risks associated with a processing activity and the ways individuals will react to AI-driven outcomes,” she said. “When data protection rules are difficult to apply in practice, organizations can fall into the trap of believing that avoiding them is a pragmatic approach.”

When GDPR and new tech clash

The European Union, national DPAs, and European governments are trying to resolve conflicts between the GDPR and tech development and use. Regulators are providing proactive support through sandboxes or by publishing guidance on specific topics like automated decision-making, while the European Commission is developing new regulations such as the EU Data Act, Data Governance Act, and planned legislation for AI to clarify what is expected and what will be illegal.

Several businesses have faced penalties for violating citizens’ data rights when using new technology. In 2019, the Dutch DPA fined a company 725,000 euros (then-U.S. $798,000) for using biometric (fingerprint) data when less intrusive means were available.

In February, Hungary’s DPA imposed its highest penalty to date, fining Budapest Bank approximately 650,000 euros (then-U.S. $742,000) for using voice-analysis AI systems to assess the emotional state of customers who telephoned its call center and to monitor complaint handling. While recording and storing the call data was not illegal, the regulator said the bank did not carry out a data protection impact assessment to mitigate the potentially high risks to customers, who were neither informed of the use of AI nor given the opportunity to object to it.

Earlier this week, the U.K. Information Commissioner’s Office fined Clearview AI more than 7.5 million pounds (U.S. $9.4 million) for collecting people’s images from the internet and social media sites without their knowledge or consent to create a global online database that could be used for facial recognition in criminal investigations.

To avoid violating the GDPR when using technologies such as biometrics, AI, and machine learning, Sharad Patel, partner at PA Consulting, said companies need to improve their awareness of regulatory enforcement trends by “continuously horizon scanning, understanding what new regulations, guidance, and fines have been published by the regulators in the geographies they operate and process data in, and understanding the impact on their own organizations.”

He added that compliance departments need to be aware of the technologies being used across their organizations, “as sometimes new AI technologies are deployed and implemented without their knowledge.” It is also important to review privacy frameworks on a regular basis (once every six months) and expand them to include items such as data ethics and fair use of AI policies and guidance, Patel said.

Further, he said, it is important that all business units are made aware of the privacy risks of using new technologies and given specific guidance to avoid them before any new systems are deployed.

Lauren Wills-Dixon, solicitor and privacy legislation expert at law firm Gordons, said, “Creating an audit trail and being able to justify business decisions—rather than implementing AI and new technologies without thought—is integral to compliance and to defending claims.” She advised companies to use the tools and other resources made available to them by data regulators “to undertake appropriate assessments and fully document their analysis of the effects of these technologies on individual privacy rights as required under the GDPR.”

Richmond-Coggan, however, said the onus is on developers and the companies that use new technologies to ensure systems and processes are GDPR compliant before they start using them.

“The nature of the legislation is that it requires you to design in safeguards and protections from the ground up. It is often very difficult to achieve meaningful compliance when you are trying to bolt it on as an afterthought,” he said.

He added, “As a quick rule of thumb, I always ask clients to think about whether they would be comfortable explaining to someone what use they are making of their data. If they are not, it is usually a sign there is something that needs to be thought about again or that is an area of risk.”