The European Commission on Wednesday unveiled its long-awaited plans for regulating artificial intelligence (AI) and for promoting greater data sharing throughout the European Union to stimulate further growth and competition in digital services.

Both proposals spell changes, and possibly trouble, for Big Tech firms. Any additional (and onerous) Europe-wide AI regulation may conflict with rules in other jurisdictions, while forcing firms like Amazon, Google, and Facebook to share anonymized data with smaller rivals may threaten their dominant positions, as well as increase regulatory scrutiny of the way they collect, share, and sell that data in the first place.

“The Big Tech firms cannot ignore the EU’s rules as the market is too large to be excluded,” says Scott Morrison, director at consultancy Berkeley Research Group. “As new laws and regulations are put in place, these firms will undoubtedly respond and may need to change their processes in the EU or even globally.”

Some experts, however, warn there can also be negative effects if the European Union tries to set itself too far apart from the rest of the world in terms of tech regulation. “Too many jurisdictional fragmentations make it difficult for multinational companies to have a cohesive corporate strategy,” says Helga Turku, privacy and data protection consultant at HewardMills.

At the heart of the proposals is the Commission’s message that the same strict rules that govern traditional business models—consumer protection, fair competition, and data privacy—must still apply to the digital market, and that data should be treated as a tradeable commodity in much the same way as any other good.

The white paper on AI, released Wednesday and open for consultation until May 19, sets out the EU’s preferred options for a legislative framework for “trustworthy” AI to ensure the technology is not “misused.” It also says there need to be safeguards so victims of AI misuse can seek financial redress (though the Commission stops short of attempting to rewrite liability rules).

Among the changes the Commission wants to introduce is a requirement that unbiased data be used to train high-risk systems so they “perform properly” and “ensure respect of fundamental rights, in particular non-discrimination.”

It also wants to make a distinction between “high-risk” and “low-risk” AI and suggests regulation targeted at the former needs to be flexible enough to allow low-risk AI tech innovation to flourish. “Clear rules need to address high-risk AI systems without putting too much burden on less risky ones,” says the Commission.

EU rejects Facebook’s plan for regulation

The Big Tech firms have been fairly quiet since the European Union announced its data strategy Wednesday.

However, just days before, Facebook founder and CEO Mark Zuckerberg had pushed for an alternative system of regulation when he appeared in Brussels on Monday.

In a white paper called Charting the Way Forward: Online Content Regulation, Zuckerberg said there needed to be a global view of what is permissible rather than differing national views, and that internet companies should not face any liability for content on their platforms (as this would limit free speech).

He added that the best way to ensure an appropriate balance of safety and freedom of expression is to require internet companies to maintain systems and procedures to combat illegal content, and to require them to periodically publish enforcement decisions regulators have handed down against them. Regulators could also suggest “performance targets,” such as monitoring how long it takes tech firms to remove hate speech or other harmful content and whether that removal was fast enough.

EU commissioners were left unimpressed. Thierry Breton, the commissioner overseeing the EU’s data strategy, said Zuckerberg’s plan was “not enough. It’s too slow, it’s too low in terms of responsibility and regulation,” adding that it avoided other European concerns with the way the social media firm operates, namely its market dominance.

Vera Jourova, the Commission’s vice-president in charge of transparency and values, said the proposal ignored EU concerns over the lack of transparency regarding how some of Facebook’s algorithms work.

Facebook was scheduled to release a press statement in response to the launch of the EU’s data strategy Wednesday but has so far referred journalists to its white paper instead.

For high-risk cases, such as in health, policing, or transport, the Commission says AI systems should be “transparent, traceable and guarantee human oversight.” Authorities should be able to test and certify the data used by algorithms in the same way as they would check cosmetics, cars, or toys, for example.

For lower-risk AI applications, the Commission envisages a voluntary labelling scheme for developers that choose to apply higher standards.

The Commission stopped short of banning the use of facial recognition software in public areas, however, even though an earlier draft of the paper had suggested doing so.

The EU’s executive body says all AI applications are welcome in the European market “as long as they comply with EU rules”—a reference to the fact that many tech firms are still more keenly focused on producing, marketing, and exploiting apps and other data-reliant tech solutions than checking to see if they comply with data privacy or competition rules before release.

A follow-up white paper on safety, liability, fundamental rights, and data around AI use will be released in the fourth quarter of this year, says the Commission. Additionally, the European Union plans by the end of the year to introduce a Digital Services Act, which will impose more responsibility upon online platforms to monitor their content and services to protect users’ rights. The European Union will also be examining ways to get Big Tech firms to pay more tax on revenue generated within the single market.

While the European Union wants to introduce rules to ensure AI development and use remain ethical, the overarching thrust of its data strategy is to open up the market, particularly so that smaller European firms can gain access. To help them get a foot in the door, the Commission makes it clear Big Tech will need to learn to share, possibly on a “FRAND” (fair, reasonable, and non-discriminatory) basis, as is used in patent licensing.

“Data should be available to all—whether public or private, big or small, start-up or giant,” says the Commission.

“In the digital age, ensuring a level playing field for businesses, big and small, is more important than ever,” it adds. “Some platforms have acquired significant scale, which effectively allows them to act as private gatekeepers to markets, customers and information. We must ensure that the systemic role of certain online platforms and the market power they acquire will not put in danger the fairness and openness of our markets.”

While the European Union may lead the way in data regulation, it lags behind in terms of creating tech giants. Of the 10 largest technology firms currently operating, eight come from the United States and two from Asia.

To stimulate European tech growth, the Commission believes there needs to be greater data sharing and reuse, particularly with regards to that held by public bodies. As such, it wants to develop a legislative framework and operating standards to create “European data spaces” that would allow businesses, governments, and researchers to pool, store, and access each other’s data to help foster further AI and tech innovation in areas like industrial manufacturing, healthcare, and green technologies.

“Non-personal data can underpin the design and development of new, more efficient and more sustainable products and services, and they can be reproduced at virtually no cost,” said Commission President Ursula von der Leyen in a statement. “Yet today, 85 percent of the information we produce is left unused. This needs to change.”

The EU’s announcements have divided opinion among lawyers. Georgina Kon, technology, media, and telecoms partner at law firm Linklaters, believes the white paper “contains a clear warning to Big Tech that life as they know it is about to get a shake-up in the EU.”

“A key focus is on making the market for data and AI much more competitive, removing as many roadblocks as possible from the path of potential challengers, and making sure Big Tech pays more tax in Europe,” adds Kon. “This is fighting talk, aimed at protecting EU interests in a world where Silicon Valley and Chinese tech firms are perceived to have won the initial battles for AI and data supremacy.”

Ryan Dunleavy, partner at law firm Stewarts and head of its media disputes department, says that while the proposals raised in the white paper are useful, he believes they are “just the start on a long road to legal changes” and “lack sufficient detail.” For instance, he says, the white paper “only touches” on facial recognition technology and “seems to take a weak approach to it,” despite the fact it is being rapidly rolled out and “poses many existing regulatory and legal conundrums, particularly regarding data and privacy.”

He also queries the practicalities of the European Union imposing higher legal standards than the United Kingdom and the United States on AI for companies that are often multinational and operating cross-border.

“Will companies need to modify their existing AI solely for Europe, and if so, how? Alternatively, will other jurisdictions start raising their legal standards?” says Dunleavy. Until the rules around AI governance become more standardized internationally, he says, “companies are likely to need to take a mosaic approach to developing regulations and legislation affecting AI in different jurisdictions.”