For more than a decade, the European Union has put regulating artificial intelligence (AI) at the heart of its digital agenda. Beginning next year, a raft of legislation is set to take effect.

Aimed at protecting consumers from any unintended consequences that AI technologies and machine learning (ML) might create, the AI Act primarily targets the largest tech platforms capable of the greatest harms, while also seeking to level the playing field so new companies can enter the market without being swallowed up. The rules will also allow businesses more generally to flag unfair practices, such as Big Tech firms’ use (and abuse) of targeted advertising to drive sales from customer data collected without users’ consent.

The European Commission—the bloc’s executive body—published its draft proposal for the AI Act in April 2021. The legislation, which is industry neutral and has extraterritorial application, seeks to remedy existing fragmentation in the regulation of AI across the European Union, as well as address concerns around potential risks posed by unregulated uses of AI-based technologies.

The AI Act follows a risk-based approach and regulates AI systems in accordance with the level of risk they present. There are four bands:

  • “Minimal” risks, where the risk of harm is so low that such systems are not regulated under the act.
  • “Limited” risks, which are subject to certain transparency requirements but are not expected to pose serious harm.
  • “High” risks, which could include those that evaluate consumer creditworthiness, assist with recruiting or managing employees, or use biometric identification.
  • “Unacceptable” risks, which are so potentially harmful they are banned from use.

Companies breaching the rules would face fines of up to 6 percent of global turnover or 30 million euros (U.S. $30 million), whichever is higher.
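
As an illustration only, the penalty cap is simply the higher of those two figures. A minimal sketch in Python (the turnover figure in the example is hypothetical, not drawn from the act):

    # Illustrative sketch only: the act's headline penalty is the higher of
    # 6 percent of global annual turnover or 30 million euros. The turnover
    # figure in the example call below is hypothetical.
    def max_ai_act_fine(global_turnover_eur: float) -> float:
        return max(0.06 * global_turnover_eur, 30_000_000)

    # A hypothetical firm with 2 billion euros in global turnover:
    # 6% of 2,000,000,000 = 120,000,000, which exceeds 30,000,000.
    print(max_ai_act_fine(2_000_000_000))  # 120000000.0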

The AI Act is expected to become law in 2023 (with a transition period).

Three-part package

U.K. charts its own path on AI regulation

In July, the U.K. government announced its proposals for the future regulation of artificial intelligence (AI) innovation and how companies should use the technology responsibly.

Instead of giving responsibility for AI governance to a central regulatory body (as the European Union is doing through its AI Act), the U.K.’s proposals will allow different regulators to interpret and implement the proposed principles more flexibly, as well as take a “tailored approach” to the use of AI in a range of settings to better reflect how companies in various sectors use the technology.

Regulators, including Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority, and the Medicines and Healthcare products Regulatory Agency, will be encouraged to consider “lighter touch options,” which could include guidance, voluntary measures, or creating sandboxes for developers to test their AI tech before introducing it to market.

Under the proposals, regulators should focus on “high-risk” concerns rather than “hypothetical” or “low” risks associated with AI. The government also wants regulators to work within existing processes rather than create new ones.

The six core principles of the proposals will require developers and users to:

  • Ensure AI is used safely;
  • Ensure AI is technically secure and functions as designed;
  • Ensure AI is appropriately transparent and explainable;
  • Consider fairness;
  • Identify a legal person to be responsible for AI; and
  • Clarify routes to redress or contestability.

The feedback period ends Sept. 26.

The planned AI Act will be supplemented by two other pieces of legislation: the Digital Markets Act (DMA) and Digital Services Act (DSA).

The DMA aims to regulate online digital platforms designated as “gatekeepers”—essentially, large/dominant social media or cloud computing companies with 45 million or more monthly active users in the European Union and either EU revenues of at least €7.5 billion (U.S. $7.5 billion) or a market capitalization of at least €75 billion (U.S. $75 billion). The law is designed to impose limitations on how gatekeepers might process data; mandate the implementation of interoperability interfaces; and enhance consumers’ and business users’ rights.
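
For companies assessing whether they could be caught by these thresholds, a minimal sketch of the quantitative test as summarized above (all input figures are hypothetical, and the DMA’s full designation process involves further criteria):

    # Simplified sketch of the DMA's quantitative gatekeeper thresholds as
    # summarized above; the actual designation process involves additional
    # criteria. All example inputs are hypothetical.
    def meets_gatekeeper_thresholds(monthly_active_eu_users: int,
                                    eu_revenue_eur: float,
                                    market_cap_eur: float) -> bool:
        user_test = monthly_active_eu_users >= 45_000_000
        financial_test = (eu_revenue_eur >= 7_500_000_000
                          or market_cap_eur >= 75_000_000_000)
        return user_test and financial_test

    # Hypothetical example: 60 million EU users, 5 billion euros EU revenue,
    # 90 billion euros market cap -> thresholds met via the market cap limb.
    print(meets_gatekeeper_thresholds(60_000_000, 5_000_000_000, 90_000_000_000))  # True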

The DSA imposes new obligations on online intermediaries, such as hosting services providers and social media platforms, regarding user-generated content made available through their services. It maintains that intermediaries and platforms are exempt from liability for the online dissemination of user-generated content, so long as they comply with content moderation obligations and take down any illegal content detected on their services “without undue delay.”

The final version of the DMA was approved in July and is expected to become applicable by April 2023. Gatekeepers are expected to comply with the law’s obligations and requirements by February 2024.

The DSA is expected to take effect toward the end of 2022 and become fully applicable by mid-2024.

Legal experts believe the three upcoming laws are aimed primarily at holding global tech firms accountable for practices that have so far evaded competition and data regulators.

William Long, global co-leader of law firm Sidley’s privacy and cybersecurity practice and head of its EU data protection group, said companies will need to “determine whether they fall within scope of one or more of these digital laws” and “start assessing what effect they may have on the business and what the impact may be from a compliance, product, and resources perspective.”

Even if the AI Act, DSA, or DMA does not apply to them, companies using AI and ML solutions as part of their operations or decision-making processes could still be subject to the EU’s General Data Protection Regulation (GDPR) if they process EU citizens’ data. Noncompliance with the GDPR carries the threat of similarly eye-watering fines.

Experts warn of risks when businesses ignore “danger signs” and skip their usual checks, monitoring, and risk management reviews because they believe tech firms, rather than the companies using the technology, will be liable for potential failings linked to its design.

As such, all companies using AI should establish a comprehensive risk management program integrated within their business operations. The program should include an inventory of all AI systems used by the organization, a risk classification system, risk mitigation measures, independent audits, data risk management processes, and an AI governance structure.
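
As a starting point, the inventory and risk classification elements can be captured in a simple structured register. A minimal sketch in Python (system names, owners, and audit dates are hypothetical examples, and the tier labels simply mirror the AI Act’s four bands):

    # Minimal sketch of an AI system inventory with a risk classification field
    # mirroring the AI Act's four tiers. All entries are hypothetical examples.
    from dataclasses import dataclass

    RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

    @dataclass
    class AISystemRecord:
        name: str
        business_owner: str
        purpose: str
        risk_tier: str   # one of RISK_TIERS
        last_audit: str  # date of the last independent audit, if any

        def __post_init__(self):
            if self.risk_tier not in RISK_TIERS:
                raise ValueError(f"Unknown risk tier: {self.risk_tier}")

    inventory = [
        AISystemRecord("credit-scoring-model", "Retail Lending",
                       "consumer creditworthiness assessment", "high", "2022-05-01"),
        AISystemRecord("support-chatbot", "Customer Service",
                       "first-line customer queries", "limited", "2022-03-15"),
    ]

    # High-risk systems attract the heaviest obligations, so surface them first.
    print([s.name for s in inventory if s.risk_tier == "high"])  # ['credit-scoring-model']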

“AI-generated content or use of automated decision-making in the calculation of online financial products or product pricing, for example, will require its own controls and monitoring to ensure outcomes are fair and appropriate,” said Robert Grosvenor, managing director with management consultancy firm Alvarez & Marsal’s disputes and investigations practice.
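
What such controls look like will vary by product, but as one hedged illustration (segments, prices, and the 10 percent tolerance below are invented for the example, not taken from any regulator’s guidance): a periodic check that compares average automated pricing outcomes across customer segments and flags large gaps for human review.

    # Illustrative outcome-monitoring control for automated pricing: compare
    # average quoted prices across customer segments and flag any segment whose
    # average deviates from the overall mean by more than a chosen tolerance.
    # Segment names, prices, and the 10 percent tolerance are hypothetical.
    from statistics import mean

    quotes_by_segment = {
        "segment_a": [102.0, 98.5, 101.0],
        "segment_b": [99.0, 100.5, 97.5],
        "segment_c": [121.0, 118.0, 124.5],
    }

    overall_mean = mean(p for prices in quotes_by_segment.values() for p in prices)
    TOLERANCE = 0.10  # deviations above 10 percent go to human review

    flagged = {
        segment: round(mean(prices), 2)
        for segment, prices in quotes_by_segment.items()
        if abs(mean(prices) - overall_mean) / overall_mean > TOLERANCE
    }
    print(flagged)  # {'segment_c': 121.17}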

Companies should also be transparent about the purposes they want to use AI technologies for and have clear reporting structures that allow for multiple checks of the AI system before it goes live, according to an insights article by a group of legal representatives at business consultancy McKinsey. Given many AI systems process sensitive personal data, companies should have robust, GDPR-compliant data privacy and cybersecurity risk management protocols in place.

Caroline Carruthers, chief executive and co-founder of global data consultancy Carruthers and Jackson, said the first point companies need to understand is whether anything in their organizations will have to directly change because of the legislation. Her advice: follow the spirit of the law rather than the letter.

“Understanding how to approach data ethically and not waiting for legislation to explicitly tell you how to act appropriately will ensure organizations can futureproof their compliance,” said Carruthers. “When it comes to data, and AI specifically, legislation will only continue to evolve as the AI and ML sectors continue to innovate at pace. Instead of planning to just about cross the compliance line, businesses should be changing their behavior to far exceed it.”