Facebook’s top executive in charge of developing artificial intelligence believes regulators should not target advancements in AI—they should instead focus on how the technology is used.

“I am generally in favor of regulating a particular application rather than a technology,” Yann LeCun said in an interview last month. LeCun also defended the continued use of facial recognition, saying regulators need to discriminate between applications that serve “good purposes” and those that don’t.

LeCun was speaking in response to the European Commission’s legislative proposal to ensure trustworthy AI, announced in April, and he may have a point. Unsurprisingly, other Big Tech firms have been quick to make similar pleas.

Generally, the technology sector has been accused of attempting to stall the debate over regulating AI by claiming there is no universally agreed definition of what “AI” is. Some experts have said even a basic Excel spreadsheet could qualify as AI in the widest sense of the term.

But Big Tech firms are not necessarily united in their reservations about potential future AI regulation. Google, for example, has repeatedly urged the European Union to govern the use of AI through current European laws, including the General Data Protection Regulation (GDPR), rather than draft new legislation. Google has warned that a “one-size-fits-all” regulatory framework will be difficult to comply with—and enforce—because of the technology’s diverse applications.

Microsoft, on the other hand, said in its feedback it favored a mix of binding requirements for (unspecified) “high-risk” AI applications and a “soft law”/self-regulatory approach for those deemed “lower risk.” This is more in step with the Commission’s thinking.

The push by the European Union to regulate AI is likely to be followed by other countries—but not necessarily in the same manner or to the same extent.

In the meantime, companies should continue to lobby lawmakers and “join in the debate about how law and regulations evolve to catch up with technology,” says Jonathan Osborne, an attorney at legal network Globalaw’s Florida firm Gunster. Those intending to adopt AI should question what personal data will be used in the process and how, Osborne says.

Nigel Jones, co-founder of The Privacy Compliance Hub and an ex-head of legal for Google in Europe, says, “By saying it will ban anything that is a clear threat to EU citizens, the European Commission isn’t saying anything controversial. It is the equivalent of saying it won’t allow a car on the road that hasn’t passed its vehicle safety test.”

To be safe, Jones recommends companies using AI should think about the principles behind responsible use and “ask themselves whether they are being accountable, transparent, trustworthy, nondiscriminatory, and secure and whether they can demonstrate that. They should also make sure they think of the possible unwanted consequences for individuals and whether they can respond to the existing rights those individuals have under the GDPR in relation to their personal data.”

Managing the risks

To use AI responsibly, experts recommend several practical steps to ensure transparency and “explainable AI” are at the heart of any system. First, says Taras Firman, data science competency manager at software consultancy ELEKS, before implementing any AI systems, “companies should verify and check the authenticity of the data and where the data is coming from.”

Under the proposed EU regulation, data sets must be “relevant, representative, and free of errors and complete.” It is therefore critical to build and iterate AI models based on comprehensive datasets that include all necessary populations to achieve accuracy and fairness.
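
What that means in practice will vary, but a first step is automated checks on the training data itself. The short Python sketch below is a rough illustration of that idea, with assumed column names and an arbitrary 5 percent threshold; it flags missing values and barely represented groups before any model is trained.

```python
# Hypothetical pre-training checks: "free of errors and complete" plus representativeness.
# Column names and the 5 percent threshold are illustrative assumptions, not from the proposal.
import pandas as pd

def check_dataset(df: pd.DataFrame, group_col: str, min_share: float = 0.05) -> list[str]:
    """Return human-readable issues found in a training dataset."""
    issues = []

    # Completeness: flag any column with missing values.
    missing = df.isna().sum()
    for col, count in missing[missing > 0].items():
        issues.append(f"column '{col}' has {count} missing values")

    # Representativeness: flag groups that fall below a minimum share of the data.
    shares = df[group_col].value_counts(normalize=True)
    for group, share in shares[shares < min_share].items():
        issues.append(f"group '{group}' makes up only {share:.1%} of the dataset")

    return issues

# Toy example: an age group that is barely present triggers a warning.
data = pd.DataFrame({
    "age_group": ["18-30"] * 97 + ["65+"] * 3,
    "label": [0, 1] * 50,
})
for issue in check_dataset(data, group_col="age_group"):
    print("WARNING:", issue)
```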

“Data subjects will also need to be aware of how their data will be used and the data fed into any AI model will be limited by data rights,” says Firman. Companies should use AI systems that are designed so they can easily delete or replace certain parts of the data, he adds.
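
One way to make data deletable in that sense is to key every training record to the data subject it came from, so an erasure request under the GDPR can remove those records before the model is next rebuilt. The sketch below is a minimal, hypothetical illustration of that design; the class and field names are assumptions, not a prescribed approach.

```python
# Minimal sketch (hypothetical design): keep training records keyed by data-subject ID
# so an erasure request can remove them before the model is retrained.
from dataclasses import dataclass, field

@dataclass
class TrainingStore:
    records: dict[str, list[dict]] = field(default_factory=dict)  # subject_id -> rows

    def add(self, subject_id: str, row: dict) -> None:
        self.records.setdefault(subject_id, []).append(row)

    def erase_subject(self, subject_id: str) -> int:
        """Delete all rows for one data subject; returns how many were removed."""
        return len(self.records.pop(subject_id, []))

    def training_rows(self) -> list[dict]:
        """Flatten the remaining rows for the next model build."""
        return [row for rows in self.records.values() for row in rows]

store = TrainingStore()
store.add("user-123", {"age_group": "18-30", "label": 1})
store.add("user-456", {"age_group": "65+", "label": 0})
removed = store.erase_subject("user-123")  # e.g., after a GDPR erasure request
print(f"removed {removed} rows; {len(store.training_rows())} rows remain for retraining")
```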

Shannon Yavorsky, partner at law firm Orrick, says companies should consider putting in place technical documentation, policies, and automatic logging, to be followed and maintained by the employees who build, test, and interact with AI systems. They should also create instruction manuals for any AI system that accurately describe its intended operation, “to prove your good intentions should your AI system be accused of bias or otherwise attract regulatory scrutiny,” she says.
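
Automatic logging of that kind can be largely invisible to the teams building the system. As a rough sketch, assuming a model served through a simple Python function, the decorator below records the inputs, output, model name, and version of every prediction to a log file that could later support an audit; the names and placeholder scoring logic are illustrative.

```python
# Rough illustration (assumed design): automatically log every prediction so model
# behaviour can later be reconstructed for auditors or regulators.
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)

def logged(model_name: str, model_version: str):
    """Decorator that records inputs, output, and model version for each call."""
    def wrapper(predict_fn):
        @functools.wraps(predict_fn)
        def inner(features: dict):
            result = predict_fn(features)
            logging.info(json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model": model_name,
                "version": model_version,
                "inputs": features,
                "output": result,
            }))
            return result
        return inner
    return wrapper

@logged(model_name="credit_score", model_version="1.0.3")
def predict(features: dict) -> float:
    # Placeholder scoring logic standing in for a real model.
    return 0.5 + 0.1 * features.get("income_band", 0)

print(predict({"income_band": 2}))
```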

Additionally, companies should conduct AI risk monitoring to uncover faults and carry out corrective actions as soon as practicable. Yavorsky also recommends they establish an in-house voluntary code of conduct for AI, as well as promote “ethics by design” in any program build.
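
Risk monitoring can start simply. The sketch below is a hedged illustration with arbitrary thresholds: it tracks the share of positive predictions in a rolling window and raises an alert when that share drifts too far from the rate seen at validation time, which would be the trigger for corrective action.

```python
# Hedged sketch (thresholds and metric choice are illustrative assumptions):
# monitor the live positive-prediction rate against a baseline and raise an alert
# when the gap suggests drift or a fault that needs corrective action.
from collections import deque

class PredictionMonitor:
    def __init__(self, baseline_rate: float, window: int = 500, tolerance: float = 0.10):
        self.baseline_rate = baseline_rate  # positive rate observed at validation time
        self.recent = deque(maxlen=window)  # rolling window of recent predictions
        self.tolerance = tolerance          # allowed absolute deviation

    def record(self, prediction: int) -> None:
        self.recent.append(prediction)

    def check(self) -> str | None:
        if len(self.recent) < self.recent.maxlen:
            return None  # not enough data yet
        live_rate = sum(self.recent) / len(self.recent)
        if abs(live_rate - self.baseline_rate) > self.tolerance:
            return f"drift alert: live positive rate {live_rate:.2f} vs baseline {self.baseline_rate:.2f}"
        return None

monitor = PredictionMonitor(baseline_rate=0.30)
for p in [1] * 300 + [0] * 200:  # simulated stream of predictions
    monitor.record(p)
alert = monitor.check()
if alert:
    print(alert)  # here: 0.60 vs 0.30 triggers the alert
```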

“Establishing a voluntary code of conduct, even for low-risk AI systems, provides a roadmap for your AI team as well as business and legal personnel to build and manage compliance as you roll out AI offerings,” says Yavorsky.

“By communicating a tone from the top that promotes transparency and ethical behavior around AI, you can build stronger and more equitable algorithms and reduce the likelihood of regulatory risks,” she adds.