So, your company wants to dabble in artificial intelligence (AI).

Maybe your organization has a team of developers building an industry-leading AI from scratch. Maybe your firm’s human resources department wants to procure a recruiting engine to ease the hiring process. Whatever the AI’s objective function, the prospect of automating a defined task opens the door to new efficiencies, saving your company valuable time and money.

It all sounds wonderful from a utopian worldview.

Ethical Machines

As Reid Blackman illustrates at length in his new book, “Ethical Machines,” the ethical risks of deploying AI are vast. In certain use cases, they are unforeseeable. In others, they surface from unexpected angles, even when consciously anticipated.

Take Amazon, for instance: a company that realized the ethical risks of AI development the hard (i.e., expensive and reputationally damaging) way.

A team of Amazon engineers developed a résumé-reading AI to aid humans in the task of vetting tens of thousands of résumés per day, Blackman detailed in his book. The team trained the AI on a decade’s worth of hiring data and told the machine to look for the “interview worthy” pattern.

“Women are not interview worthy,” was the pattern it spat out.

The AI was biased. More to the point, it learned to be biased from the troves of labeled data it was fed. Amazon ultimately scrapped its recruiting engine, despite subsequent efforts to eliminate discriminatory inputs; there was no guarantee the AI would not uncover other patterns in the data that could lead to discriminatory judgments.
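
To make the mechanism concrete, here is a minimal sketch in Python with scikit-learn: a classifier trained on historical “interview worthy” labels that reflect past human bias learns to penalize a gendered proxy feature, even though gender itself is never an input. Everything below is synthetic and hypothetical, invented purely for illustration; it is not Amazon’s system.

```python
# A minimal sketch of "learning by example" going wrong, on synthetic data.
# Feature names and numbers are hypothetical, chosen to make the mechanism visible.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical résumé features: years of experience, plus a proxy signal
# (say, membership in a women's professional organization) correlated with gender.
experience = rng.normal(5, 2, n)
is_woman = rng.integers(0, 2, n)
womens_org = (is_woman == 1) & (rng.random(n) < 0.7)  # proxy correlates with gender

# Historical "interview worthy" labels encode past human bias:
# equally qualified women were interviewed less often.
qualified = experience + rng.normal(0, 1, n) > 5
interviewed = qualified & ~((is_woman == 1) & (rng.random(n) < 0.5))

# Train only on "neutral" features: gender itself is excluded from the inputs.
X = np.column_stack([experience, womens_org.astype(float)])
model = LogisticRegression().fit(X, interviewed)

# The learned coefficient on the proxy feature comes out negative, reproducing
# the historical bias even though gender was never an input.
print("coefficients [experience, womens_org]:", model.coef_[0])
```

This is why stripping discriminatory inputs offers no guarantee: the model can rediscover the same pattern through correlated features.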

Machine learning (ML), a subset of AI, is “software that learns by example,” Blackman told Compliance Week in an interview. Importantly, ML is not an ethically neutral tool, he argued. It’s not like, say, a screwdriver.

“The people who commission, design, develop, deploy, and approve an AI are quite unlike those who make screwdrivers,” Blackman wrote. “When you develop AI, you are developing ethical—or unethical—machines.”

Thus, it is the people—collectively, the company—involved in each stage of an AI’s life cycle that are liable for its learnings, decisions, and impacts, be they intended or not. Blackman’s book advocates for companies to adopt a comprehensive AI ethics program that infuses thoughtful decision-making into every stage of an AI’s development, from conception to deployment, and engages a cross-functional team of experts (not just “techies”) to avoid expensive dead ends like Amazon’s—or worse.

Bias, explainability, privacy—where compliance comes in

With a dry humor that permeates the book, Blackman poked fun at the three most talked about problems in AI ethics: bias, explainability, and privacy. He didn’t joke about the issues themselves; those are real. In fact, he devoted entire chapters to demystifying and outlining steps to tackle them. But he did joke about the clichéd way people talk about them.

Everyone knows they are critical challenges, but the alleged “subjectivity” of ethics muddies the conversation, Blackman observed—to the point where people shrug their shoulders and abandon the debate, allowing confusion to win the day. He made a compelling case that ethics is not subjective when it comes to deploying AI within an organization. Company leaders need to be clear on what they are willing to stand for and risk ethically in the name of AI. As such, different companies will have different appetites for ethical risk.

“One thing I try to stress is that it’s ethical risk mitigation, not ethical risk elimination. You have to mitigate that risk relative to other kinds of risks, like straightforward, bottom-line, profit risks,” Blackman told CW.

“I’m not trying to get them to radically rethink their priorities around ethics. I’m getting them to understand there are real risks here that need to be brought into the deliberative process, which should surely include compliance officers,” he added.

Here is a brief rundown of the Big Three, according to “Ethical Machines”:

  • Bias: As seen in the Amazon example, bias results when an ML model produces discriminatory outputs or automated decisions, ranging from ethically problematic to egregious depending on how those decisions impact people. Along with gender bias, think racial profiling. (A brief audit sketch follows this list.)
  • Explainability: The degree to which an ML model’s algorithm (i.e., what happens between its inputs and outputs) can be deciphered by humans. A “black box” model is unexplainable; the pattern the machine identifies is too complex for humans to understand. The issue becomes acute when people are owed an explanation for a model’s decisions as a matter of basic human decency. For instance, a person denied a mortgage or parole based on an AI’s decision might reasonably demand an intelligible explanation.
  • Privacy: Data is the basis on which an ML model is trained; the more, the better. Privacy issues arise when data is collected on individuals without their knowledge or consent. Moreover, a model trained on personal data can learn to make decisions that themselves threaten to invade people’s privacy. Think facial recognition software.
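
One common way to put the bias concern into practice is a selection-rate audit. The sketch below, in plain Python with hypothetical data and group names, applies a disparate-impact style check (the “four-fifths rule” used in U.S. employment contexts) to a model’s screening decisions. It is an illustrative starting point, not a standard drawn from the book.

```python
# A minimal disparate-impact check: compare selection rates across groups.
# The 80% ("four-fifths") threshold follows U.S. EEOC guidance; the data and
# group names below are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    # Flag any group whose selection rate falls below 80% of the best group's.
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical screening results: (group, was recommended for interview)
results = (
    [("men", True)] * 70 + [("men", False)] * 30
    + [("women", True)] * 40 + [("women", False)] * 60
)

print(disparate_impact(results))  # {'women': 0.571...} -> below the 0.8 bar
```

A ratio below 0.8 does not prove discrimination on its own, but it flags decisions that warrant the kind of cross-functional review Blackman recommends.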

Compliance could, and should, play an oversight role in these challenge areas, Blackman recommended. He advised any company considering an investment in AI to form a cross-functional AI ethics committee and include someone from compliance on it (in addition to data scientists, subject matter experts, and individuals with business and legal expertise).

“It’s not that a compliance officer needs to be involved in every single project. It’s that they need to be involved whenever ethical smoke is detected,” Blackman explained. Formally involving compliance in the AI’s ethical risk due diligence process also helps to ensure “there’s a procedure for getting the right people to take a look at [the potential ethical risks] to see if there actually is a fire there,” he said.

Most obviously, compliance officers will be involved in ensuring companies meet AI regulatory requirements coming down the pike. The European Commission proposed a draft regulation on AI in April 2021, laying out a legal framework that will impose a broad range of requirements on both private and public sectors.

What kinds of regulations would the AI Act involve?

“Those three big issues that I raise—the challenges of bias, explainability, and privacy—will 100 percent show up in regulations in the near future,” Blackman said.

He believes the EU’s AI Act will take effect in three years’ time.

“Three might sound like a lot, but when you’re talking about the level of organizational change that’s required to actually be compliant, you don’t start that six months before the regulations roll out. We’re talking about training tens of thousands of people. We’re talking about updating policies, [key performance indicators], infrastructure, and governance. It’s a big lift.”

“Ethical Machines” is a good place to start.