Most people understand and accept that human decision-making is tainted by bias. What is less well understood is how those same biases can creep into the technology we build to streamline or improve those decisions. Amazon’s recent recruitment fiasco is an instructive example.

In 2017, Amazon was forced to abandon plans to apply artificial intelligence (AI) to the recruitment of new engineers after its computers picked up on the sexist biases embedded in the company’s recruitment processes.

The recruitment project had promised to deliver a new era in Amazon’s talent acquisition, applying cold reasoning to the resumes it received and selecting the candidates who, based on historical analysis, would mature into the world-beating employees of the future. The team took the successful engineers it had recruited in the past and fed their resumes into the algorithm. The machine-learning software would then extract the parameters that mattered from those successful resumes and look for the same attributes in the resumes of new applicants.

There was, however, one crucial oversight: 75 percent of engineering management at Amazon was male. Trained on a history in which the successful hires were overwhelmingly men, the software learned to treat language associated with women as a negative signal. The decision tree it extracted from Amazon’s past recruitment practices thus penalized any mention of the word “women.”
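The mechanics are easy to reproduce in miniature. The sketch below is purely illustrative: invented resume snippets and a toy log-odds word scorer, nothing resembling Amazon’s actual model. It shows how a hiring history skewed toward men turns a gendered term into a negative signal without anyone ever coding that intent.

```python
import math
from collections import Counter

# Invented, illustrative "historical" data: past hires skewed heavily male.
hired = [
    "led robotics team, java, aws",
    "captain of chess club, c++, distributed systems",
    "java, machine learning, hackathon winner",
]
rejected = [
    "captain of women's chess club, c++, distributed systems",
    "women in engineering society lead, java, aws",
]

def words(doc):
    return doc.replace(",", " ").split()

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(words(d))
    return c

h, r = word_counts(hired), word_counts(rejected)

# Weight each word by its (smoothed) log-odds of appearing in hired resumes.
weights = {w: math.log((h[w] + 1) / (r[w] + 1)) for w in set(h) | set(r)}

def score(resume):
    return sum(weights.get(w, 0.0) for w in words(resume))

print(weights["women's"])                             # negative weight
print(score("women's coding club lead, java, aws"))   # dragged below zero
```

The scorer never sees a “gender” field; the skew arrives entirely through the vocabulary of past hires, which is exactly what makes this kind of bias so easy to miss.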

Such biases lead human beings to make irrational decisions without any awareness of the irrationality. It is a vulnerability that can affect the most innocuous areas of decision-making. As an example, let’s say you have the choice between two beers: a lower-quality one at $5 and a medium-quality one at $5.30. About 20 percent of people choose the lower-quality beer. If, however, we introduce a premium-quality beer at $5.60 into the choice, nobody buys the lower-quality beer. The choices shift upward in this “decoy effect,” with almost everyone now buying the medium-quality option.

How does this affect compliance?

From a compliance officer’s perspective, this has clear implications for customer outcomes. Consider the add-on insurance market. After answering numerous questions about their needs for a primary insurance product, the customer is presented with a quote. In the case of car insurance, this can run into the hundreds of dollars.

At that point, an extra protection product at around $20 seems good value in comparison with the main product. Such purchases can occur without the customer really understanding what they have purchased.

To complicate things further, humans also select information, and look for patterns in data, that confirms existing beliefs. This is known as confirmation bias. Let’s say I am using a rule to generate a series of numbers (2:4:6) and I ask you to guess that rule. Your guesses must take the form of another three-number series, and I will tell you whether each series agrees with the rule or not. What tends to happen in this test is that people form a theory of what the rule is, then generate number sequences to prove that theory (8:10:12, for example). Very few people discover the rule, which is simply any ascending sequence of numbers (e.g., 1:132:1035). The reason? People tend to generate guesses that confirm their theory; they don’t try sequences that would disprove it.
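The asymmetry is easy to demonstrate in a few lines of code. The sketch below is a toy rendering of the 2:4:6 task described above, with the hidden rule and all guesses invented for illustration:

```python
# The experimenter's hidden rule in the 2:4:6 task: any ascending triple.
def fits_rule(a, b, c):
    return a < b < c

# A typical participant hypothesizes "each number increases by 2" and tests
# only sequences that CONFIRM it. The real rule accepts every one of them,
# so the (wrong) hypothesis keeps looking right.
confirming_guesses = [(8, 10, 12), (20, 22, 24), (100, 102, 104)]
print(all(fits_rule(*g) for g in confirming_guesses))  # True

# Only a sequence that the hypothesis predicts should FAIL can falsify it.
print(fits_rule(1, 132, 1035))  # True: "add 2" was never the rule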

Organizations need to take this irrationality and bias into account when considering tone from the top. In a recent survey of ethics among Swedish companies, 67 percent of senior managers agreed with the statement “unethical behavior is disciplined in my organization.” Only 39 percent of employees below those senior managers agreed. If senior managers feel themselves responsible for the ethical culture below them, it is in their interests to believe it is being enforced, even when it isn’t.

Confirmation bias will result in our executive teams looking only for data that supports what they already believe. At best, this leads to an incomplete survey of alternatives, a failure that can even be replicated in AI systems, as the Amazon example reveals. At worst, it results in a culture where challenge is deemed unacceptable and those who question received wisdom are considered heretics.

Challenging the narrative

As knowledge of the biases inherent in decision-making grows, many firms are taking concrete steps to counteract them within their organizations. One effective way is to create a “risk and compliance function mandate” that places an explicit expectation on the second-line functions to play a “critical friend” role. Such a mandate, agreed by the board, can confer rights such as unconditional access to information, a right of veto, and the right to commission external expert review. A clear and unambiguous statement that the second line is there to challenge can act as a counterbalance to bias. It is particularly powerful when combined with a remit to collect information and data in a way that is independent of the objectives and remuneration of the rest of the executive team.

As risk and compliance professionals, it falls upon us, as individuals, to hold firm to our independent roles. This can mean challenging accepted norms and offering alternative views in the face of prized, and widely held, beliefs. It requires bravery in the face of opposition, a willingness to walk toward issues when others are walking away, and the courage to hold our ground with CEOs and boards.

Counteracting bias is a part of our job description, and, as the Amazon example shows, it will be up to us to point out when we spot it in technology. Only then will the decisions our firms make be fair, equitable, and truly free of bias.

The International Compliance Association is a sister company to Compliance Week. Both organizations are under the umbrella of Wilmington plc.