During his keynote address at the recent Compliance Week 2019 conference in Washington D.C., Preet Bharara, former U.S. Attorney for the Southern District of New York, invoked a word that may still be too infrequently used in compliance circles: “psychology.”

“I was talking to the general counsel of a Fortune 50 company,” he recalled. “She said that, at this huge company she worked for, when she has to think about how to motivate people to change their behaviors—and guide them into good behaviors—she found herself relying a lot more on her psychology degree than her law degree.”

As the anecdote suggests, there is a growing realization that data analysis, divorced from psychological and behavioral insight, might not be as effective as hoped or intended.

There can be direct and targeted applications of a compliance function’s data collection—monitoring expenses, gift-giving, and customer due diligence—but a meaningful shift to cultural and predictive analysis can falter if it ignores behavioral considerations. Those failings will be even more pronounced as analysis turns inward to employees and business partners.

Throughout the Compliance Week conference, data analysis repeatedly resurfaced as a topic, one loaded with benefits but also fraught with potential for failure.

Among those discussions was a keynote address by Georgetown Law Professor Donald Langevoort.

In his opinion, “good compliance programs are built on predictions about how managers and employees act in the face of temptations and pressures—that behavior is not always rational.” Human psychology, he says, will always be a factor when employees (or others) weigh an ethical decision, regardless of all the rules, protocols, policies, and internal controls on which compliance functions are so universally focused.

Compliance Week Editor in Chief Dave Lefort provided this analysis of an example used by Langevoort:

“A large insurance company found that its churn rate for policies was out of control, so compliance stepped in and put in place a new set of rules and identified exactly what it would do to police those rules. Namely, it indicated that any policy turned over within 90 days would be subject to review.

“How did the sales team interpret that? To them, compliance essentially gave them a roadmap to avoid getting caught: Wait until the 91st day before making the new sale. That wasn’t the message compliance intended to send, but that’s how it was interpreted by a sales team still financially incentivized to maximize the number of policies sold. The message they received was along the lines of, ‘OK, they want us to keep doing this, but they gave us a heads-up on how to do it in a way where we won’t get caught.’”
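To see why the rule backfired, consider how mechanical such a bright-line trigger is. The sketch below is a hypothetical illustration modeled on the 90-day example, not the insurer’s actual system; the dates and threshold are invented.

```python
# A hypothetical bright-line review trigger, modeled on the 90-day example
# above; a threshold this visible also tells sellers exactly where it ends.
from datetime import date, timedelta

REVIEW_WINDOW_DAYS = 90  # replacements inside this window are flagged for review

def needs_review(opened: date, replaced: date) -> bool:
    """Flag a policy replacement that lands inside the review window."""
    return (replaced - opened) <= timedelta(days=REVIEW_WINDOW_DAYS)

opened = date(2019, 1, 1)
print(needs_review(opened, opened + timedelta(days=89)))  # True  -- caught by the rule
print(needs_review(opened, opened + timedelta(days=91)))  # False -- the "day 91" loophole
```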

That question of incentivization was also present in one of the most disheartening examples of groupthink ever to harm the reputation of a banking giant. Wells Fargo paid $575 million to resolve claims that, among other misconduct, thousands of employees fraudulently opened millions of fake customer accounts, online services, and credit cards to meet sales goals. The psychological impetus behind that mass fraud will no doubt fuel behavioral research for years to come.

In both examples, compliance and its codes of conduct were rationalized away en masse, with each step toward “the dark side” becoming easier for rank-and-file employees to commit to and defend.

Training, codes of conduct reimagined

Kurt Drake, chief ethics and compliance officer at Kimberly-Clark, saw the importance of grasping the psychology of his target audience, and the need for better training to anticipate rogue behaviors, during his quest to create an interactive code of conduct.

“Our code of conduct is literally a storefront,” he explained. “How do I get people to even stop and look at the storefront? How’s the store set up? How’s my code organized by the risk areas? Once they open my code, how do they know they’re grabbing the right tool, or training, or whatever it may be that they can use?”

“Bringing it back to the notion of engagement and positive behavior change,” he added, “we don’t change the behavior of an audience, but you can seek to change the behavior of individuals if you know how to target them.”

As for his interactive code of conduct, Drake’s hope is that it sparks a more effective use of behavioral analysis.

“We are able to capture a lot of information about how employees are using this tool,” Drake said. “Employees aren’t necessarily being pushed here, but the kinds of issues our employees are wondering about and experiencing day to day can be thought of as an always-on risk assessment. It’s not the only source of information, but if we see a particular region of the world clicking on the conflict of interest policy more than usual, maybe that’s something we should look into.”

“It really gives us this opportunity to better understand, from a risk perspective, the pulse of the organization and how we can help our employees,” he added. “This is not a tool to try to find out what people are doing wrong. Quite the opposite. It’s a tool to try to help people do what’s right. Where might there be questions or concerns? How can we provide the information proactively to get them what they might need? How do we understand what these different data sources are telling us?”

Drake’s ultimate goal: “the right content to the right employee at the right time.”
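Drake’s conflict-of-interest example hints at how simple the underlying signal can be. The sketch below is a hypothetical illustration of that kind of always-on monitoring, not Kimberly-Clark’s actual tooling; the regions, click counts, and standard-deviation test are all invented for the purpose.

```python
# Hypothetical sketch of flagging unusual regional interest in a policy,
# in the spirit of Drake's conflict-of-interest example; all data invented.
from statistics import mean, stdev

# Monthly clicks on the conflict-of-interest policy, by region (fabricated)
clicks = {
    "North America": 120,
    "Latin America": 135,
    "EMEA": 110,
    "APAC": 320,   # unusually high -- a prompt to look closer, not an accusation
    "Middle East": 125,
    "Africa": 115,
}

def flag_unusual(counts, threshold=1.5):
    """Return (region, z-score) pairs sitting more than `threshold`
    standard deviations above the mean across all regions."""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    return [
        (region, round((n - mu) / sigma, 2))
        for region, n in counts.items()
        if sigma and (n - mu) / sigma > threshold
    ]

for region, z in flag_unusual(clicks):
    print(f"{region}: {z} standard deviations above the mean -- worth a closer look")
```

On this fabricated data only APAC is flagged, which is exactly the kind of prompt Drake describes: not evidence of wrongdoing, just a cue to provide help proactively.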

A double-edged sword

There are potential pitfalls as data analysis takes on a behavioral bent. For example, artificial intelligence algorithms can carry their human developers’ biases, buried deep in their code and training data.

“One of the things that a lot of people often struggle with is not all data is created equal,” said Carl Hahn, VP and chief compliance officer for aerospace and defense contractor Northrop Grumman. “Data does have this sort of garbage-in, garbage-out propensity at times.”
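Hahn’s “garbage in, garbage out” point is typically addressed with basic quality checks before a data set feeds any analysis. The sketch below is a generic illustration of such checks, not Northrop Grumman’s tooling; the records and field names are invented.

```python
# Generic data-quality checks of the "garbage in, garbage out" variety;
# the records and field names are invented for illustration.
records = [
    {"id": "T-001", "country": "US", "amount": 250.00},
    {"id": "T-002", "country": None, "amount": 90.00},     # gap: missing country
    {"id": "T-002", "country": "DE", "amount": 90.00},     # duplicate transaction id
    {"id": "T-003", "country": "ZA", "amount": -4000.00},  # pollution: negative expense
]

def audit(rows):
    """Return data-quality findings: missing fields, duplicate ids, bad values."""
    findings = {"missing_fields": [], "duplicate_ids": [], "out_of_range": []}
    seen = set()
    for row in rows:
        if any(value is None for value in row.values()):
            findings["missing_fields"].append(row["id"])
        if row["id"] in seen:
            findings["duplicate_ids"].append(row["id"])
        seen.add(row["id"])
        if row["amount"] is not None and row["amount"] < 0:
            findings["out_of_range"].append(row["id"])
    return findings

for issue, ids in audit(records).items():
    print(f"{issue}: {ids or 'none'}")
```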

Matt Gavin is the global VP for ethics and compliance at AB InBev, the world’s largest beer brewer. “We’re the largest taxpayer in 11 different African countries. We have communities of workers and full supply chains that are fully part of the world where we are,” he said. “As much as we’re global, our risk footprint from a compliance perspective—and I’m managing corruption, competition, money laundering investigations, and data privacy—is very local.

“Part of our journey into analytics was having that very local footprint and asking how we could strategically build an ethics program, not just a compliance program. How could we use analytics to do that?”

“Even when you’ve got a good data set, you can still find surprises,” he added. “You’re going to find gaps you didn’t know you had. You’re going to be making assumptions based on a data set you thought was complete but wasn’t. You are going to identify data pollution.

“I read an interview with Facebook’s chief technology officer about his struggles with artificial intelligence, and he said, ‘We are having a hard time telling the difference between a head of broccoli and cannabis buds; we need to figure that out.’ I feel a little better now about my data challenges.”