In our compliance discourse, there’s no shortage of debate on whether policies, rules, and procedures, including global laws and regulations, are enough to keep people and teams on the right side of ethical, lawful, and compliant decision-making.

When I read “The Behavioral Code: The Hidden Ways the Law Makes Us Better … or Worse” by Benjamin van Rooij and Adam Fine, I found myself immediately immersed in this well-researched and thought-out book on how laws, policies, and codes can “fail to improve human conduct.” But “The Behavioral Code” is much more than a deep dive into social research to surface the tension between codes and conduct. The entire book is an exposition on shaping laws and codes to “human and organizational behavior” that should be a foundational part of any compliance library.

I recently spoke with Professor van Rooij to better understand and appreciate “The Behavioral Code.”


Q. Lately, there’s no shortage of discussion on how behavioral science can inform ethics and compliance programs. Why do you think this is occurring?

A. I think this is the result of an emerging insight that ethics and compliance programs are about more than managing firm liability. There is increased pressure to show these programs work to prevent and reduce harm. This automatically means we must figure out how they can do so, and that means we must look at how people respond to these programs. That has brought us to behavioral science.

At the same time, behavioral science has become much more accessible to a broader audience through books like “Nudge” by Richard Thaler and Cass Sunstein and “Thinking, Fast and Slow” by Daniel Kahneman. The core contribution of “The Behavioral Code” is to show directly what behavioral science means for law, drawing from across disciplines and going beyond the behavioral economics those other books have popularized.

Q. In your chapter “The Moral Dimension,” you share that people have “a limited capacity for ethical reasoning.” Looking at that statement from an ethics and compliance perspective, it almost seems like a roadblock. Can you say more about what that might mean and how we can help people and teams with ethical reasoning?

A. Empirical research has shown that people do not make deliberate ethical decisions the way some may think they do. We do not act as philosophers thinking deeply about moral dilemmas. Humans have bounded ethicality, meaning we are sometimes not at all aware of how unethically we act, because we do so unconsciously, as in instances of unconscious bias.

Also, we are not very good at predicting how ethically we will act, overestimating how ethically we will behave once we are in the heat of the moment, facing a true ethical issue. And we tend to hold a rosier picture of ourselves after we act unethically. This means training people in moral dilemmas may give them new insights about what is right and wrong, but those insights do not necessarily result in better behavioral outcomes. The focus should thus be less on ethical reasoning and more on making people aware of their own bounded ethicality.

Q. With respect to ethics and compliance, your chapter “Eating Systems for Breakfast” surfaces the difference between programs that are values-oriented and those that are incentives-oriented (policies, rules, and procedures). Can you share more about why you think that’s such an important distinction and what it might mean for the engineering of a compliance program?

A. A series of studies on ethics training has found that programs focused on instilling values of integrity and ethics are more effective at reducing staff misconduct than programs that train employees in rules and procedures without addressing the underlying values.

Later in the book, when discussing organizational culture, we do warn that a focus on values training alone will not be sufficient. In some of the worst cases of organizations found to structurally engage in illegal and damaging behavior, we see that leadership did express the right values, but the internal targets, investments, and incentives were not aligned with those values. In such instances we get a so-called corporate cognitive dissonance, where employees receive conflicting signals: they are told they should follow higher ethical values while not being enabled, or actually expected, to do so in everyday work.

Q. You address how “worker voice or empowerment in the adoption of such (compliance) programs is also a key for effectiveness.” Yet, in a globally dispersed and multicultural workforce, that seems like a heavy lift in terms of establishing such a feedback loop. From your experience, why do you think this is so important?

A. We never said this was easy. The problem is that if workers are not enabled to speak out, it will be very hard for organizations to understand what happens in everyday work practices. Without sufficient empowerment, complaint mechanisms and whistleblower protections will fail, because workers will not be able to speak out against more powerful people in their organization.

Simply giving workers the right to speak out does not enable them to do so. And when they have the right to speak out without the ability to exercise it, there is a real danger that workers will be blamed when they do not report illegal behavior. So, empowerment is key.

Achieving empowerment is not easy and asks a lot of organizations. It means staff should be able to organize and, as a group, protect and represent their interests. This has been easier in Europe, where there are already legal provisions that support this. In the United States, this is much less the case.

You are right that there are cultural and cross-national differences. But, given how vital empowerment is to a successful ethics and compliance program, I do think corporations must do all they can to improve it.

Q. In one of the most counterintuitive statements of the book, you note that when compliance management programs shield “the organization from liability, such a program becomes a perverse incentive, as the firm cares less about preventing the illegal behavior that they are not liable for.” Can you help unpack that statement?

A. The evidence that existing compliance management programs have actually reduced misconduct is very inconclusive. Many organizations have developed these programs primarily to reduce potential liability. Truly changing the programs so that they address and reduce misconduct would require a much deeper investment.

Not only does it require behavioral insight teams, but it also means addressing the root causes of misconduct at high cost: for instance, by reducing targets and workload, by investing in compliance issues like safety and environmental protection, and by enabling worker empowerment. It may be very hard for a compliance manager to convince the C-suite that such investments are necessary when a much more formal and less expensive program already shields the organization from liability for the damages.

Q. You list seven ingredients of a “toxic cocktail” of damaging and illegal behavior. If you had to isolate one to reduce the consequences of the other six, which one might you pick?

A. The most important one would be the mismatch between organizational goals and means. This is a root cause of many of the other toxic elements. When organizations seek to achieve the impossible, they breed deviant and unethical behavior, normalize that behavior, make it harder to speak out against it, and ultimately create a mismatch between preached ethical values and everyday work tasks and expectations.

Thank you, Professor van Rooij, and here’s to all of us serving as “ambassadors of the behavioral code, initiating critical discussions about laws, rules, and human behavior.” Your book certainly equips us to be those ambassadors.