Next month Compliance Week will be hosting a virtual conference on Big Data. Part of my job here is to recruit high-caliber speakers to talk about various slices of that admittedly broad subject, and hence I ended up on the phone last week with Kord Davis.
Davis is the author of the book Ethics of Big Data: Balancing Risk and Innovation (O'Reilly Media, 2012), and yes, he'll be exploring some of those ethical challenges during our virtual event. Other speakers will tackle more practical matters, like how you can use Big Data to improve auditing or root out fraud. I told Davis, however—a philosophy major at Reed College turned tech consultant for the last 20 years—that I wanted him to focus on the grey areas created by Big Data, where companies have the ability to do all sorts of morally questionable things, yet the law is nowhere near sufficient to address whether they should do those things.
“Oh yes,” he answered. “This is a big problem.”
I've expressed my own fears about the ethical implications of Big Data before. Like most people, I cited the now infamous example of Target correctly deducing that a teenage customer was pregnant based on her product purchases, and then sending coupons for maternity products to her home—much to the surprise of the girl's parents, who had no clue about their impending grandchild. Illegal? No. Icky? Yes. Unethical? Well, I dunno. So I asked Davis for his opinion about situations like that.
“It goes like this,” he said in a gush. “The IT guys come into the conference room and say, ‘Here's what we can do,' and they show everyone all this amazing data-crunching potential. Bob the marketing guy says, ‘Wow, this is great!' And Joe the marketing guy says, ‘Um, no, this is creepy.' That conversation goes back and forth—some people saying Big Data is fantastic, and others saying it's creepy—until somebody decides to act unilaterally and just does it.”
“What happens,” Davis continued, “is that we have no common vocabulary yet for the moral values here. So everyone retreats into his own moral code, to help them decide what's right or wrong. That's where it gets messy.”
Everyone retreats into his own moral code. Any time a chief compliance officer hears a phrase like that, he or she should go on red alert. And I could not be more pleased with Davis' choice of words, because it frames the problem Big Data poses to compliance professionals perfectly.
After all, take what Davis is saying about Big Data—that we have no common definitions of what is or isn't ethical conduct in that world, so everyone retreats to his own ethical values and reaches his own conclusions about what's acceptable—and apply it to another subject dear to compliance officers' hearts: anti-corruption. Once you do that, you can see why we've made such effective strides in fighting bribery and building effective anti-corruption programs. You also see how far we have to go before we can say the same about Big Data.
If you want to know what an “effective compliance program” means for anti-corruption, you have one-stop shopping: the U.S. Sentencing Guidelines, which spell out the government's expectations for effective compliance in detail. If you want to know what makes one payoff a bribe and another a facilitation payment, or if you want to know who counts as a foreign official, or whether you can whisk that person to the United States for a tour of your manufacturing plant, you can peruse the Justice Department's index of opinions on the Foreign Corrupt Practices Act, organized by date and topic.
Above all, we all know what a bribe is: a payment made so someone will do something he might not otherwise normally do. Employees might still pay bribes despite all the FCPA compliance programs in the world, but at least they'll know they're acting unethically when they do.
None of that ethics infrastructure exists for Big Data yet. For example, most of the privacy violations that Big Data might allow will be civil rather than criminal, so you might have any number of agencies extracting settlements from a company, rather than the central voice of the Justice Department spelling out expectations as it does for the FCPA. (Worse, each agency will be cracking down on different piles of data collected for different reasons, probably used by your employees in different ways. So much for breaking down silos.) Even the Sentencing Guidelines themselves only exist to help a company prevent misconduct—and we haven't clearly defined what misconduct in a Big Data world is, much less how far companies should go to prevent it.
Indeed, my comparison of FCPA compliance and “Big Data compliance” really only goes so far; the two pose fundamentally different questions about the limits of human behavior. Bribery and corruption are all about how low human morality can sink, and it can sink pretty low. Big Data is all about how you can behave when computers enhance your ability to act beyond what the human mind can normally accomplish. That's a wholly different issue.
You can outlaw bribery in pretty much every instance, because bribery is pretty much always bad. You can't outlaw Big Data, because Big Data isn't always bad; in some ways, it's a wonderful business tool. There will be times when we want to use Big Data and times when we won't, and anyone who believes those situations will arise so clearly and regularly that we can put laws and regulations around them is fooling himself. We won't.
Rather, we'll need to exercise restraint—something humans have shown themselves to be quite poor at doing when the profit motive is involved. We'll need to articulate common values and standards about how we act in a world where Big Data will let us do more than we ever imagined, and that's where compliance executives can come rushing into the picture. Our March 27 event is a good place to start, but obviously, we have a long way to go after that.