There is a long-running Internet joke that anytime somebody speaks about a potentially troubling advance in artificial intelligence or robotics, the pat response is, “Sarah Connor unavailable for comment.” Connor, of course, was the heroine of the Terminator movies and has more reason than almost any other cinematic character to distrust self-aware machinery.
I think of that joke often when I see the proliferation of stories about AI and robotics. Boston Dynamics recently put out some video of a new robot on wheels that can roll around upright and even jump over things, which, if you’ve ever been phobic about robots, will surely give you nightmares. Sarah Connor unavailable for comment.
Likewise, I wrote earlier this year about how, by 2020, we will have computers with enough raw processing power to equal that of the human brain and, by 2050, we’ll have computers with enough raw processing power to equal the sum of every human brain on the planet. Sarah Connor most definitely unavailable for comment.
But despite such wise-cracking, there is something very real and very important happening in the growing use of automation and artificial intelligence within compliance, especially in light of the growing need to measure the effectiveness of compliance programs and demonstrate a proper return on investment. A significant portion of compliance, especially in the financial services realm, is simply plowing through a large quantity of transactions and communications, trying to determine whether each and every one of them complies with every regulation and internal policy that applies. It can be an enormously work-intensive process unless you automate it. Only now is that automation finally bearing fruit. And as it does, compliance professionals are right to wonder what kind of future they might have in a field that is increasingly being given over to thinking machines.
Mallinath Sengupta is CEO of NextAngles, a regulatory software firm that focuses on AML, KYC, and other compliance needs. The way Sengupta puts it, the problem with automation is also its strength: computers are very powerful, but very dumb. Without the contextual information that humans take for granted as “common knowledge,” no software system can fully replace human overseers when it comes to compliance. Sure, you can have a system that flags transactions over a certain threshold, but it takes a human to know when those flags make sense and when they don’t—say, for a cash-intensive business like a gas station that suddenly transacts way more money than is usual for an operation of that type. Case in point: a gas station might regularly log cash receipts of more than $30,000 a day, so the trick is noticing when it suddenly logs more than $100,000 in a day and knowing enough to see that for the red flag that it is. Simply flagging everything above a single benchmark isn’t good enough.
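The gas-station example above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of context-aware flagging, not NextAngles’ actual logic; the function name and thresholds are my own assumptions.

```python
def flag_transaction(amount, typical_daily_cash):
    """Flag a day's cash volume only when it is anomalous *for this business*,
    not merely above a single global benchmark."""
    GLOBAL_THRESHOLD = 10_000  # hypothetical reporting-style benchmark
    # A dumb system stops here: everything above the benchmark gets flagged.
    above_benchmark = amount > GLOBAL_THRESHOLD
    # Context: a cash-intensive business routinely exceeds the benchmark,
    # so flag only when volume far exceeds what is usual for that operation.
    anomalous = amount > 3 * typical_daily_cash
    return above_benchmark and anomalous

# A gas station that usually logs about $30,000 a day:
print(flag_transaction(30_000, typical_daily_cash=30_000))   # → False (routine)
print(flag_transaction(100_000, typical_daily_cash=30_000))  # → True (red flag)
```

The point of the sketch is the second condition: the baseline for “anomalous” is per-business, which is exactly the contextual knowledge a naive threshold system lacks.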
That is where somebody like Tara Raafat, NextAngles’ chief ontologist, comes in. Ontology is the science of how concepts connect to each other to form a greater understanding, and that is what Raafat does with data sets and machine learning. She designs how machines can draw from existing pools of data to better contextualize the conclusions reached by particular alerts, which minimizes the need for human analysis. It is a slow process and, at the moment, machines can really only eliminate some of the most basic early branches of a fully contextualized decision tree. But it harnesses the potential of very powerful, very dumb machines so that compliance officers deal with far fewer red flags that turn out to be nothing.
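To make the idea of pruning the early branches of a decision tree concrete, here is a toy sketch of using an ontology-like knowledge base to contextualize alerts. The business categories, attributes, and triage labels are all hypothetical illustrations, not Raafat’s actual models.

```python
# A tiny "ontology": concepts linked to contextual facts about them.
BUSINESS_ONTOLOGY = {
    "gas station": {"is_a": "retail", "cash_intensive": True},
    "law firm":    {"is_a": "professional_services", "cash_intensive": False},
}

def triage_alert(business_type, amount, threshold=10_000):
    """Resolve the easy early branches of the decision tree automatically,
    escalating to a human only when context cannot explain the flag."""
    if amount <= threshold:
        return "no alert"
    facts = BUSINESS_ONTOLOGY.get(business_type, {})
    if facts.get("cash_intensive"):
        return "auto-closed: expected for cash-intensive business"
    return "escalate to human analyst"

print(triage_alert("gas station", 30_000))  # auto-closed by context
print(triage_alert("law firm", 30_000))    # humans handle the hard branch
```

Even this crude lookup shows the payoff: the machine clears the rote, contextually obvious cases, and human analysts see only the alerts that genuinely need judgment.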
Alex Baydin, CEO of PerformLine—a provider of marketing compliance software—points out that marketing is another area that needs both automation and the ability to separate wheat from chaff, data-wise. Consumer financial services companies, for instance, need to know every instance where their brand appears on the web, even when the brand managers might not be aware of it. Marketing compliance automation employs crawlers that look for every place a brand appears, understand the content surrounding that brand on a landing page or blog, and run it against the brand owner’s own rules engine to make sure everything is in compliance. This is especially helpful for businesses such as mortgage lenders that have to comply with truth-in-lending regulations as well as rules against unfair, deceptive, or abusive acts and practices. Does the brand appear next to content that could be seen as deceptive? Human eyes will never catch everything.
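A simplified version of that crawl-then-check workflow can be sketched as follows. It assumes a crawler has already fetched the text of a page where the brand appears; the brand name and rule patterns are hypothetical examples, not PerformLine’s actual rules engine.

```python
import re

# Brand owner's rules engine: phrasing that is risky next to the brand.
DECEPTIVE_PATTERNS = [
    r"guaranteed approval",
    r"no credit check",
]

def check_page(page_text, brand="ExampleLend"):
    """Return the rule violations found on a crawled page mentioning the brand."""
    if brand.lower() not in page_text.lower():
        return []  # brand does not appear here; nothing to review
    return [p for p in DECEPTIVE_PATTERNS
            if re.search(p, page_text, re.IGNORECASE)]

page = "Apply with ExampleLend today: guaranteed approval, no credit check!"
print(check_page(page))  # → ['guaranteed approval', 'no credit check']
```

In a real deployment the rules engine would be far richer than keyword patterns, but the shape is the same: machines scan everything, and only the pages that trip a rule reach human reviewers.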
The idea here—and this is something that Sengupta, Raafat, and Baydin all stressed when I spoke with them—is not to eliminate humans from compliance entirely, but to make sure that the humans who are in compliance can focus on more strategic risk analysis and get out of the rote fact-checking business. So far, that has been a very human-intensive endeavor that, frankly, isn’t very sustainable and isn’t very cost-effective. Nobody really wants their compliance efforts to ultimately rely on the accuracy of a huge number of people stuck doing a lot of relatively low-brainpower tasks over and over and over every day. Especially not in a Yates Memo era, when somebody on cruise control misses a red flag that looks like all the others and accidentally sets into motion a chain of events that might make headline news.
Big Data and AI aren’t magic or science fiction. They’re not the kind of thing you can just buy off the shelf and suddenly watch as automatons create a seamless compliance regimen. But what they can do is take a lot of the grunt work out of compliance and free up those resources for other things. As compliance strives to become a faster, smarter discipline, making the right use of the technology at hand is not just best practice; it’s swiftly becoming common practice.
Sarah Connor might not have a lot to say about that, but that’s alright. She never worked in compliance.