The explosion of data that compliance and audit professionals must make sense of in order to advise and protect their organizations can no longer be handled by human beings in the time required to act on it effectively. Keeping up increasingly requires partnering with artificial intelligence technologies, which learn faster as they are fed more data.

That was one of the main takeaways from Compliance Week’s recent conference on AI, technology innovation, and compliance, held in New York City on June 27. One of the aims of the one-day event, co-sponsored by Deloitte & Touche and Narrative Science, was to sort the current capabilities and realistic promise around AI from the hype.

Although expectations for artificial intelligence, including its machine-learning branch, have gone largely unfulfilled over the last 30 years, this time the promise could be real. The underlying technology has changed very little in recent decades; it is the quality of the data that has always hindered how effective it can be, said Ares Management Global Chief Compliance & Ethics Officer Anthony Dell during one panel discussion.

Now that the monumental volume of data that organizations routinely collect is overwhelming human beings’ ability to interpret and analyze it quickly enough to be of practical use, there is an imperative to find solutions. And thanks to the growth in computing power, “AI technologies not only have much more data to learn on, but they have the ability to be responsive in the time that is required to be actionable,” said Kim Neuwirth, director of product management at Narrative Science. “That’s why we believe that this time the hype around artificial intelligence is real.”

But Neuwirth also cautioned against being dazzled by AI’s novelty. Users have to recognize that, absent the context of their specific goals for it, AI is nothing but technology, she said. “You need to ask yourself, ‘What problem am I trying to solve [with it]?’”

There is still likely to be some disillusionment with AI in the near term, especially among companies that have invested heavily in Big Data and “now are looking to machine learning and artificial intelligence to justify that expense,” warned Scott Zoldi, chief analytics officer at FICO, which has relied on AI technology to prevent fraud for 25 years. Users need to understand that most AI models aren’t yet all that sophisticated. “Right now, AI’s not curious. It will learn from what you tell it to learn. So AI can anticipate, but that’s just consequential in terms of how AI learns.”

Throughout the day, speakers outlined what AI can already do and how it is likely to change the focus of the compliance and internal audit functions in the not-too-distant future. The technology’s computing power will transform compliance and audit from look-back functions into real-time functions, which will drive the continued integration of these departments into day-to-day business processes, Dell said. “As accounting people enter things in ledgers, there will be real-time testing and validation” by the internal audit department, he said. Among the three key aspects of compliance, monitoring and surveillance are “pure data plays” and will be the first to be robotized, and data processing is also easily handled by machine learning; Dell believes the advisory aspect will be the last to be adapted to machine learning. As these transformations unfold, compliance officers will increasingly be required to train the bots, he added.
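
As a rough sketch of what such real-time testing and validation might look like in practice, consider a continuous-auditing check that runs against each journal entry as it is posted. The rules, thresholds, and field names below are hypothetical illustrations, not anything described at the conference:

    from datetime import datetime, time

    # Hypothetical continuous-auditing rules; thresholds and field names are illustrative only.
    APPROVAL_THRESHOLD = 10_000                    # entries above this need a documented approver
    BUSINESS_HOURS = (time(8, 0), time(18, 0))

    def validate_entry(entry):
        """Return a list of exceptions for one journal entry; an empty list means it passed."""
        exceptions = []
        # Debits and credits must net to zero.
        if round(sum(line["debit"] - line["credit"] for line in entry["lines"]), 2) != 0:
            exceptions.append("entry does not balance")
        # Flag entries posted outside normal business hours.
        posted = datetime.fromisoformat(entry["posted_at"])
        if not BUSINESS_HOURS[0] <= posted.time() <= BUSINESS_HOURS[1]:
            exceptions.append("posted outside business hours")
        # Large entries need an approver on record.
        if entry["amount"] > APPROVAL_THRESHOLD and not entry.get("approver"):
            exceptions.append("large entry missing approver")
        return exceptions

    # In a real-time setup this check would run as each entry hits the ledger,
    # routing exceptions straight to internal audit rather than waiting for a periodic review.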

Data analytics is now in the defining and ideation stages, learning which types of data are needed and where analytics can potentially be applied to facilitate compliance and internal audit, said Marc Maramag, director of anti-money laundering compliance at First Republic Bank. Until that discipline has fully matured, he doesn’t expect AI tools to have much impact, but once it has, he believes the impact will be significant.

AGILE INTERNAL AUDIT

At the CW AI, technology, and compliance summit, Neil White, principal of Deloitte, outlined the “Have to Haves” and the “Want to Haves” of internal audit.
Have to Haves:
• Outcome-driven mindset aligned to efficiency, cost savings, and value-driven
• Decisions made with regulatory requirements, Internal Audit mission, and their business partners in mind
• Initial agreement on “have to haves”
• Defining project’s value – balance value preservation (assurance) and value creation (advisory)
• Identify key stakeholders/business partners (audit committee, executive management, business unit leaders, field management)
Want to Haves:
• Variability in how you meet requirements
• Frequent and concise communications
• Issue, risk, action, insight tied to “so what”
• Iterative plans and process at every stage (planning, fieldwork, reporting)
• Initial sprint defines remaining sprints
• What is good enough to meet the needs!
Source: Neil White, Principal, Deloitte

Some institutions have begun to use machine learning to create a regulatory database and a matrix that enables them to see where certain regulations would apply to vertical lines of business, Maramag said. This also helps them identify related controls that will mitigate the risk of non-compliance with regulations. That is boosting business leaders’ confidence to pursue expansion in certain products and services while recognizing areas of risk that their organizations need to consider if they venture into particular geographic regions, he said.
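
To make the idea concrete, such a matrix can be pictured as a mapping from each regulation to the business lines it touches and the controls that mitigate it. The regulations, business lines, and controls below are invented placeholders, not any institution’s actual mappings:

    # Illustrative regulatory matrix: regulation -> affected business lines and mitigating controls.
    # All names here are placeholders, not real mappings.
    REG_MATRIX = {
        "AML/KYC": {
            "business_lines": ["retail banking", "private wealth"],
            "controls": ["customer due diligence", "transaction monitoring"],
        },
        "GDPR": {
            "business_lines": ["marketing", "online banking"],
            "controls": ["consent management", "data retention policy"],
        },
    }

    def regulations_for(business_line):
        """Return the regulations that apply to a business line and the controls that mitigate each."""
        return {
            reg: info["controls"]
            for reg, info in REG_MATRIX.items()
            if business_line in info["business_lines"]
        }

    print(regulations_for("retail banking"))
    # {'AML/KYC': ['customer due diligence', 'transaction monitoring']}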

One reason for excitement at the promise AI holds for compliance and audit professionals is the expectation that the technology will be able to keep up with fraudsters’ ever-evolving schemes to thwart controls and other anti-crime measures. Adaptive and self-learning analytics are AI models that update based on real-time behavior, flagging anomalies in what they are seeing relative to the behaviors they were trained on. Years ago, by contrast, models were built on historical data under the assumption that future threats would resemble past ones.

“The latest technologies against financial crime are monitoring anomalies in a stream of transactions, understanding what’s different and then making sure the model is reacting to that,” Zoldi said. “And that’s how models in the areas of cyber-security and fraud react to changes where [criminals] are trying to circumvent the AI models that are there to protect those financial streams.”
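
A minimal sketch of that adaptive idea is a detector whose baseline keeps updating as new transactions stream in, so that “what’s different” is always measured against recent behavior. The window size and threshold below are assumptions chosen for illustration, not FICO’s method:

    from collections import deque
    from statistics import mean, stdev

    class AdaptiveAnomalyDetector:
        """Flags transactions that deviate sharply from a rolling baseline that updates in real time."""

        def __init__(self, window=500, z_threshold=4.0):
            self.history = deque(maxlen=window)   # recent amounts form the adaptive baseline
            self.z_threshold = z_threshold

        def score(self, amount):
            """Return True if the amount looks anomalous relative to recent behavior, then learn from it."""
            is_anomaly = False
            if len(self.history) >= 10:           # wait until there is a minimal baseline
                mu, sigma = mean(self.history), stdev(self.history)
                if sigma > 0 and abs(amount - mu) / sigma > self.z_threshold:
                    is_anomaly = True
            self.history.append(amount)           # the baseline adapts even as the model flags
            return is_anomaly

    detector = AdaptiveAnomalyDetector()
    stream = [120.0, 95.5, 130.0, 88.0, 110.0, 102.5, 97.0, 125.0, 118.0, 90.0, 25_000.0]
    for amount in stream:
        if detector.score(amount):
            print(f"anomalous transaction: {amount}")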

Some speakers said they were enthusiastic about AI’s ability to help identify unknown unknowns such as improper relationships between employees and others that organizations otherwise would never know existed. Rather than replacing a rules-based approach, machine learning adds another layer of insight to augment existing controls with behavioral analytics, Kal Ghadban, practice director for analytics and data science consulting at CaseWare Analytics, said in a breakout session presentation. “It finds indirect relationships between parties who may not be who they say they are and other actors known as criminals, and [clarifies] those relationships based on the fraudulent transactions, the anomalies [revealed by the data],” he said.
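
One way to picture that kind of indirect-relationship analysis is a simple graph search over transaction counterparties, walking from a party of interest toward known bad actors through the intermediaries between them. The parties and links below are invented for illustration:

    from collections import deque

    # Illustrative relationship graph built from transaction counterparties (all names invented).
    links = [
        ("employee_A", "vendor_X"),
        ("vendor_X", "shell_co_1"),
        ("shell_co_1", "known_fraudster"),
        ("employee_B", "vendor_Y"),
    ]

    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    def indirect_path(start, target):
        """Breadth-first search for a chain of intermediaries linking two parties, if one exists."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path
            for neighbor in graph.get(path[-1], ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(path + [neighbor])
        return None

    print(indirect_path("employee_A", "known_fraudster"))
    # ['employee_A', 'vendor_X', 'shell_co_1', 'known_fraudster']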

Panelists urged attendees not to delay getting started with new technology, explaining that training AI requires extensive lead time to be built into projects. That allows for experimentation and gives data scientists time to fully grasp the significance of the data they are capturing. “Don’t look for the perfect data set,” said Will Bible, a partner at Deloitte & Touche who leads its audit innovation initiative. “You don’t need the grand solution. Get something that you can experiment with and learn.”

Joseph Lodato, global head of compliance technology and surveillance at Guggenheim Partners, said his great hope is that AI will drive down the number of false positives that are turned up through compliance officers’ data surveillance programs. That would improve compliance’s ability to focus on legitimate behavioral irregularities that could be red flags for misconduct.
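
That hope is commonly pursued by scoring alerts with a model trained on how past alerts were ultimately dispositioned, so analysts work the highest-risk items first. The sketch below assumes scikit-learn and made-up alert features; it illustrates the general approach, not Guggenheim’s program:

    # Sketch of model-based alert triage to reduce false positives; data and features are invented.
    from sklearn.linear_model import LogisticRegression

    # Each historical alert: [amount z-score, counterparty risk, after-hours flag]; label 1 = confirmed issue.
    X_train = [[0.2, 0.1, 0], [3.5, 0.9, 1], [0.4, 0.2, 0], [2.8, 0.7, 1]]
    y_train = [0, 1, 0, 1]

    model = LogisticRegression().fit(X_train, y_train)

    new_alerts = [[0.3, 0.15, 0], [3.1, 0.8, 1]]
    scores = model.predict_proba(new_alerts)[:, 1]   # probability each alert is a genuine issue

    # Route only the riskiest alerts to human review; the cutoff would be tuned to the program's risk appetite.
    for alert, score in zip(new_alerts, scores):
        print(alert, "escalate for review" if score > 0.5 else "deprioritize with audit trail")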

The growing regulatory push for data privacy and individuals’ rights over their data, particularly in the European Union, is adding complexity to AI considerations because many companies have yet to put data governance processes in place. There are also ethical considerations around AI tools, which, by their very nature, are able to connect disparate bits of data about individuals and create profiles that can have disturbing implications. Citing the counter-weighting measures that FICO has taken to prevent unintended discrimination in credit-score calculations based on ethnic or geographic information, Zoldi said, “We can take certain steps in the development of the AI models to make sure those elements are not available for the machine to get access to or to look at an algorithm to make sure it’s not inadvertently discriminating based on what it’s learning in other parts of the data.”
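
The kind of safeguard Zoldi described is often implemented by withholding sensitive attributes from the features a model can see and then checking whether the remaining features act as proxies for them. This sketch, using pandas and invented column names, only illustrates that idea and is not FICO’s method:

    import pandas as pd

    # Invented applicant data; column names and values are illustrative only.
    df = pd.DataFrame({
        "income": [52, 61, 48, 75, 58],
        "utilization": [0.8, 0.4, 0.9, 0.2, 0.5],
        "zip_code_group": [1, 2, 1, 3, 2],   # stands in for a sensitive geographic attribute
    })

    SENSITIVE = ["zip_code_group"]

    # 1. Keep sensitive columns out of the feature set the model is trained on.
    features = df.drop(columns=SENSITIVE)

    # 2. Check whether any remaining feature strongly tracks the sensitive attribute,
    #    which would let a model discriminate indirectly through a proxy.
    for col in features.columns:
        corr = features[col].corr(df["zip_code_group"])
        print(f"{col}: correlation with sensitive attribute = {corr:.2f}")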

Throughout the day, panelists emphasized that AI tools, rather than displacing workers in any one department any time soon, will automate the most repetitive and tedious portions of many jobs, freeing compliance and audit staff to think proactively about combating crime. Bible expects AI to provide an opportunity to “pivot our people to focus on things like emerging risks so that you’re constantly staying in front of those types of issues as they develop and really getting the value out of those people as opposed to having them perform those routine tasks.”