A recent roundtable on managing resources while confronting regulatory change, sponsored by Wolters Kluwer and organized by the International Compliance Association (ICA), looked at the importance of balancing machine learning (ML) and artificial intelligence (AI) with human intelligence and intervention.

ICA

The International Compliance Association (ICA) is a professional membership and awarding body. ICA is the leading global provider of professional, certificated qualifications in anti-money laundering; governance, risk, and compliance; and financial crime prevention. ICA members are recognized globally for their commitment to best compliance practice and an enhanced professional reputation. To find out more, visit the ICA website.

In response to a poll on AI use in compliance management, 44 percent of delegates said they felt their companies underutilized AI. Another 37 percent said AI wasn’t used at all (to their knowledge). Only 15 percent felt there was the right combination of human and artificial intelligence, and the remaining 4 percent believed AI was overutilized.

This result led into the roundtable’s main discussion topic: Is there a “right” balance between AI and human intelligence within an organization? If so, what is it and how and why might it change?

Responses to these questions varied. Importantly, it quickly became clear there was uncertainty about the definition of AI itself. The concept is often muddied by adjacent terms frequently treated as interchangeable, such as automation and machine learning. ICA moderator Jonathan Bowdler defined AI as “using computers to mimic the human brain,” a capability that seemed largely outside the participants’ direct experience.

Instead, delegates viewed AI in simpler terms: technology that could be given set parameters, often with binary “pass” or “fail” criteria, to sift through typically huge quantities of data and, for example, flag anything indicating money laundering risk or identifying possible politically exposed persons.
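To make that idea concrete, here is a minimal sketch of what such rule-based screening might look like. The field names, threshold, country codes, and watchlist entries are all hypothetical placeholders, not real criteria:

```python
# Minimal sketch of binary pass/fail screening, as described above.
# All field names, thresholds, and list entries are hypothetical.

HIGH_RISK_COUNTRIES = {"XX", "YY"}          # placeholder country codes
PEP_WATCHLIST = {"jane doe", "john roe"}    # placeholder watchlist names
CASH_THRESHOLD = 10_000                     # example reporting threshold

def screen_transaction(txn: dict) -> list[str]:
    """Apply simple pass/fail rules; return the reasons a transaction is flagged."""
    flags = []
    if txn.get("amount", 0) >= CASH_THRESHOLD and txn.get("method") == "cash":
        flags.append("large cash transaction")
    if txn.get("counterparty_country") in HIGH_RISK_COUNTRIES:
        flags.append("high-risk jurisdiction")
    if txn.get("counterparty_name", "").lower() in PEP_WATCHLIST:
        flags.append("possible politically exposed person")
    return flags

# Anything flagged is routed to a human reviewer, not auto-rejected.
sample = {"amount": 12_500, "method": "cash",
          "counterparty_name": "Jane Doe", "counterparty_country": "XX"}
for reason in screen_transaction(sample):
    print("FLAG:", reason)
```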

One attendee mentioned their company had a more sophisticated system using voice analytics, which would monitor an ongoing call and raise a prompt to the employee if it recognized he or she had not mentioned something required for compliance. This practice, they reported, was a useful prevention tool: instead of the recorded call being checked afterward and feedback given following a breach, the technology would catch and raise the issue during the call so the staff member could avoid the risk.
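As a rough illustration of the mechanism, the sketch below approximates the idea with simple keyword matching on a running transcript. The real system described used voice analytics; the transcription step is assumed here, and the required disclosures are invented examples:

```python
# Crude sketch of an in-call compliance prompt, assuming the audio has
# already been transcribed to text. Disclosures are hypothetical examples.

REQUIRED_DISCLOSURES = {
    "recorded line": "this call is recorded",
    "risk warning": "your capital is at risk",
}

def outstanding_disclosures(transcript_so_far: str) -> list[str]:
    """Return prompts for disclosures not yet heard in the live transcript."""
    text = transcript_so_far.lower()
    return [f"Reminder: mention the {name} disclosure"
            for name, phrase in REQUIRED_DISCLOSURES.items()
            if phrase not in text]

# The agent is prompted while the call is still in progress, rather than
# receiving feedback after a breach has already occurred.
print(outstanding_disclosures("Hello, this call is recorded on a secure line."))
# -> ['Reminder: mention the risk warning disclosure']
```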

Many participants reported, at least in their experience of AI at its current level, an ongoing need for the results of any AI or ML process to be reviewed manually by a human, whether to check for mistakes or to make the final decisions. The general view was that the technologies should sort through data and then advise and make recommendations for human users to check, rather than giving the computer systems complete power to approve or reject something.

This response illustrated a consistent feeling, summarized by Bowdler and seconded by delegates, of hesitancy to trust these digital solutions and cede control to them.

Coexistence between humans, AI

One delegate mentioned hearing of a company that constantly has to hire more people to process large data sets on its systems. The firm experiences high turnover, as staff often leave for more engaging work.

Such a system is unsustainable in the long run. One solution would be installing a trusted, reliable AI program to take over mundane, repetitive tasks and allow staff to focus on value-add activities.

On the other side of that, of course, are the worries of many in the industry about the stability of their own working future, not to mention the other jobs that might fall under the AI-replaceable bracket as the technology improves. Again, this is an important area where education and transparency are integral to demonstrating commitment to staff members and defusing their concerns.

One of the consistent themes of the roundtable was that many companies are still far from introducing technology that closely mimics human intelligence, as AI was defined earlier in the session. The technology delegates currently use is restricted to simpler parameters, such as tick boxes or keyword matching applied to data sets.

Delegates raised concerns about how well this approach works, starting with the quality of the data being fed to the system in the first place. If the data is out of date, in an incompatible format, or missing some of the information needed by those parameters, the technology cannot work effectively, and false results are far more likely.
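A simple sketch of the kind of pre-screening data-quality check this implies might look as follows; the field names and the one-year staleness window are illustrative assumptions, not a standard:

```python
from datetime import date, timedelta

# Sketch of a data-quality check run before records reach the screening
# system. Field names and the staleness window are illustrative only.

REQUIRED_FIELDS = ("name", "date_of_birth", "country", "last_reviewed")
STALE_AFTER = timedelta(days=365)

def data_quality_issues(record: dict) -> list[str]:
    """List the problems that would make screening results unreliable."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    reviewed = record.get("last_reviewed")
    if isinstance(reviewed, date) and date.today() - reviewed > STALE_AFTER:
        issues.append("record out of date")
    elif reviewed is not None and not isinstance(reviewed, date):
        issues.append("last_reviewed in incompatible format")
    return issues

record = {"name": "Acme Ltd", "country": "GB", "last_reviewed": "2019-01-01"}
print(data_quality_issues(record))
# -> ['missing field: date_of_birth', 'last_reviewed in incompatible format']
```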

It became clear in conversation that business models need to change. While companies might look to AI to cut time-consuming activities, expensive new systems applied to bad data, or handed to untrained and demoralized staff to operate, will not deliver the successes those companies may be hoping for.

Likewise, there is the issue of biases being built into ML and AI systems. If the sample data used to train a system’s algorithms is flawed or features unconscious bias, the information that comes back out is going to reflect those problems.

These accidental biases can be mitigated by putting thorough test and review processes in place to identify and counteract such instances. Bias can be further reduced by ensuring diversity among the people designing the algorithms, and by ensuring their knowledge extends beyond the workings of the technology to the content for which it is being designed.
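One such test could be as simple as comparing how often the system flags cases across demographic groups in a review sample. In the sketch below, the group labels, sample data, and disparity threshold are all hypothetical assumptions:

```python
from collections import defaultdict

# Illustrative bias test: compare flag rates across groups in a labeled
# review sample. Groups, data, and the 1.25x threshold are hypothetical.

def flag_rates_by_group(cases: list[dict]) -> dict[str, float]:
    """Compute the share of flagged cases within each group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for case in cases:
        totals[case["group"]] += 1
        flagged[case["group"]] += int(case["flagged"])
    return {g: flagged[g] / totals[g] for g in totals}

cases = [
    {"group": "A", "flagged": True},  {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},  {"group": "B", "flagged": True},
]
rates = flag_rates_by_group(cases)
if max(rates.values()) > 1.25 * min(rates.values()):
    print("Review needed: flag rates differ materially across groups:", rates)
```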

The approach of regulators was also discussed. In the United Kingdom, for example, the Financial Conduct Authority is taking a more progressive stance than many of its peers, stating it wants “consumers to benefit from digital innovation and competition. This includes data-based and algorithmic innovation.”

There is excitement about the idea of using AI and the possibilities it offers. Trust in the technology will be a major hurdle to overcome if we are going to have a healthy balance of AI and human intelligence in the future.

The International Compliance Association is a sister company to Compliance Week. Both organizations are under the umbrella of Wilmington plc.