The financial services industry is at the cutting edge of adopting artificial intelligence (AI) and machine learning (ML) tools. Regulators want to understand how these technologies are being used—or misused.

Five federal banking regulators—the Federal Reserve, Consumer Financial Protection Bureau, Federal Deposit Insurance Corp., National Credit Union Administration, and Office of the Comptroller of the Currency—issued a request for information (RFI) late last month regarding the use of AI and ML tools by banks and other financial service providers like FinTechs.

The RFI laid out the ways regulators understand the financial industry is using these technologies. The request then asked a series of questions seeking to better understand how the tools work, whether they accomplish their objectives, and whether known or even unknown biases have crept into these models and negatively affected their output and results.

AI tools are able to react to shifting patterns in real time, providing insight into changes in the financial marketplace, customer behavior, and other forces affecting a financial institution and its customers. ML tools can find more efficient ways to perform tasks that require analyzing vast amounts of data and can surface insights that may previously have gone undetected.

“Regulators are trying to figure out how to deal with all these new technologies,” says Kieran Beer, chief analyst for the Association of Certified Anti-Money Laundering Specialists (ACAMS), a global anti-financial crime membership organization. “Part of them figuring it out is maybe not them leading industry as much as learning what industry is doing.”

Beer says regulators have indicated to ACAMS members that they are concerned about where the technology could go wrong and have stressed the importance of transparency.

“They want to understand the systems and how they work,” he says. “They want to understand what kinds of biases might be missed and how [the biases] might skew the results.”

Perhaps only one other use of AI tools—by law enforcement for surveillance and in criminal investigations—has regulators and government officials more concerned about the potential for misuse and abuse.

The European Commission on Wednesday proposed legislation that would rein in the use of facial recognition by police and place limits on the use of AI in certain “high-risk” cases that include critical infrastructure, college admissions, and loan applications.

Such legislation has not yet gained much traction in the United States. However, if progressive legislators who view AI as a threat to civil liberties and privacy can find common ground with conservative legislators looking to break up Big Tech companies, legislation that would place limits on the use of AI tools might find fertile ground.

What regulators want to know

In the RFI, federal banking regulators indicated they believe financial institutions are already using AI and ML tools to:

  • Flag unusual transactions to detect fraud and money laundering;
  • Personalize customer services, which on the low end means automating routine customer interactions, while on the high end can mean using algorithms to help tailor a financial institution’s products and services to individual customers;
  • Enhance existing methods for conducting credit checks;
  • Verify and enhance traditional methods for credit monitoring, payment collections, loan restructuring, recovery, and more;
  • Analyze unstructured data to obtain insight from large volumes of text; and
  • Enhance cyber-security, using the tools to detect threats, provide real-time investigation of potential attacks, and support threat mitigation.

The RFI was followed nearly two weeks later by another request from federal banking regulators focused on financial institutions’ risk management practices with regard to complying with the Bank Secrecy Act (BSA).

“Regardless of how a BSA/AML system is characterized, sound risk management is important, and banks may use the principles discussed in the [model risk management guidance] to establish, implement, and maintain their risk management framework,” the second request said.

The two requests are related, in that one addresses how the tools are used and how they work, while the other asks whether they comply with the BSA.

“I think the message is loud and clear: The use of AI and machine learning in the financial sector is inevitable, and we collectively have to get our act together,” says Gary Shiffman, CEO of Giant Oak, a company that applies AI and ML tools to the regulatory and security needs of business and government.

In the first RFI on AI and ML, regulators flagged three risk areas that concern them about the use of these tools in financial services.

First is explainability. Regulators will not accept a vendor’s explanation of how a technology works. They want to hear it from the financial institution itself and be assured the institution knows not only what the tool does but how it arrived at its results.

“If the output doesn’t make sense, don’t use it,” Shiffman says. “It has to pass the common-sense test.”

Second, the results of any data analysis need to be measured empirically, with constant human input. Regulators are concerned AI and ML tools may not respond appropriately when the data sets they are analyzing undergo a rapid change, or that pre-existing biases in the financial industry may be baked into the algorithms being used to analyze the data, which would taint the results.
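One way institutions keep that human measurement in the loop—as an illustration, not anything the RFI prescribes—is to monitor whether the data feeding a model still resembles the data it was built on, for example with a population stability index (PSI) check. The sketch below is hypothetical: it uses synthetic transaction amounts and the conventional PSI threshold of 0.25 as an escalation point.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's distribution at model-development time ('expected')
    with its current production distribution ('actual'). Values above roughly
    0.25 are conventionally read as a major shift."""
    # Bin edges come from the development data's quantiles
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    expected_counts, _ = np.histogram(expected, bins=cuts)
    # Clip production values into the development range so outliers land in edge bins
    actual_counts, _ = np.histogram(np.clip(actual, cuts[0], cuts[-1]), bins=cuts)
    expected_pct = np.clip(expected_counts / len(expected), 1e-6, None)
    actual_pct = np.clip(actual_counts / len(actual), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical example: transaction amounts creep upward after deployment
rng = np.random.default_rng(0)
dev_amounts = rng.lognormal(mean=3.0, sigma=1.0, size=10_000)
prod_amounts = rng.lognormal(mean=3.5, sigma=1.2, size=10_000)

psi = population_stability_index(dev_amounts, prod_amounts)
if psi > 0.25:
    print(f"PSI = {psi:.2f}: significant input shift, route model for human review")
```

In practice a check like this would run per feature and feed a documented review process rather than a print statement, but it captures the point: the measurement is empirical, and a human decides what happens next.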

AI tools in particular have the potential to identify fraud that was not apparent before by finding new patterns and making new connections, Beer says. As beneficial as the tools may be, Beer says, “There has to be a human involved in the process all along the way.”

The third concern is that AI tools can update their own algorithms in response to changes in the data, sometimes without human interaction, a practice known as dynamic updating.

“Dynamic updating techniques can produce changes that range from minor adjustments to existing elements of a model to the introduction of entirely new elements,” the RFI said. Again, human interaction with and oversight of the tool are crucial so the results are both understandable and repeatable.
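As a rough illustration of what dynamic updating looks like in code—a minimal sketch assuming scikit-learn and synthetic data, not a regulator-endorsed pattern—an online learner can refresh itself on each new batch of labeled transactions, while a fixed benchmark set gives human reviewers something stable to judge each refreshed model against:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(random_state=0)   # linear model trained incrementally
classes = np.array([0, 1])              # 0 = routine, 1 = suspicious

def next_labeled_batch(n=500):
    """Hypothetical stand-in for a stream of newly labeled transactions."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

benchmark_X, benchmark_y = next_labeled_batch(2_000)   # fixed, human-reviewed benchmark

for batch in range(5):
    X, y = next_labeled_batch()
    model.partial_fit(X, y, classes=classes)            # the "dynamic update" step
    accuracy = model.score(benchmark_X, benchmark_y)
    print(f"batch {batch}: benchmark accuracy {accuracy:.3f}")
    if accuracy < 0.80:
        print("Refreshed model fell below threshold: pause updates for human review")
```

The benchmark check is where the oversight lives: each self-update is measured against data reviewers understand before the refreshed model keeps running.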

Current status of the ‘feedback loop’

At the moment, the interaction between regulators and the financial industry is very robust when it comes to the use of AI and ML tools to combat classic credit card fraud and consumer authentication (know your customer), says Chris Merz, vice president, security and decision products at Mastercard.

“It’s a very collaborative environment,” Merz says, where the “feedback loops” between regulators and industry “are very established.” The regulators will ask industry leaders in the use of AI and ML tools, ‘Hey, would your system have caught this problem? Would it have recognized this pattern?’ Then financial institutions can run the question through their model, measure how it performs, and report back the results to regulators.
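That kind of replay can be sketched simply. The example below is hypothetical: `toy_score` stands in for an institution’s real scoring model, and the scenario is a made-up set of transactions a regulator might describe.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Transaction:
    amount: float
    channel: str
    hour_of_day: int

def replay_scenario(transactions: Sequence[Transaction],
                    score_fn: Callable[[Transaction], float],
                    alert_threshold: float = 0.8) -> dict:
    """Run a regulator-described scenario back through the institution's own
    scoring model and summarize what would have been flagged."""
    scores = [score_fn(t) for t in transactions]
    flagged = sum(s >= alert_threshold for s in scores)
    return {
        "transactions": len(transactions),
        "flagged": flagged,
        "detection_rate": flagged / len(transactions) if transactions else 0.0,
    }

# Hypothetical stand-in for the production model's scoring call
def toy_score(t: Transaction) -> float:
    return 0.95 if t.amount > 9_000 and t.hour_of_day < 6 else 0.10

scenario = [
    Transaction(9_500.0, "wire", 3),
    Transaction(120.0, "card_present", 14),
    Transaction(9_800.0, "wire", 2),
]
print(replay_scenario(scenario, toy_score))
```

The summary is what goes back to regulators: how many of the scenario’s transactions the model would have caught, and at what threshold.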

Merz, who leads several teams of data scientists that provide an AI layer to Mastercard’s security and decision products, says in the anti-money laundering (AML) space, the feedback loop is less robust.

For one, money laundering is significantly more difficult for the models to detect because the feedback loop in AML is layered and weak. Combating traditional credit card fraud is about finding anomalies, typically with a known starting point (i.e., a charge that is known to be fraudulent). Fighting money laundering with AI and ML is more about finding patterns of activity that do not match up with those of peer customers.

“We’re trying to figure out where payment card activity on, say, one merchant, is behaving differently than activity by other merchants,” Merz says. Money laundering activity typically occurs over a longer period of time than credit card fraud. Since the patterns are less concrete, the results are often less certain.
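A bare-bones version of that peer comparison—a sketch built on assumed, synthetic features such as average ticket size, chargeback rate, and cross-border share, not Mastercard’s actual method—can be expressed as a distance from the peer-group norm:

```python
import numpy as np

def peer_deviation(merchant_features: np.ndarray, peer_features: np.ndarray) -> float:
    """Score how far one merchant's activity profile sits from its peer group.
    Each row of peer_features is a comparable merchant; columns are aggregates
    over the review window (e.g., average ticket size, chargeback rate,
    share of cross-border volume)."""
    mean = peer_features.mean(axis=0)
    std = peer_features.std(axis=0) + 1e-9          # guard against zero variance
    z_scores = (merchant_features - mean) / std
    return float(np.sqrt(np.mean(z_scores ** 2)))   # overall distance from the peer norm

rng = np.random.default_rng(2)
# Synthetic peer group: [avg ticket, chargeback rate, cross-border share]
peers = rng.normal(loc=[50.0, 0.010, 0.05], scale=[10.0, 0.005, 0.02], size=(200, 3))
merchant = np.array([52.0, 0.012, 0.45])            # unusually heavy cross-border activity

score = peer_deviation(merchant, peers)
print(f"peer deviation score: {score:.1f}")          # higher scores go to analysts first
```

Because there is rarely a labeled "known bad" starting point, scores like this rank merchants for analyst review rather than declare laundering outright, which is one reason the results are less certain.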

Despite the weaker feedback loop, Mastercard’s acquiring AML solution has flagged scenarios for a major acquirer, including one event tied to human trafficking, Merz says.

Editor’s note: The comment period on the AI request for information was extended to July 1.