The cutting-edge question was asked by Federal Reserve Governor Lael Brainard: “What Are We Learning about Artificial Intelligence in Financial Services?”

Brainard posed the question during a speech last week at a FinTech conference in Philadelphia.

“Although it is still early days, the application of AI in financial services is potentially quite important and merits our attention,” she said. “We are working across the Federal Reserve System to take a deliberate approach to understanding the potential implications of AI for financial services.”

Brainard focused her remarks on the branch of artificial intelligence known as machine learning, “the basis of many recent advances and commercial applications.” Modern machine learning applies and refines, or “trains,” a series of algorithms on a large data set, optimizing iteratively as it learns, in order to identify patterns and make predictions for new data.
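That description can be made concrete with a toy example. The following Python sketch is not drawn from the speech, and its data and learning rate are invented for illustration; it trains a one-variable linear model by iteratively adjusting its parameters to shrink prediction error, the same optimize-as-you-learn loop Brainard describes:

```python
import numpy as np

# Toy data: inputs x and noisy targets y following y ≈ 3x + 2 (invented for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=200)

# Model parameters, refined iteratively ("trained") to minimize squared error.
w, b = 0.0, 0.0
learning_rate = 0.01

for step in range(1000):
    pred = w * x + b
    error = pred - y
    # Gradient of mean squared error with respect to w and b.
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

# The learned pattern can now make predictions for new data.
print(f"learned w={w:.2f}, b={b:.2f}")        # approaches w≈3, b≈2
print(f"prediction for x=4: {w * 4 + b:.2f}")
```

Real systems train far larger models on far larger data sets, but the loop is the same: predict, measure the error, adjust, repeat.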

“Due to an early commitment to open-source principles, AI algorithms from some of the largest companies are available to even nascent start-ups,” she explained. “As for processing power, continuing innovation by public cloud providers means that with only a laptop and a credit card, it is possible to tap into some of the world's most powerful computing systems by paying only for usage time, without having to build out substantial hardware infrastructure. Vendors have made it easy to use these tools for even small businesses and non-technology firms, including in the financial sector.”

With emerging technology comes data for it to crunch. Whereas in 2013 it was estimated that 90 percent of the world's data had been created in the prior two years, by 2016, IBM estimated that 90 percent of global data had been created in the prior year alone.

“The pace and ubiquity of AI innovation have surprised even experts,” Brainard said. “The best AI result on a popular image recognition challenge improved from a 26 percent error rate to 3.5 percent in just four years. That is lower than the human error rate of 5 percent. In one study, a combination AI-human approach brought the error rate down even further, to 0.5 percent.”

As the technology rapidly evolves and improves, it is no surprise that many financial services firms “are devoting so much money, attention, and time to developing and using AI approaches,” Brainard said. There is particular interest in at least five AI-powered capabilities:

  • Superior pattern recognition, such as identifying relationships among variables that are not revealed by traditional modeling (a brief sketch of this capability follows the list);
  • Cost efficiencies, where AI approaches may arrive at outcomes more cheaply with no reduction in performance;
  • Greater accuracy in processing, compared with approaches that involve more human input and higher degrees of operator error;
  • Better predictive power than more traditional approaches; and
  • Improved capacity to accommodate very large, less-structured data sets and to process that data more efficiently and effectively.
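To illustrate the first capability, consider a hedged sketch (not from the speech) in which the outcome depends on an interaction between two variables. A traditional additive linear model misses the relationship entirely, while a decision tree recovers it; the data-generating rule and model choices below are invented for the example:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Invented example: the outcome depends on an interaction (x1 * x2),
# a relationship a purely additive linear model cannot represent.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 2))
y = X[:, 0] * X[:, 1] + rng.normal(0, 0.05, size=2000)

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

linear = LinearRegression().fit(X_train, y_train)
tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_train, y_train)

# R^2 near zero for the linear model, much higher for the tree,
# because the tree can carve the space into interaction-aware regions.
print(f"linear R^2: {linear.score(X_test, y_test):.2f}")
print(f"tree   R^2: {tree.score(X_test, y_test):.2f}")
```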

Brainard rhetorically asked: “What do those capabilities mean in terms of how we bank?”

The Financial Stability Board has highlighted areas where AI could affect banking, she explained. Customer-facing uses could combine expanded consumer data sets with new algorithms to assess credit quality or price insurance policies. In another example, chatbots could provide help and even financial advice to consumers, sparing them the wait to speak with a live operator.

There is also the potential for strengthening back-office operations with advanced models for capital optimization, model risk management, stress testing, and market impact analysis. AI approaches could similarly be applied to trading and investment strategies, from identifying new signals on price movements to using past trading behavior to anticipate a client's next order.

There are also likely to be AI-based advancements in compliance and risk mitigation by banks, Brainard said. These solutions are already being used by some firms in areas like fraud detection, capital optimization, and portfolio management.

Current regulatory and supervisory approaches

“The potential breadth and power of these new AI applications inevitably raise questions about potential risks to bank safety and soundness, consumer protection, or the financial system,” Brainard said. “The question, then, is how should we approach regulation and supervision? It is incumbent on regulators to review the potential consequences of AI, including the possible risks, and take a balanced view about its use by supervised firms.”

Regulation and supervision, she explained, need to be thoughtfully designed to ensure risks are appropriately mitigated, “but do not stand in the way of responsible innovations that might expand access and convenience for consumers and small businesses, or bring greater efficiency, risk detection, and accuracy.”

Likewise, it is important “not to drive responsible innovation away from supervised institutions and toward less regulated and more opaque spaces in the financial system.”

In Brainard’s view, “existing regulatory and supervisory guardrails are a good place to start” as prudential regulators assess appropriate approaches.

The National Science and Technology Council, in an extensive study addressing regulatory activity generally, concluded that if an AI-related risk “falls within the bounds of an existing regulatory regime, the policy discussion should start by considering whether the existing regulations already adequately address the risk, or whether they need to be adapted to the addition of AI.” A recent report by the Treasury Department reached a similar conclusion regarding financial services.

With respect to banking services, a few generally applicable laws, regulations, guidance, and supervisory approaches are already relevant to the use of AI tools. The Federal Reserve's “Guidance on Model Risk Management” underscores “effective challenge” of models by a “second set of eyes”—unbiased, qualified individuals separated from the model's development, implementation, and use. The guidance calls for sound, independent review of a firm's own models to confirm they are fit for purpose and functioning as intended.

“When our own examiners evaluate model risk, they generally begin with an evaluation of the processes firms have for developing and reviewing models, as well as the response to any shortcomings in a model or the ability to review it,” Brainard said.

Importantly, the guidance also “recognizes that not all aspects of a model may be fully transparent,” she added. “Banks can use such models, but the guidance highlights the importance of using other tools to mitigate the risk of an unexplained or opaque model.” Such risks may be offset by external mitigating controls, such as “circuit-breakers” or other mechanisms.
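Brainard did not specify what such mechanisms look like. One minimal sketch, under the assumption of an opaque model that emits a numeric score, is a wrapper that accepts the model's output only within pre-set bounds, and otherwise substitutes a conservative default and flags the case for human review; the class, thresholds, and case identifiers below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CircuitBreaker:
    """External control around an opaque model: accept its output only
    within pre-set bounds, otherwise fall back and flag for review.
    All thresholds here are hypothetical, for illustration only."""
    lower: float = 0.0
    upper: float = 1.0
    fallback: float = 0.5
    flagged: list = field(default_factory=list)

    def guard(self, case_id: str, model_score: float) -> float:
        if self.lower <= model_score <= self.upper:
            return model_score
        # Out-of-bounds output: use the conservative default and
        # queue the case for a qualified human reviewer.
        self.flagged.append((case_id, model_score))
        return self.fallback

# Usage: wrap any black-box scoring function's output.
breaker = CircuitBreaker(lower=0.0, upper=1.0, fallback=0.5)
score = breaker.guard("loan-123", model_score=1.7)  # tripped -> 0.5, flagged
print(score, breaker.flagged)
```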

Guidance on vendor risk management, along with guidance on technology service providers, highlights considerations firms should weigh when outsourcing business functions or activities. These can be expected to apply as well to AI-based tools or services that are externally sourced.

“The vast majority of the banks that we supervise will have to rely on the expertise, data, and off-the-shelf AI tools of non-bank vendors to take advantage of AI-powered processes,” Brainard said. “Whether these tools are chatbots, anti-money-laundering/know your customer compliance products, or new credit evaluation tools, it seems likely that they would be classified as services to the bank.”

The vendor risk management guidance she referenced discusses best practices for supervised firms regarding due diligence, selection, and contracting processes in selecting an outside vendor. It also describes ways that firms can provide oversight and monitoring throughout the vendor relationship, as well as business-continuity and contingency considerations to weigh before terminating any such relationship.

Brainard stressed a risk-focused supervisory approach: “the level of scrutiny should be commensurate with the potential risk posed by the approach, tool, model, or process used.”

“Firms should apply more care and caution to a tool they use for major decisions or that could have a material impact on consumers, compliance, or safety and soundness,” she added.

An AI tool should be subject to appropriate controls, as with any other tool or process, including how it is used in practice and not just how it is built. “This is especially true for any new application that has not been fully tested in a variety of conditions,” Brainard said.

The Fed and its fellow regulators will expect firms “to apply robust analysis and prudent risk management and controls to AI tools,” as they do in other areas. For example, in the areas of fraud prevention and cyber-security, supervised institutions may need their own AI tools to identify and combat outside AI-powered threats.

The wide availability of AI's building blocks means that phishers and fraudsters have access to best-in-class technologies to build AI tools that are powerful and adaptable. Banks will likely need tools that are just as powerful and adaptable as the threats that they are designed to face, which likely entails some degree of opacity.

“In cases where large data sets and AI tools may be used for malevolent purposes, it may be that AI is the best tool to fight AI,” Brainard said.
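The speech names no particular technique, but one common way to build the kind of adaptable defense described above is unsupervised anomaly detection. A minimal sketch using scikit-learn's IsolationForest, with transaction features invented for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented transaction features: [amount, hour_of_day, distance_from_home_km].
rng = np.random.default_rng(2)
normal = np.column_stack([
    rng.lognormal(3, 0.5, 5000),      # typical purchase amounts
    rng.normal(14, 4, 5000) % 24,     # daytime-skewed hours
    rng.exponential(5, 5000),         # mostly near home
])

# An isolation forest learns what "usual" looks like; outliers are
# isolated in few splits and receive a prediction of -1.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[8000.0, 3.0, 4200.0]])  # large, 3 a.m., far from home
print(detector.predict(suspicious))  # [-1] -> flagged as anomalous
```

Because the detector models normal behavior rather than a fixed list of known scams, it can flag novel attack patterns, though, as Brainard notes, that adaptability comes with some opacity.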

As for the proverbial “black box,” the potential lack of explainability associated with some AI approaches, she explained that questions about what level of understanding a bank should have of its vendors' models are not uncommon in banking, given the need to balance risk management, on the one hand, against the protection of proprietary information, on the other.

“AI can introduce additional complexity because many tools and models develop analysis, arrive at conclusions, or recommend decisions that may be hard to explain,” Brainard explained. “Depending on what algorithms are used, it is possible that no one, including the algorithm's creators, can easily explain why the model generated the results that it did.” The challenge of explainability can translate into a higher level of uncertainty about the suitability of an AI approach.

So how does, or even can, a firm assess the use of an approach it might not fully understand?

“To a large degree, this will depend on the capacity in which AI is used and the risks presented,” Brainard said. One area where the risks may be particularly acute is consumer lending “where transparency is integral to avoiding discrimination and other unfair outcomes, as well as meeting disclosure obligations.”

AI may offer new consumer benefits, she added, but it is not immune from compliance with fair lending and other consumer protection laws.

“It should not be assumed that AI approaches are free of bias simply because they are automated and rely less on direct human intervention,” Brainard said. “Algorithms and models reflect the goals and perspectives of those who develop them as well as the data that trains them and, as a result, AI tools can reflect or ‘learn’ the biases of the society in which they were created.”

The Equal Credit Opportunity Act and the Fair Credit Reporting Act include requirements for creditors to provide notice of the factors involved in taking actions that are adverse or unfavorable for the consumer. Complying with these requirements, however, may mean finding a way to explain AI decisions.

Fortunately, Brainard said, the tech community is responding with important advances in developing “explainable” AI tools with a focus on expanding consumer access to credit.
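The speech does not describe those tools, but one simple, long-standing route to the notice requirement is to report the features that pushed an applicant's score down the most. Here is a hedged sketch using a plain logistic-regression scorer, whose per-feature contribution is just coefficient times value; the feature names and weights are invented:

```python
import numpy as np

# Invented model: a logistic credit-scoring model whose per-feature
# contribution is simply coefficient * standardized feature value.
features = ["payment_history", "utilization", "account_age", "recent_inquiries"]
coefs = np.array([1.2, -0.9, 0.4, -0.6])   # invented weights
intercept = -0.2

def adverse_action_reasons(x: np.ndarray, top_n: int = 2) -> list[str]:
    """Return the features that lowered this applicant's score the most,
    as candidate reasons for an ECOA/FCRA adverse-action notice."""
    contributions = coefs * x
    order = np.argsort(contributions)          # most negative first
    return [features[i] for i in order[:top_n] if contributions[i] < 0]

# A declined applicant with high utilization and many recent inquiries.
applicant = np.array([0.1, 1.8, 0.3, 2.0])
score = 1 / (1 + np.exp(-(intercept + coefs @ applicant)))
print(f"approval score: {score:.2f}")
print("reasons:", adverse_action_reasons(applicant))
```

More opaque models require more elaborate attribution methods, but the goal is the same: per-applicant reasons a notice can cite.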

“Perhaps one of the most important early lessons is that not all potential consequences are knowable now,” she concluded. “Firms should be continually vigilant for possible future problems.”