A recent survey of financial crime professionals at U.S. banks and nonbank financial institutions found that while three of every four companies had more anti-money laundering (AML) employees in 2023 compared to 2022, nearly all respondents said growing their department’s headcount alone won’t keep up with emerging risks.

The key to combating ever-increasing and complex financial crime attacks on their institutions, respondents to Nasdaq’s 2024 Global Financial Crime Report said, is to find and implement technology solutions that employ artificial intelligence (AI) and data analytics tools to detect fraud efficiently and effectively.

The size of the problem, of course, is almost incomprehensibly large. The report found that $3.1 trillion in illicit funds flowed through the financial system in 2023, with global fraud-related losses totaling more than $485 billion.

Financial crime threats of greatest concern were real-time/faster payments fraud (52 percent), money mule activity (47 percent), terrorist financing and drug trafficking (each 33 percent), government benefit fraud (32 percent), elder abuse/exploitation (27 percent), and consumer scams (24 percent), according to survey respondents.

Seventy-one percent of respondents said their organization spent more on AML technology solutions in 2023 compared to 2022, while 70 percent said they expect their organization to increase its spending on AI or machine learning tools in the next one to two years.

“We need a technology solution to the increase in volume,” one industry professional told Nasdaq researchers. “We can’t continue to hire.”

The report was based on responses from 209 AML professionals at bank and nonbank institutions in North America ranging from $10 billion to more than $500 billion in assets, as well as first-person interviews and analysis of industry data.

Most respondents have already deployed technological solutions to conduct fraud monitoring, including intelligent document processing (62 percent), natural language processing (50 percent), robotic process automation (44 percent), and machine learning (40 percent). Only one in four (26 percent) respondents reported deploying generative AI or large language model solutions to find financial crime.

Survey respondents said they would like to see regulators encourage technology innovation, such as AI and data-driven analytics. They’d like more guidance from regulators that is typology specific in areas like human trafficking, domestic terrorism, and other emerging AML threats. And they’d also like to see regulators launch outcomes-focused programs that reward AML solutions for their effectiveness.

Respondents also said they would like to see more collaboration and information sharing allowed between private-sector banks. This kind of collaboration is currently permitted through the PATRIOT Act’s safe harbor provision, Section 314(b), but a 2017 audit by the Office of Inspector General found that fewer than one in four eligible banks participated in the program.

The Treasury Department’s Financial Crimes Enforcement Network has also encouraged such collaboration, with limited success.

In addition, respondents said they would like law enforcement to engage in more robust public-to-private collaboration and information sharing. That would include feedback loops telling financial institutions whether the suspicious activity reports they file under the Bank Secrecy Act led to enforcement actions or arrests, or otherwise proved helpful.

“Ultimately, we know that no single company, industry, technology, or government is going to solve the complex problem of financial crime alone,” Nasdaq Chair and Chief Executive Adena Friedman said in the report. “There is an opportunity to work together on a framework and align on measures of success for effective anti-financial crime programs. We all have a responsibility—to ourselves and to the world—to be part of the solution.”