The U.K. financial regulator is expanding its remit and planning to deploy AI to manage its increased workload. This is part of a global trend and has significant compliance implications.

The Financial Conduct Authority’s (FCA’s) 2026-27 work program, published on March 27, included two important compliance takeaways. First, the regulator is becoming the sole anti-money-laundering (AML) supervisor for professional services, bringing around 60,000 more firms under its supervision.

Second, to manage this increased workload and achieve aims including innovation, stronger standards, and better customer outcomes, it intends to make greater use of AI. This may affect how companies interact with the regulator and introduce new compliance risks.

The FCA plans to integrate AI into regulatory workflows to enable it to detect harm more effectively and speed up regulatory decision-making. It will also use generative AI to review documents received from firms to support faster decisions.

Nikhil Rathi, chief executive of the FCA, explained: “This year’s program builds on our ongoing drive towards smarter, more data-driven regulation, helping us identify risks sooner, make faster, more consistent decisions and reduce unnecessary burdens on firms.”

Global trends

Adam Gilbert, global senior regulatory advisor in financial services risk and regulatory at PwC, said compliance teams worldwide must adapt to AI supervision.

“The FCA’s integration of AI in its 2026-27 strategy is a clear signal of where supervision is heading globally,” he warned.

He pointed to two key compliance issues. First, while the FCA has been transparent about its use of AI, some U.S. regulators, including the Securities and Exchange Commission (SEC), are already using AI internally for document review, analysis, and supervisory efficiency, though not all have said so publicly.

Others, like the Federal Reserve, are exploring “how AI can enhance supervision – improving examiner training and helping analyze large volumes of public data – while emphasizing that expert judgment remains central to decision-making,” he said.

“In that sense, the FCA’s plans are an early and more visible example of a broader shift toward transparent, data-driven, and technology-enabled supervision. That shift will require firms to enhance their compliance capabilities, particularly in areas such as data quality, governance, and their own internal use of AI,” Gilbert explained.

The second issue for compliance is that this is a step-change in regulatory capability, particularly in terms of speed, consistency, and continuity. Gilbert said the FCA’s use of AI to review firm submissions, combined with the integration of automated data feeds into supervision, will make oversight faster, more consistent, and more data-driven.

“In practice, inconsistencies across policies, submissions, and controls that may previously have gone unnoticed are more likely to be identified quickly,” he said. “Automated data feeds will also make it harder for firms to control the flow of information to regulators, compared with a model where specific information is formally submitted.”

To meet these evolving demands, Gilbert advised compliance teams to:

  • Expect greater scrutiny of consistency. AI allows regulators to compare policies, filings, and controls across the enterprise and over time, increasing the likelihood that gaps or contradictions are identified.
  • Use AI internally for quality assurance. Applying similar tools to review submissions, test alignment across governance documents, and identify unsupported statements can help firms address issues before regulators do (a minimal sketch follows this list).
  • Strengthen governance and documentation. As regulators gain the ability to analyze larger and less structured datasets, including transaction data, customer interactions, and elements of code, firms will need clearer documentation of assumptions, model changes, and control frameworks.
  • Prepare for more continuous supervision. The move toward automated data feeds suggests supervision will become less episodic and more ongoing, reducing the ability to curate regulatory interactions.
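To make the quality-assurance point concrete, the sketch below shows one way a compliance team might run a first-pass check for unsupported statements: comparing each claim in a draft submission against the firm’s policy corpus and flagging claims with no close counterpart. It uses simple TF-IDF similarity as a stand-in for the more capable semantic tooling Gilbert describes; the example statements, the 0.3 threshold, and the approach itself are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: flag statements in a regulatory submission that have no
# close counterpart in the firm's policy corpus, as a first-pass QA check.
# The texts, the threshold, and the TF-IDF approach are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policy_statements = [
    "Customer due diligence is refreshed at least every 12 months.",
    "All transaction monitoring alerts are reviewed within 48 hours.",
]
submission_claims = [
    "We refresh customer due diligence annually.",
    "Sanctions screening covers all counterparties in real time.",
]

# Fit one vocabulary over both corpora so the vectors are comparable.
vectorizer = TfidfVectorizer().fit(policy_statements + submission_claims)
policy_vecs = vectorizer.transform(policy_statements)
claim_vecs = vectorizer.transform(submission_claims)

SUPPORT_THRESHOLD = 0.3  # below this, no policy statement plausibly backs the claim
for claim, sims in zip(submission_claims, cosine_similarity(claim_vecs, policy_vecs)):
    best = sims.max()  # similarity to the closest policy statement
    if best < SUPPORT_THRESHOLD:
        print(f"UNSUPPORTED? ({best:.2f}) {claim}")
    else:
        print(f"supported   ({best:.2f}) {claim}")
```

In practice a team would swap the TF-IDF step for embedding- or LLM-based comparison and route flagged claims to a human reviewer rather than printing them, but the shape of the check is the same.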

“The FCA’s strategy signals a move toward a more proactive, technology-enabled supervisory model that leaves less room for inconsistency or delay,” he summarized. “Firms that invest now in data quality, governance, and AI-enabled compliance will be better positioned to keep pace as this model becomes the global standard.”

Underlying expectations

Ted Datta, head of the financial crime and compliance practice for Europe and Africa at Moody’s, agreed AI is changing the speed, scale, and persistence of regulatory oversight, but not the underlying expectations.

“Regulators using AI can review far more information simultaneously and seek to identify potential risks earlier, but firms remain fully accountable for the quality and defensibility of their decisions,” he warned.

Datta pointed out that 79 percent of risk and compliance professionals believe new regulations governing the use of AI in compliance are important. “More than half of organizations say they are already using or trialing AI in risk and compliance, up from roughly 30 percent two years earlier, meaning supervisors are increasingly overseeing AI‑assisted activity in live environments rather than pilots,” he explained.

However, he warned that one of the biggest constraints on effective AI adoption is trust in outputs. Compliance teams consistently point to challenges around data quality, explainability, and governance.

“To address this, firms need to shift focus from AI tools alone to the context and intelligence layer that sits underneath them,” he advised. “In regulated environments, AI outputs must be explainable, auditable, and defensible. While 84 percent of risk and compliance professionals agree AI offers significant advantages, many have only seen a moderate impact so far. Governance, safeguards, and operating model readiness matter as much as the technology itself.”

He advised that “intelligence that is decision‑grade – curated, structured and fully traceable” is critical now that AI systems are embedded in risk and compliance workflows.

Opacity itself is a compliance risk: “As AI supports more regulatory activity, firms become vulnerable if they cannot explain or evidence AI‑assisted decisions under scrutiny,” he said.

Compliance teams can mitigate this by embedding governance, clear sourcing, lineage, and audit trails into AI‑enabled workflows, with responsibility remaining firmly with people rather than systems, he said. Purpose‑built AI tools being developed can produce trusted, auditable outputs at the scale and speed regulated institutions demand.
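As a rough illustration of what sourcing, lineage, and audit trails might look like in practice, the sketch below logs each AI-assisted decision with the model version that produced it, the lineage of its inputs, and the person who signed it off, to an append-only JSON-lines file. The field names, log format, and example values are assumptions for illustration, not Moody’s tooling or any vendor’s schema.

```python
# Minimal sketch of an audit-trail record for an AI-assisted compliance
# decision. Field names and the JSON-lines store are illustrative; a real
# system would integrate with the firm's case-management and logging stack.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    case_id: str
    model_name: str            # which model produced the output
    model_version: str         # pin the exact version for reproducibility
    input_sources: list[str]   # lineage: where the input data came from
    output_summary: str        # what the model concluded
    human_reviewer: str        # accountability stays with a person
    human_action: str          # accepted / overridden / escalated
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AIDecisionRecord, path: str = "ai_audit_trail.jsonl") -> None:
    """Append the record to an append-only JSON-lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical example: a screening decision accepted by a named analyst.
log_decision(AIDecisionRecord(
    case_id="KYC-2041",
    model_name="screening-assistant",
    model_version="1.4.2",
    input_sources=["crm/customer/2041", "sanctions-list/2025-03-27"],
    output_summary="No sanctions match; low-risk rating proposed.",
    human_reviewer="a.analyst",
    human_action="accepted",
))
```

The point of pinning the model version and input sources is that, when a regulator asks how a decision was reached, the firm can reproduce the exact context rather than reconstructing it after the fact.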

Continuous scrutiny

Compliance teams should expect regulators’ use of AI to lead to more continuous, data-led scrutiny, instead of periodic reviews, warned Scott Bridgen, general manager for risk and audit at Diligent.

“For firms, that raises the bar on their own systems and controls,” he said. “You need to be confident that the data you submit is accurate, timely, and well‑governed.”

They must be able to demonstrate how processes are controlled, the checks and reconciliations in place, how AI tools are monitored, and who is accountable. “Without that, responding to regulatory queries is going to get much harder,” he said.

This will be a challenge for organizations still dealing with fragmented data and limited visibility, especially where AI implementation and usage sit with third-party providers.

“As AI-enabled adversaries have increased attacks by 89 percent year-on-year, the risks tied to poorly governed systems and external dependencies are growing. That lack of oversight becomes more visible when regulators are using AI to identify patterns or anomalies at scale,” he said.

“Explainability is critical,” he added. If a regulator flags an issue based on AI analysis, firms must respond with evidence showing how a decision was made, the data and models used, and the level of human oversight.

“There is a real danger in over‑reliance on automated signals – on both sides – if neither party fully understands how those outputs are reached,” he warned.

Bridgen advised that firms that invest in data quality, strong governance, and clear accountability for AI (including model inventories, testing for drift and bias, and robust third‑party oversight) will be better able to meet regulators’ demands, because they will have the same issue-spotting capabilities regulators themselves are deploying.

“They will be able to demonstrate that their control environment can stand up to that level of scrutiny,” he said.
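On the drift testing Bridgen mentions, one common technique is the population stability index (PSI), which compares the distribution of a model input today against the distribution it was validated on. The sketch below is a minimal, self-contained version; the bin count, the 0.2 alert threshold (a widely used rule of thumb rather than a regulatory standard), and the synthetic data are illustrative assumptions.

```python
# Minimal sketch of a population-stability-index (PSI) check, one common way
# to test a model's inputs for drift. Bin edges, the 0.2 alert threshold, and
# the synthetic data are illustrative assumptions, not a standard.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a current sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) and division by zero in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # data the model was validated on
current = rng.normal(loc=0.4, scale=1.2, size=5_000)   # today's incoming data, shifted

score = psi(baseline, current)
print(f"PSI = {score:.3f}")  # rule of thumb: > 0.2 suggests material drift
if score > 0.2:
    print("Drift alert: retest or recalibrate the model.")
```

A check like this would typically run on a schedule per model input, feeding a model inventory so that drift, like any other control failure, leaves an auditable record.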

Ruth Prickett graduated from Cambridge University with a BA (Hons) in History and has specialized in business and finance journalism for the past 20 years.