As financial crime grows in scale, speed, and sophistication, banks are increasingly turning to artificial intelligence (AI), machine learning (ML), and generative AI (GenAI) to strengthen anti-money laundering (AML) and surveillance programs.
These technologies promise material improvements in detection effectiveness, alert quality, and operational efficiency. Yet their adoption also introduces new risks: model opacity, bias, explainability challenges, and heightened regulatory scrutiny.
Leading banks are therefore shifting from asking whether to use AI to asking how to embed it responsibly. The focus is no longer experimentation alone but disciplined integration within existing compliance, governance, and risk management frameworks.
From rules-based systems to risk-sensitive intelligence
Traditional AML and surveillance systems rely heavily on static rules and thresholds. While effective for known typologies, they generate high false positives and struggle to adapt to evolving behaviors. AI/ML models, by contrast, learn from historical patterns, detect subtle anomalies, and dynamically adjust to new risks.
Banks are deploying ML models to:
- Prioritize alerts based on predicted risk
- Reduce false positives through pattern recognition
- Identify complex, multi-hop transaction networks
- Enhance trade and market abuse surveillance by detecting behavioral deviations
Importantly, most institutions are not replacing rules wholesale. Instead, they are layering AI models alongside existing controls—using ML for alert scoring, segmentation, or enrichment while retaining rules as backstops. This “defense-in-depth” approach balances innovation with regulatory comfort.
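As one illustration of this layering, the sketch below scores rule-generated alerts with a gradient-boosted classifier and routes them across queues, while the rule itself remains the backstop. The features, thresholds, and synthetic data are illustrative assumptions, not a production design.

```python
# Minimal sketch: ML risk scoring layered on top of existing rules.
# Feature names, queue names, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic alert features (e.g., amount z-score, counterparty risk, velocity).
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

def triage(alert_features: np.ndarray, rule_fired: bool,
           hi: float = 0.8, lo: float = 0.1) -> str:
    """Route an alert: the rule remains the backstop, ML only re-prioritizes."""
    if not rule_fired:
        return "no-alert"                 # ML never raises decisions on its own
    score = model.predict_proba(alert_features.reshape(1, -1))[0, 1]
    if score >= hi:
        return "priority-queue"           # high predicted risk: investigate first
    if score <= lo:
        return "low-priority-queue"       # still reviewed; the rule guarantees it
    return "standard-queue"

print(triage(rng.normal(size=3), rule_fired=True))
```

The design choice worth noting is that the model only reorders work the rules have already surfaced; it cannot suppress an alert outright, which is what keeps the rules functioning as a backstop.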
The emergence of GenAI in financial crime compliance
GenAI is now expanding the frontier beyond detection into investigator productivity and decision support. Unlike predictive ML models, GenAI focuses on language, reasoning, and synthesis.
Responsible use cases gaining traction include:
- Alert summarization: Generating concise narratives from complex transactional histories
- Investigation assistance: Drafting suspicious activity report (SAR) narratives or case notes (with human review)
- Policy and procedure navigation: Enabling investigators to query internal guidance in natural language
- Training and quality assurance: Simulating scenarios and explaining typologies
Crucially, banks are avoiding unsupervised decision-making by GenAI in high-risk areas. GenAI is positioned as a copilot, not an adjudicator: it supports human judgment rather than replacing it.
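A minimal sketch of that copilot pattern follows. The `draft_narrative` function is a hypothetical stand-in for any GenAI call, and a hard control blocks filing without a named human reviewer; all names here are assumptions for illustration.

```python
# Minimal sketch of a "copilot, not adjudicator" control pattern.
# `draft_narrative` and `file_sar` are hypothetical, not a real API.
from dataclasses import dataclass

@dataclass
class DraftNarrative:
    text: str
    sources: list[str]          # transactions/documents the draft is based on
    reviewed_by: str | None = None

def draft_narrative(case_facts: list[str]) -> DraftNarrative:
    # Hypothetical GenAI call; in practice this would hit an approved,
    # guardrailed internal LLM endpoint with a vetted prompt template.
    return DraftNarrative(text="DRAFT: " + " ".join(case_facts),
                          sources=case_facts)

def file_sar(draft: DraftNarrative) -> None:
    # Hard control: no filing without a named human reviewer's sign-off.
    if draft.reviewed_by is None:
        raise PermissionError("SAR narrative requires investigator sign-off")
    print(f"Filed SAR reviewed by {draft.reviewed_by}")

draft = draft_narrative(["Wire of $9,900 on 2024-03-01", "Rapid pass-through"])
draft.reviewed_by = "investigator_jdoe"
file_sar(draft)
```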
Governance as the foundation of responsible adoption
Regulators globally have made clear that AI models are subject to the same expectations as traditional models, and often to heightened ones. Responsible embedding starts with strong governance.
Key practices include:
- Comprehensive model inventories: Capturing AI/ML and GenAI models across AML, sanctions, fraud, and surveillance, including third-party and vendor tools
- Clear model definitions: Distinguishing between rules, statistical models, ML models, and GenAI systems to avoid governance gaps
- Ownership and accountability: Assigning business, model, and data owners with clear escalation paths
Many banks are extending existing model risk management (MRM) frameworks to explicitly cover AI and GenAI, rather than creating parallel structures. This ensures consistency in validation, approvals, and ongoing oversight.
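A minimal sketch of what such an inventory entry might capture, making the rule/statistical/ML/GenAI distinction and ownership explicit; every field name here is an illustrative assumption rather than a prescribed schema.

```python
# Minimal sketch of a model inventory record; fields are illustrative.
from dataclasses import dataclass, field
from enum import Enum

class ModelType(Enum):
    RULE = "rule"
    STATISTICAL = "statistical"
    ML = "machine_learning"
    GENAI = "generative_ai"

@dataclass
class InventoryEntry:
    model_id: str
    name: str
    model_type: ModelType
    domain: str                   # e.g., "AML", "sanctions", "surveillance"
    business_owner: str
    model_owner: str
    data_owner: str
    vendor: str | None = None     # third-party/vendor tools are in scope too
    intended_use: str = ""
    limitations: list[str] = field(default_factory=list)

entry = InventoryEntry(
    model_id="AML-042", name="Alert risk scorer", model_type=ModelType.ML,
    domain="AML", business_owner="FCC Operations", model_owner="Analytics",
    data_owner="Data Office", intended_use="Alert prioritization only",
)
print(entry.model_type.value)
```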
Explainability, transparency, and regulatory readiness
Explainability remains one of the most critical challenges in AI adoption. Regulators and internal stakeholders expect banks to articulate why an alert fired or how a model influenced a decision.
To address this, banks are:
- Favoring interpretable models where possible (e.g., gradient boosting with explainability tools)
- Using post-hoc explainability techniques such as feature attribution (a sketch follows this list)
- Documenting model logic, limitations, and intended use in clear, non-technical language
- Training investigators and compliance officers to understand and challenge model outputs
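As a concrete example of post-hoc attribution, the sketch below uses scikit-learn's permutation importance (SHAP-style methods are a common alternative) to express each feature's contribution in terms an investigator can read and challenge. The feature names and data are synthetic assumptions.

```python
# Minimal sketch: post-hoc attribution via permutation importance.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["amount_zscore", "counterparty_risk", "txn_velocity"]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Plain-language output: how much accuracy drops when each feature is shuffled.
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: accuracy drop when shuffled = {imp:.3f}")
```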
For GenAI, transparency extends to prompt design, guardrails, data sources, and hallucination controls. Auditability—being able to reproduce outputs and decisions—is becoming a core requirement.
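One possible shape for such an audit trail, assuming a hypothetical internal logging convention: capture enough metadata (prompt template version, prompt hash, model version, generation parameters) to reproduce or explain an output later.

```python
# Minimal sketch of an auditability record for a GenAI call.
# The record structure and field names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(template_id: str, prompt: str, model_version: str,
                 params: dict, output: str) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_template_id": template_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "model_version": model_version,
        "params": params,              # e.g., temperature, max tokens
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

rec = audit_record("sar-summary-v3", "Summarize the alerts below ...",
                   "internal-llm-2024.2", {"temperature": 0.0},
                   "Draft narrative ...")
print(json.dumps(rec, indent=2))
```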
Managing data, bias, and ethical risk
AI models are only as good as the data that feeds them. Historical AML data may reflect past biases, inconsistent investigator decisions, or outdated typologies.
Responsible banks are implementing:
- Data quality controls and lineage documentation
- Bias testing across customer segments, geographies, and products
- Periodic outcome analysis to ensure models do not disproportionately impact protected or vulnerable groups (sketched after this list)
- Human-in-the-loop controls for high-risk decisions
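As one way to operationalize outcome analysis, the sketch below compares alert rates across segments and flags those exceeding a disparity tolerance. The segment labels and the 1.25x tolerance are illustrative assumptions, not a regulatory standard.

```python
# Minimal sketch of outcome analysis by segment; thresholds are illustrative.
from collections import defaultdict

def alert_rates(records):
    """records: iterable of (segment, alerted: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])      # segment -> [alerts, total]
    for segment, alerted in records:
        counts[segment][0] += int(alerted)
        counts[segment][1] += 1
    return {s: a / n for s, (a, n) in counts.items()}

def disparity_check(records, tolerance=1.25):
    """Flag segments alerted disproportionately vs. the average rate."""
    rates = alert_rates(records)
    baseline = sum(rates.values()) / len(rates)
    return {s: r / baseline for s, r in rates.items()
            if r / baseline > tolerance}

sample = [("geo_A", True), ("geo_A", False), ("geo_B", True),
          ("geo_B", True), ("geo_B", True), ("geo_C", False)]
print(disparity_check(sample))    # {'geo_B': 2.0}: review this segment
```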
Ethical AI principles—fairness, accountability, and proportionality—are increasingly embedded into compliance risk assessments and model approvals.
Continuous monitoring and lifecycle management
AI models are not “set and forget.” Drift in customer behavior, products, or typologies can degrade performance quickly.
Best-in-class programs include:
- Ongoing performance monitoring and alert quality metrics
- Thresholds and triggers for model recalibration or retraining (a drift-metric sketch follows this list)
- Independent model validation tailored to AI/ML complexity
- Change management and version control, especially for GenAI prompts and configurations
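One widely used drift trigger is the population stability index (PSI); the sketch below compares a feature's distribution at model approval against recent production data. The bin count and the 0.2 review threshold are conventional but illustrative choices.

```python
# Minimal sketch of input-drift detection via the population stability
# index (PSI). Bin count and the 0.2 alert threshold are illustrative.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a recent sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)    # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # feature at model approval
recent = rng.normal(0.4, 1.2, 10_000)     # same feature in production today

score = psi(baseline, recent)
print(f"PSI = {score:.3f}; recalibration review triggered: {score > 0.2}")
```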
Some banks are also establishing AI risk committees or centers of excellence to coordinate across compliance, technology, legal, and risk functions.
The road ahead: Innovation with accountability
AI, ML, and GenAI are no longer experimental in AML and surveillance—they are becoming core capabilities. But regulators will judge success not by sophistication alone, but by control, transparency, and outcomes.
Banks that embed these technologies responsibly—anchored in strong governance, explainability, and human oversight—will be best positioned to reduce financial crime risk while meeting rising supervisory expectations. Those that move too fast without discipline risk eroding trust with regulators, customers, and the public.
In financial crime compliance, responsible AI is not a constraint on innovation. It is the only sustainable path forward.
About the Author

Arun Maheshwari is a senior risk executive with more than 17 years of experience in model risk management, quantitative analytics, and financial crime compliance across global banking institutions. He leads the model risk control function for legal and compliance at a Tier 1 global U.S. bank, overseeing the development, validation, and governance of models spanning anti-money laundering, sanctions, trade surveillance, and customer risk rating.


