Compliance officers across the economy are being told to adopt AI: to cut costs, expand compliance coverage with more real-time and contextualized information, navigate an ever-changing regulatory landscape, and bring more analytical firepower to bear on decisions.
At Compliance Week’s AI and data analytics conference, The Leading Edge, attendees made clear that these mandates don’t play out in a vacuum: friction, misaligned expectations, “shadow AI” proliferation, top-down mandates without resources, and failed pilots are all holding teams back.
These challenges don’t just slow AI adoption; they expose companies to compliance risk, deepen employee distrust and burnout, and erase ROI when AI investments go to waste. Almost everyone in the audience indicated they were using AI, but almost no one agreed they were using it strategically. The challenge is to architect a trustworthy, scalable, and defensible system of governance, one built on culture change and buy-in, not just technical implementation.
About the Author

Jen Gennai is Partner & Head of Responsible AI at T3, a consultancy specializing in guiding responsible AI implementation. In a previous role at Google, she founded its Responsible Innovation team, which was tasked with integrating ethical considerations into AI development. Her team worked with product and engineering teams, leveraging expertise in ethics, human rights, user research, racial justice, and gender equity to validate that Google’s AI products aligned with its commitments to fairness, privacy, safety, and societal benefit.
Regardless of function, what we see across our T3 clients is that most companies underestimate the most important step: taking the time to ask, “What exactly do we want AI to do?” Without a clear methodology, AI usage can end up misaligned with company goals, ineffective at reducing workloads, and riddled with hidden organizational and regulatory risks.
To make more informed decisions about AI, compliance leaders must adopt a more structured approach to identifying what AI is intended to do and how to roll it out effectively and responsibly. T3’s FIPA maturity assessment (Foundation, Implementation, Productionization, Assurance) can help compliance teams strategically determine what to use AI for, and how to integrate it into their organizational DNA.
Compliance leaders’ strategic blind spot
Let’s start by dispelling an AI myth: “AI can be adopted anywhere, for anything.”
AI is a powerful technology, but it is not a silver bullet: it comes with real risks and must be deployed against real use cases, with human oversight and diligence to ensure it is working as intended. When compliance teams skip the strategic alignment phase, they invite the proliferation of untracked, and therefore unmanaged, “shadow AI,” and they waste time and money on efforts that may neither address real problems nor tie directly to company priorities.
To figure out what to use AI for, leaders need to shift from a technology-centric to a use case-centric mindset. Organizations must start with the problem, not the technology, because genuine ROI comes from solving real pain points.
To determine where AI belongs in your organization, start by calculating the true cost of your manual work and identifying what could be automated or eliminated. Compliance leaders we’ve talked to who use AI often felt it had not reduced their workload, because they remained just as busy.
However, they had not accurately documented where their time went, so they didn’t always recognize that AI had saved time in some areas and freed them to take on work they previously had no capacity for. Within the risk and compliance sector, organizations that diligently track work done and time spent before and after adoption are already seeing significant value, particularly in Anti-Money Laundering (AML) and Know Your Customer (KYC) analyses, where AI expands the contextual information available for analysis, increases the speed and scope of screening, and reduces false positives.
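For leaders who want to quantify that baseline, the sketch below shows one way to tally the cost of manual work and the potential savings from partial automation. It is a minimal illustration in Python; the tasks, hours, rates, and automatable fractions are invented assumptions, not benchmarks.

```python
# Hypothetical illustration: estimating the true cost of manual compliance
# work and the potential savings from automating part of it. All task names,
# hours, rates, and automatable fractions are invented for the example.

TASKS = [
    # (task, hours_per_month, automatable_fraction)
    ("Sanctions screening review", 120, 0.60),
    ("KYC file refresh",            80, 0.50),
    ("Regulatory change tracking",  40, 0.30),
]

HOURLY_COST = 75  # assumed fully loaded cost per analyst hour

def monthly_savings(tasks, hourly_cost):
    total_cost = 0.0
    total_saved = 0.0
    for task, hours, automatable in tasks:
        cost = hours * hourly_cost
        saved = cost * automatable
        total_cost += cost
        total_saved += saved
        print(f"{task:30s} cost ${cost:>8,.0f}  potential saving ${saved:>8,.0f}")
    return total_cost, total_saved

if __name__ == "__main__":
    cost, saved = monthly_savings(TASKS, HOURLY_COST)
    print(f"\nTotal monthly cost: ${cost:,.0f}; potential AI saving: ${saved:,.0f}")
```

Even a rough tally like this gives you a documented “before” picture, which is exactly what the leaders above were missing when they judged whether AI had saved them time.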
How to roll out AI: The four-step FIPA methodology
Compliance leaders must adopt AI quickly and responsibly, without sacrificing trust, compliance, market share, or long-term sustainability. The FIPA maturity model breaks AI adoption into four distinct, sequential steps: Foundation, Implementation, Productionization, and Assurance, giving clear, actionable guidance on how to embed and manage AI from day one.
Step 1: Foundation (Impact & Alignment)
Compliance leaders at the Foundation stage often face top-down mandates to use AI, budget pressures, and skill gaps, whether their own or their team members’. They also lack clarity on where and how to use AI. The primary action is to define and prioritize high-value use cases that fit the budget and the team’s capabilities, and that align with the team’s or organization’s existing goals.
Teams should start by mapping the time spent on various tasks and identifying the areas of biggest impact: calculate the cost of manual work and compare it against the potential time savings, effort, and value AI could provide. Start with goals and impact, not with a mandate to “use AI” and a scramble to justify it afterward. (A simple prioritization sketch follows the list below.)
- Key Business Action: Align your AI usage directly with business goals using clearly defined Key Performance Indicators (KPIs).
- Key AI Responsibility Recommendation: Ensure AI usage is tracked and an AI governance strategy is defined.
- Best Practice: Pilot small. Measure effort and impact diligently and frequently.
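To make the use case mapping concrete, here is a minimal sketch of scoring candidates by impact and strategic alignment relative to effort. The use cases, 1-5 scores, and weights are hypothetical, not T3’s actual scoring model; a real exercise should reflect your own priorities.

```python
# Hypothetical sketch: ranking candidate AI use cases by expected impact and
# strategic alignment relative to effort. Scores (1-5) are invented examples.

candidates = [
    # (use_case, impact, effort, strategic_alignment)
    ("Automated regulatory horizon scanning", 4, 2, 5),
    ("AML alert triage assistant",            5, 4, 5),
    ("Policy Q&A chatbot for employees",      3, 2, 3),
    ("Contract clause extraction",            3, 3, 2),
]

def priority(impact, effort, alignment):
    # Simple weighted ratio: reward impact and alignment, penalize effort.
    return (impact * 0.5 + alignment * 0.5) / effort

ranked = sorted(candidates, key=lambda c: priority(c[1], c[2], c[3]), reverse=True)
for use_case, impact, effort, alignment in ranked:
    print(f"{priority(impact, effort, alignment):.2f}  {use_case}")
```

The point is not the specific formula but the discipline: writing down impact, effort, and alignment forces the “What exactly do we want AI to do?” conversation before any tool is bought.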
Step 2: Implementation (Integration & Measurement)
Once a tool is deployed, organizations move into the Implementation stage. Here, compliance leaders have typically rolled out tools and conducted basic training, but they are seeing limited adoption by team members and lower-than-expected ROI. The goal is to translate investments into measurable impact.
A common pitfall at this stage is treating AI as an “add-on.” Instead, AI must be deeply integrated into end-to-end workflows, enabling organic and sustained adoption. This requires moving past generic technology tutorials and instead retraining staff with a focus on role-specific AI use cases.
This point is reinforced by the EU AI Act’s AI literacy requirements, which obligate companies to provide employees with sufficient role-relevant education on AI capabilities, limits, and risks. At the same time, training alone is insufficient if the tools are not fit for purpose, are too hard to use, or if other structural factors suppress usage. Listening to staff to understand their challenges and pain points with AI adoption helps close the gap between AI investment and employee uptake.
- Key Business Action: Solicit structured feedback to identify friction points, missing features, and adoption blockers.
- Key AI Responsibility Recommendation: Communicate AI policy expectations to all employees to drive effective, compliant AI usage.
- Best Practice: Conduct rigorous pre- and post-launch testing to measure accuracy and tangible business metrics.
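One way to operationalize that last best practice is to run the same accuracy report against a human-labeled baseline before launch and against live samples afterward, so results stay comparable. The sketch below is a minimal illustration; the labels and predictions are invented sample data, not output from any real tool.

```python
# Hypothetical sketch: measuring an AI screening tool's accuracy against a
# human-reviewed baseline, run identically pre- and post-launch so results
# are comparable. Labels and predictions here are invented sample data.

def accuracy_report(labels, predictions):
    """Compare model predictions to human 'ground truth' labels."""
    assert len(labels) == len(predictions)
    tp = sum(1 for l, p in zip(labels, predictions) if l and p)
    fp = sum(1 for l, p in zip(labels, predictions) if not l and p)
    fn = sum(1 for l, p in zip(labels, predictions) if l and not p)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall,
            "false_positives": fp, "missed_cases": fn}

# Pre-launch benchmark on a held-out, human-labeled sample.
baseline  = [True, False, True, True, False, False, True, False]
model_out = [True, False, True, False, True, False, True, False]

print(accuracy_report(baseline, model_out))
# Re-run the same report on live, post-launch samples and compare the trend.
```

Pairing metrics like these with the business KPIs defined at the Foundation stage is what turns “we rolled out a tool” into demonstrable ROI.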
Step 3: Productionization (Scaling & Governance)
As AI usage scales across multiple departments, compliance teams hit the Productionization stage. Here, leaders struggle with a culture clash between rapid innovation and risk management. The core challenge is creating organization-wide AI accountability without killing the speed of innovation, or missing organizational goals around AI usage and workforce improvements.
The critical decision in this phase is whether to implement localized risk management or adopt a shared risk infrastructure. Localized management allows for faster initial launches and minimal governance overhead, but it creates inconsistent risk postures, duplicated effort, and fragile governance that is hard to audit centrally. A shared risk infrastructure and centralized approach requires higher upfront investment and change management, but it ensures consistent risk postures, reduces long-term costs through reusable controls, and makes the enterprise audit-ready by default.
- Key Business Action: Conduct a comprehensive FIPA gap analysis and identify where it makes sense to standardize and centralize AI governance (within the risk and compliance team, for example) versus where responsibilities are better suited to individual functions.
- Key AI Responsibility Recommendation: Centralize and integrate AI risk and harm mitigations and controls to ensure all teams benefit from shared learnings, and can address emerging issues across the business.
- Best Practice: Define and implement an AI risk review process with clear, calibrated escalation paths. Integrate where possible into your existing risk management processes. There are no prizes for reinventing the wheel here, and frameworks such as the NIST AI Risk Management Framework (RMF) were informed by a large number of experts (this author included) to ensure they are practical and flexible enough to be adopted by any organization.
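As an illustration of calibrated escalation paths, the sketch below routes an AI use case to a review tier based on a handful of risk factors. The factors, weights, and tiers are hypothetical and would need calibration against your own risk framework (such as the NIST AI RMF mentioned above).

```python
# Hypothetical sketch: routing an AI use case to an escalation tier based on
# simple risk factors. Factors, weights, and tiers are illustrative only and
# should be calibrated to your organization's own risk framework.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    handles_personal_data: bool
    customer_facing: bool
    autonomous_decisions: bool   # acts without a human in the loop
    regulated_domain: bool       # e.g., AML/KYC, credit, employment

def risk_score(uc: AIUseCase) -> int:
    return sum([
        2 if uc.handles_personal_data else 0,
        1 if uc.customer_facing else 0,
        3 if uc.autonomous_decisions else 0,
        3 if uc.regulated_domain else 0,
    ])

def escalation_path(uc: AIUseCase) -> str:
    score = risk_score(uc)
    if score >= 6:
        return "Tier 3: executive risk committee review"
    if score >= 3:
        return "Tier 2: compliance + AI governance review"
    return "Tier 1: standard team-level sign-off"

uc = AIUseCase("AML alert triage assistant", True, False, False, True)
print(uc.name, "->", escalation_path(uc))  # Tier 2 under these assumptions
```

Codifying the triage logic, rather than leaving it to case-by-case judgment, is what makes the escalation path consistent across teams and auditable centrally.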
Step 4: Assurance (Continuous Optimization)
AI is not a static software deployment; it is a dynamic, learning system, which means the AI governance lifecycle requires constant maintenance and oversight.
- Key Business Action: Implement comprehensive scorecards (including your defined KPIs) and conduct sector benchmarking.
- Key AI Responsibility Recommendation: Build robust systems for continuous monitoring and anomaly detection.
- Best Practice: To ensure long-term viability, automate AI governance and risk management processes wherever possible, and seek external assurance to validate the integrity of your systems.
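To illustrate the continuous-monitoring recommendation, the sketch below flags anomalies when a tracked metric, here a daily false-positive rate, drifts beyond a simple statistical band around its historical baseline. The data and threshold are invented; production monitoring would typically use richer methods.

```python
# Hypothetical sketch: flagging anomalies in a monitored AI metric (a daily
# false-positive rate) when it drifts beyond a simple statistical band.
# Data and thresholds are invented for illustration.

import statistics

def detect_anomalies(history, recent, z_threshold=3.0):
    """Flag recent values more than z_threshold std devs from the history mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [(i, value) for i, value in enumerate(recent)
            if abs(value - mean) > z_threshold * stdev]

# 30 days of historical daily false-positive rates (assumed stable baseline).
history = [0.12, 0.11, 0.13, 0.12, 0.10, 0.12, 0.11, 0.13, 0.12, 0.11] * 3
recent = [0.12, 0.11, 0.24, 0.12]  # day 2 shows a spike worth escalating

for day, rate in detect_anomalies(history, recent):
    print(f"Anomaly on day {day}: false-positive rate {rate:.2f} vs baseline")
```

Wiring alerts like this into the escalation paths defined at the Productionization stage closes the loop between monitoring and action.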
The bottom line for compliance leaders
Ultimately, deploying AI effectively is not an IT challenge; it is a human challenge. Compliance leaders must invest heavily in aligning incentives, driving cultural transformation, and empowering their teams to understand, and embrace, how these tools augment their daily work. A brilliant AI roadmap will fail if the humans operating it do not trust it. Focus on the people, anchor your use cases in real problems, test before and after launch, and build your AI ecosystem on defensible, auditable governance.


