The email arrives with no warning. The business has selected an AI platform. IT is already integrating it, and a pilot is underway. The Board of Directors is on board and enthusiastic. The Chief Compliance Officer has been asked to “provide governance” within one week.
Where do you begin?
I think the answer is straightforward. You can begin with the U.S. Department of Justice’s (DOJ) 2024 Evaluation of Corporate Compliance Programs (ECCP).
The ECCP makes the following explicit: prosecutors will assess how companies identify, manage, and control risks arising from new and emerging technologies, including artificial intelligence, both in business operations and within compliance programs themselves. But that same prosecutorial mandate also supplies a roadmap, and it gives you a starting point for responding to this management request.
Reframe AI as a DOJ Risk Assessment Issue
The first step is to stop treating AI as a technical deployment and start treating it as a risk assessment obligation. The ECCP is clear that risk assessments must evolve as internal and external risks change, and it specifically calls out AI as a technology requiring affirmative analysis. Prosecutors will ask whether the company assessed how AI could impact compliance with criminal laws, whether AI risks were integrated into enterprise risk management, and whether controls exist to ensure AI is used only for its intended purposes.
For the CCO, this means formally incorporating AI use cases into the compliance risk assessment. If AI touches investigations, monitoring, training, third-party diligence, or reporting, then it is squarely within DOJ scrutiny.
Inventory Before You Draft Policy
The ECCP does not reward aspirational policies unsupported by facts. Prosecutors want to understand why a company structured its compliance program the way it did. Before drafting AI governance frameworks, compliance must demand a full inventory of AI use:
- What tools are deployed or piloted;
- Which business functions use them;
- What data they ingest; and
- Whether outputs are advisory or decision-shaping.
This inventory should explicitly include employee use of generative AI tools. The ECCP emphasizes the management of insider misuse and unintended consequences of technology. Unmanaged “shadow AI” use is now a compliance failure, not an IT inconvenience.
Focus on Decision Integrity, Not Model Design
One of the ECCP’s most overlooked insights is that DOJ evaluates outcomes and accountability, not technical elegance. When AI is used, prosecutors will ask:
- What decisions did the AI influence;
- What baseline of human judgment existed; and
- How accountability was assigned and enforced.
Compliance officers should therefore center governance around decisions, not algorithms. If no one can explain how an AI output was evaluated, overridden, or escalated, the company cannot demonstrate that its compliance program works in practice. The ECCP explicitly asks what “baseline of human decision-making” is used to assess AI outputs and how accountability over AI use is monitored and enforced. That translates directly into one of the most ubiquitous phrases in AI: the human in the loop. In a best practices compliance program, however, the human in the loop is an internal control, and human-in-the-loop controls must be real, documented, and empowered.
Demand Explainability for Boards and Regulators
The DOJ does not expect boards to understand machine learning architectures. It does expect boards to exercise informed oversight. The ECCP repeatedly asks whether compliance can explain risks, controls, and failures to senior management and the board. If a compliance officer cannot explain, in plain language, how AI affects compliance decisions, the program is not defensible. Every material AI use case should have a board-ready narrative:
- Why AI is used;
- What risks it creates;
- Where human judgment intervenes; and
- How errors are detected and corrected.
This is not optional. Prosecutors will evaluate what information the board reviewed and how it exercised oversight.
Integrate AI Governance Into Existing Controls
The ECCP warns against “paper programs.” This means that AI governance cannot sit in a separate policy silo. AI-related controls must integrate with existing compliance structures such as investigations protocols, reporting mechanisms, training, internal audit, and data governance. If AI identifies misconduct, how is that escalated? If AI supports investigations, how are outputs preserved and documented? If AI supports training, how is effectiveness measured? The DOJ will look for consistency in approach, documentation, and monitoring, not novelty.
Insist on Resources and Authority
The ECCP devotes significant attention to whether compliance functions are adequately resourced, empowered, and autonomous. Most typically, this inquiry is applied to CCOs and compliance professionals. By logical extension, if AI governance responsibility is assigned to compliance, then compliance must have access to data, technical explanations, and escalation authority. Assigning responsibility without resources is, in DOJ terms, evidence of a program that is not applied in good faith. A forced mandate without funding or authority is itself a compliance risk.
Document the Evolution
Finally, compliance officers must document not just controls, but evolution. The ECCP repeatedly emphasizes continuous improvement, testing, and revision. AI governance will not be perfect, and the DOJ does not expect perfection. It expects evidence that the company identified risks, tested controls, learned from failures, and adjusted. Meeting minutes, pilot reviews, risk assessments, and remediation steps all matter. What does this sound like? Document, Document, Document.
The Bottom Line
When AI is forced on compliance, resistance may be understandable, but it is ultimately ineffective. The DOJ has already moved past the question of whether AI should be governed. The only remaining question is whether governance is credible. For today’s compliance officer, AI governance is no longer optional, technical, or theoretical. It is a live test of whether the compliance program is well-designed, empowered, and working in practice.
Finally, if you, as the compliance professional, are only hearing about your organization’s use of AI when this assignment is passed down, you really do need a seat at the table.