AI elevates compliance, or exposes it. The technology presents compliance leaders and lawyers with an extraordinary opportunity to elevate their roles, as well as an equally extraordinary risk of accountability when AI fails, misleads, discriminates, hallucinates, or generates unreliable outputs.
Organizations increasingly expect compliance to oversee AI risk in its entirety, erasing the historic boundary between enterprise and compliance risk.
At the same time, shifting enforcement priorities create new challenges. Rhetoric about deregulation and reduced enforcement circulates. Budgets tighten. The value of compliance is questioned. The function is asked to “be practical,” “not slow innovation,” and “make greater use of AI.”
Managing AI risk can reposition the compliance function as indispensable to enterprise decision-making. But this is not merely an opportunity. It is a test.
When AI goes wrong, the post-mortem questions are predictable. Who assessed the risk? Who tested the controls? Who approved deployment? Who monitored performance?
A roadmap that predates the AI frenzy
What makes this moment even more striking is that a roadmap has existed for years. In 2021—before generative AI became a boardroom fixation—COSO published guidance titled “Applying the COSO Framework and Principles to Help Implement and Scale Artificial Intelligence.”
Rather than invent a new regime, COSO advised organizations to draw from its Internal Control (IC) and Enterprise Risk Management (ERM) frameworks. This article offers a step-by-step guide to leveraging COSO to manage AI risk.
About the Authors

Jonny Frank, a partner with StoneTurn, brings over 45 years of public and private sector and law and business school teaching experience in forensic investigations, compliance, and risk management.
What is COSO—and why should compliance care?
COSO offers lawyers and compliance officers a powerful—if underutilized—framework for some of their most critical work, including compliance program and risk assessments, root cause analysis, and remediation. Typically associated with finance and internal audit, COSO’s applications extend well beyond those functions. For multinationals in particular, COSO carries a distinct advantage: it provides globally accepted standards, offering a compelling alternative to DOJ, SEC, and other U.S. government compliance program guidelines and expectations.
At its core, COSO provides a structured way to answer: How clearly has the organization defined its operational, reporting, compliance, and strategic objectives? Has it identified and responded to events that would impede those objectives? Are its mitigating policies, processes, and controls designed and operating effectively? How well does it maximize data and communicate information to mitigate risk? Is the system monitored, tested, and improved over time?
COSO’s principles apply to any risk or opportunity that affects strategy, operations, reporting, or compliance. AI squarely fits within that scope.
About the Authors

Michael Costa, a partner with StoneTurn, has deep experience in data analytics and data science, financial crime, investigations, complex litigation, and compliance matters.
COSO saw this coming: The 2021 AI guidance
The guidance cautions organizations against treating AI as a stand-alone technology project, advising instead that AI initiatives align with organizational objectives and risk appetite. Roles and responsibilities must be clearly defined. Risk assessments should explicitly include AI-related exposure.
The guidance also highlights the importance of tailored control activities. Model validation, change management, human oversight, data governance, and monitoring for performance drift are not optional enhancements; they are necessary control mechanisms as AI systems scale. Documentation and transparency are critical to ensure boards, management, and stakeholders understand how AI systems operate and what limitations exist.
Finally, the guidance stresses ongoing monitoring. AI systems evolve. Data changes. Risk profiles shift.
A step-by-step guide to leveraging COSO
Compliance practitioners will likely find the IC Framework more accessible than the ERM Framework because it operates at the level of policies, processes, and control activities—the practical tools compliance officers design, test, and document every day. ERM, by contrast, is broader and more strategic, focusing on enterprise-level risk portfolio management.
Under COSO, internal control refers to processes designed to provide reasonable assurance regarding the achievement of the organization’s objectives. Typically depicted as a cube, the IC Framework organizes around three objectives (operations, reporting, and compliance), multiple organizational levels, and five components (control environment, risk assessment, control activities, information and communication, and monitoring activities). AI risk cuts across every dimension of the cube.
The control environment: Executive management sets the guardrails
The control environment sets the tone for everything that follows. Leadership must make AI risk management an enterprise priority. Without executive sponsorship, it will feel bureaucratic and non-essential.
Executive management must require that all AI use cases be disclosed, inventoried, and subject to risk review. Accountability must be explicit. The message should be clear: risk management does not obstruct business or innovation. Rather, it ensures that innovation occurs within defined guardrails.
When expectations are set at the top, AI oversight becomes a shared enterprise responsibility, not a compliance initiative. If scrutiny arises, the organization can demonstrate that governance was leadership-driven, not retrofitted after a problem surfaced.
Risk assessment: Inventory, appetite, potential events, inherent risk
Compile an AI inventory: A comprehensive AI inventory is foundational. Organizations cannot manage what they cannot see. Each use case, at a minimum, should identify the objective(s) it supports, decisions it influences, and the business and technical owners.
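As an illustrative sketch only (the field names and example below are assumptions, not a COSO-prescribed schema), an inventory entry might be captured as a simple structured record:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in the enterprise AI inventory (illustrative fields)."""
    name: str
    objective: str                    # operational, reporting, compliance, or strategic objective it supports
    decisions_influenced: list[str]   # decisions the AI informs or automates
    business_owner: str
    technical_owner: str

# Hypothetical example entry: a resume-screening model used in hiring.
screening = AIUseCase(
    name="resume-screening-model",
    objective="operations: shortlist candidates for recruiter review",
    decisions_influenced=["interview shortlisting"],
    business_owner="Head of Talent Acquisition",
    technical_owner="HR Data Science Lead",
)
```

A register built from records like this gives risk reviews a consistent starting point: every use case names its objective and its accountable owners before any risk scoring begins.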
Under COSO, risk is not abstract; it is the possibility that an event will prevent the organization from achieving its operational, compliance, reporting, or strategic objectives. That means an organization cannot assess AI risk until it clearly defines the objective the AI is meant to serve.
Define risk appetite: Risk appetite spans two axes: the probability that an AI-related risk occurs and the business, reputational, and legal impact if it materializes.
Identify potential objective-impeding events: For each entry on the AI inventory, ask technical and business owners to identify potential events that could produce harmful outcomes. Beware of optimism bias—the tendency to assume that bad things happen elsewhere.
Identifying events requires more than intuition; it demands research into internal and external AI failures, enforcement actions, regulatory guidance, and control testing results. One- to two-hour facilitated workshops bringing together business, compliance, legal, technology, and other leaders produce insights that siloed reviews miss.
Assess inherent risk: Inherent risk asks how severe the AI risk would be without controls. Organizations often skip this step to save time, which is shortsighted: if inherent risk already falls within appetite, the organization can move forward without layering on controls, preserving time and resources for higher-risk areas.
Control activities: Mitigate out-of-appetite residual AI risks
Link and evaluate control activities: Control activities are policies, processes, and controls an organization uses to mitigate risks. The focus should be on the effectiveness of the control suite, not isolated controls viewed individually. The relevant question is whether, taken together, control activities reduce inherent risk to within risk appetite, not whether each individual control satisfies a standalone objective, as in a traditional audit.
Address out-of-appetite residual risks: Residual risk captures what remains after the organization applies its control activities as designed. To avoid overreliance on untested control activities, the risk assessment should indicate whether the control suite has been tested. If residual risk remains outside appetite, the organization must decide to reduce, avoid, share, or formally accept that risk.
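The decision flow described above (score inherent risk, apply controls, compare residual risk to appetite, then choose a response) can be sketched in a few lines. The 1-to-5 scales, the appetite threshold, and the response wording are illustrative assumptions, not COSO prescriptions:

```python
RISK_APPETITE = 8  # illustrative threshold on a 1-25 (likelihood x impact) scale

def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk on two axes: probability of occurrence and severity of impact (1-5 each)."""
    return likelihood * impact

def respond(inherent: int, residual: int) -> str:
    """Choose a response based on the appetite comparisons sketched in the text."""
    if inherent <= RISK_APPETITE:
        return "within appetite: proceed without further controls"
    if residual <= RISK_APPETITE:
        return "controls reduce risk to within appetite: proceed and monitor"
    return "out of appetite: reduce, avoid, share, or formally accept"

# Hypothetical chatbot hallucination risk: likely (4) and damaging (4) before controls,
# with tested controls cutting likelihood to 2.
inherent = risk_score(4, 4)   # 16: outside appetite
residual = risk_score(2, 4)   # 8: within appetite
print(respond(inherent, residual))
```

The point of the sketch is the ordering, not the numbers: inherent risk is assessed first, the control suite is evaluated as a whole, and only out-of-appetite residual risk forces a documented reduce/avoid/share/accept decision.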
Documenting the risk response and its rationale provides protection if the risk materializes, because it demonstrates that the organization deliberated, and perhaps misjudged, rather than failing to think at all.
Information and communication: Reporting performance and preparing for a crisis
The guidance suggests implementing a reporting process to inform internal and external stakeholders about AI performance, benefits, and risks. This is not a check-the-box exercise: benefits and risks change, and in some cases leapfrog one another, with each new AI model release and with the organization’s evolving understanding of the models’ capabilities.
The guidance also recommends developing a crisis communications framework and protocols to prepare for worst-case scenarios arising from AI-related incidents. This exercise requires a deep understanding of how stakeholders across the organization interact and influence one another. Done well, it strengthens both the Risk Assessment and Monitoring Activities components by connecting how risks emerge, who drives them, and how the organization detects and responds to them.
Monitoring activities: Test control activities’ effectiveness and monitor the algorithms
As a practical matter, most organizations will not yet have tested their AI control activities’ design and operating effectiveness. Arrange for testing during the next compliance or internal audit cycle. In addition, COSO’s AI guidance recommends developing procedures to monitor the quality and integrity of the data and the algorithm, and to periodically assess model performance after deployment.
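As a minimal sketch of what periodic performance monitoring can look like (the metric, baseline, and tolerance below are illustrative assumptions), a drift check compares current model performance against its validated baseline and escalates when the gap exceeds tolerance:

```python
def drift_alert(baseline_accuracy: float, current_accuracy: float,
                tolerance: float = 0.05) -> bool:
    """Flag when model performance drifts beyond tolerance from its validated baseline."""
    return (baseline_accuracy - current_accuracy) > tolerance

# Quarterly check: model validated at 92% accuracy, now measuring 85%.
if drift_alert(0.92, 0.85):
    print("performance drift detected: escalate for revalidation")
```

In practice the monitored quantity might be accuracy, error rate, or a fairness metric, but the control is the same: a documented baseline, a defined tolerance, and a defined escalation path when the model drifts.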
AI is not just another risk. It cuts across all functions. COSO provides the architecture to manage that complexity.
In a period when some question compliance’s relevance, AI risk management offers an opportunity to demonstrate indispensable value. When AI fails, and it will, compliance will be judged on the process, not solely on the outcome. COSO enables that process. It creates a record of disciplined analysis, documented judgment, and leadership oversight.


