For more than two decades, assurance and compliance frameworks have rested on a simple assumption: Material decisions are made by people. That assumption shapes how internal controls are designed, how accountability is assigned, and how assurance is delivered.

Controls surround human judgment. Documentation explains human reasoning. Escalation mechanisms assume that a specific individual or role can be identified, questioned, and held accountable when decisions are challenged.

Artificial intelligence (AI) is slowly disrupting this model, not by eliminating controls, but by introducing non‑human judgment into control environments, while governance design lags behind.

About the Author


Diana Mugambi is a senior finance and governance professional with more than 20 years of experience spanning global industrial operations and Big Four audit and advisory. She is currently a Senior Manager, FP&A Operations for North America at GE Vernova Gas Power, where she partners with commercial leadership on portfolio strategy, risk, and performance. Earlier in her career, she held roles at PwC and Deloitte, focused on audit, risk assurance, and corporate governance.

A design assumption no one revisited

The post‑Sarbanes-Oxley Act (SOX) assurance reset worked because it aligned accountability with human behavior. Management assertions, control documentation, audit trails, and remediation processes all assumed that decisions originated with identifiable operators within defined roles. That design held, even through subsequent crises, because judgment ultimately remained human, even when it was flawed.

AI changes the origin point.

As automated systems increasingly influence forecasting, analytics, transaction approvals, and contract interpretation, judgment no longer resides exclusively with people. It is embedded upstream, in training data, model logic, thresholds, and exception handling, often long before compliance or audit functions are engaged. The control framework, however, remains largely unchanged.

Much of today’s discussion around AI and compliance focuses on extending existing frameworks. Practitioners explore continuous SOX testing, expanded control coverage, improved documentation, and responsible‑AI principles to keep systems auditable. These efforts matter. They show a profession actively adapting its tools. However, much of this discourse starts from an unchallenged premise: That the assurance model itself remains sound and that AI simply needs to be governed within it.

What receives less attention is whether this premise still holds.

Post‑SOX frameworks assume decisions can be documented, challenged, escalated, and attributed to an operator or role. AI complicates this not because it lacks controls, but because it embeds judgment that is distributed, probabilistic, and often opaque by design. Extending controls may improve coverage, but it does not resolve the underlying mismatch between how decisions are made and how accountability has traditionally been enforced.

This is where compliance and audit functions increasingly feel the strain.

When judgment moves, but controls stay put

In many organizations, AI systems are introduced as efficiency tools rather than governance decisions. Speed, consistency, and scale are prioritized. Controls are evaluated after deployment. Assurance is expected downstream. Compliance and audit teams are asked to validate outcomes without visibility into the judgment embedded in the systems that produced them.

Control testing confirms execution, but explanation becomes harder. When questions arise during internal reviews, regulatory inquiries, or board discussions, the issue is rarely framed as a system‑design problem. It becomes an accountability problem: Who owns the decision when no single person made it?

This tension does not reflect a failure of SOX. It is a reflection of its design boundaries. SOX‑era controls assume human decision‑makers, explainable reasoning, and role‑based ownership. AI introduces decision‑making that is distributed, adaptive, and difficult to interpret in human terms.

Unless governance frameworks evolve alongside deployment, organizations risk operating control environments that appear robust but lack visibility into how decisions are made. In this scenario, compliance and audit functions inherit accountability without authority and become responsible for outcomes shaped by logic they cannot fully interrogate.

Old assumptions, new exposure

The risk is not automation itself. It is allowing judgment to migrate into AI models while governance assumptions remain anchored to human decision-making.

Past assurance resets were triggered by failures of trust. These were moments when existing frameworks could no longer explain how decisions were made or defend them under scrutiny. AI has not yet produced a defining crisis, but the conditions that challenge accountability, explainability, and ownership are already in place.

For compliance, risk, and audit leaders, the question is no longer whether AI will expand. It will.

The more pressing question is whether governance assumptions will be revisited before they are tested.

If assurance frameworks cannot explain how decisions are made, they cannot defend them. And when trust is questioned, explanation, not efficiency, is what ultimately matters.