I recently read a Bloomberg article in which Catherine Thorbecke laid out six questions she wanted answered about AI in 2026. I have adapted her questions for compliance professionals to ask about their own compliance programs in 2026.
Artificial intelligence moved from novelty to operational reality faster than most corporate compliance programs were prepared to handle. By 2025, the conversation had shifted from promise to problem, with concerns about quality, accountability, and, most significantly, governance overtaking the initial enthusiasm. For compliance professionals, this is no longer an abstract technology discussion. It is a core enterprise risk issue that boards, executives, and regulators increasingly expect to be addressed with rigor, structure, and transparency.
1. What is in the training data?
Training data is no longer a technical footnote. It is a compliance exposure. If AI systems are trained on copyrighted material, biased sources, or illicit content, the downstream risk includes intellectual property violations, discrimination claims, regulatory sanctions, and reputational harm. For compliance professionals, opaque training data means unmanageable risk in regulated environments such as hiring, healthcare, financial services, and government contracting. Governance frameworks must require documented data provenance, contractual assurances from vendors, and audit rights. If the data cannot be explained, the system should not be deployed.
2. How do we define and measure “advanced” AI capability?
The industry fixation on artificial general intelligence is largely irrelevant to the compliance professional. What matters is not whether a system meets some philosophical threshold, but whether it performs tasks that create legal, ethical, or operational risk. Compliance teams should push organizations away from marketing terms and toward functional classification: What decisions does the system influence? What controls does it bypass? What authority does it exercise? Measuring AI capability should be tied to risk impact, not technical milestones. Here, consider the U.S. Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) and its focus on effectiveness.
3. Where is the regulation and how should companies respond now?
Outside of Europe, regulatory frameworks remain fragmented. Even the great state of Texas now has AI regulation on the books, which means fragmentation does not excuse inaction or waiting on some Trump pronouncement. History shows that regulators punish companies not for failing to predict rules, but for failing to act reasonably in the face of known risk. Compliance leaders should anchor AI governance to existing expectations around internal controls, risk assessments, third-party oversight, and board accountability. The question is not whether AI regulation will arrive, but whether the organization will be ready when it does. In the absence of a unified framework, ISO/IEC 42001:2023 provides a good starting point.
4. Is the AI investment cycle creating hidden compliance risk?
The current AI investment environment resembles past technology bubbles in which speed eclipsed discipline. From a compliance standpoint, inflated valuations and aggressive deployment schedules often correlate with weakened controls. When organizations rush to “keep up,” governance is treated as friction rather than protection. Compliance professionals must insist that AI initiatives face the same lifecycle scrutiny as any other material business transformation, including a business rationale, risk assessments, escalation protocols, and defined accountability. This does not make compliance the “Land of No”; it is operationalized risk management.
5. What is the sustainable business model, and why does it matter for compliance?
Unclear paths to profitability create compliance pressure. Vendors that are burning capital may cut corners on security, privacy, and support. Business models that depend on monetizing user data or opaque advertising raise obvious regulatory red flags. Financial instability in critical vendors is a risk factor that belongs squarely within third-party risk management. Always remember that effective compliance equates to more efficient business operations, which translates to higher profitability.
6. Will AI displace jobs, and how should organizations govern that transition?
Workforce disruption is already happening, often without transparent explanation. Using AI as a justification for layoffs without clear role redesign, reskilling strategies, or ethical review creates cultural and legal exposure. Regulators and prosecutors increasingly examine whether automation decisions are discriminatory, retaliatory, or misleading. Part of the compliance remit, drawn from the ECCP, is that compliance is the holder of Institutional Justice and Institutional Fairness. This means ensuring that AI-driven workforce decisions are explainable, documented, and aligned with stated corporate values.
AI does not fail in an organization because it is too powerful; it fails because governance has not kept pace with deployment. Compliance professionals have a critical role to play in restoring discipline to the conversation. These six questions will help you assess where your organization stands in its AI journey. How organizations answer them in 2026 will determine whether AI becomes a sustainable business tool or the next major compliance failure.