By Ruth Prickett, 2026-01-06T12:00:00
AI mistakes can lead to viral news stories and, sometimes, big legal bills. How can compliance managers learn from past mishaps and protect their organizations as AI becomes increasingly integrated into every part of our working lives? We asked experts what compliance should do to make sure AI toes the line in 2026.
2026-02-05T23:12:00Z By Adrianne Appel
The Financial Industry Regulatory Authority (FINRA) has welcomed artificial intelligence (AI) with open arms—and also caution.
2026-01-22T17:36:00Z By Diana Mugambi CW guest columnist
For more than two decades, assurance and compliance frameworks have rested on a simple assumption: Material decisions are made by people. The post-Sarbanes-Oxley Act (SOX) assurance reset worked because it aligned accountability with human behavior. That assumption shapes how internal controls are designed, how accountability is assigned, and how assurance is ...
2026-01-20T20:25:00Z By Tom Fox
As artificial intelligence reshapes business, compliance teams face new questions about risk and oversight. These are the key issues compliance professionals should be asking as they evaluate their programs heading into 2026.
2026-03-19T14:43:00Z By Tom Fox
A sweeping proposed federal procurement clause would push AI oversight out of policy decks and into compliance operations, vendor management, and real-time control testing.
2026-03-13T15:48:00Z By Tegan Gebert, Chris Audet and Doug Eckstein, CW guest columnists
New Gartner research reveals why traditional risk management is failing to keep pace with modern risks, and outlines how compliance leaders must enable organizational risk owners to build an instinctive Risk Reflex.
2026-03-12T20:37:00Z By Jonny Frank and Michael Costa, CW guest columnists
AI either elevates compliance or exposes it. The technology presents compliance leaders and lawyers with an extraordinary opportunity to elevate their roles, as well as an equally extraordinary risk of accountability when AI fails, misleads, discriminates, hallucinates, or generates unreliable outputs.