By Aaron Nicodemus, 2023-06-06T12:00:00
Generative artificial intelligence tools like OpenAI’s ChatGPT, Microsoft’s ChatGPT-powered Bing, and Google’s Bard offer significant potential, along with risks that require thorough assessment before implementation.
But can—or should—generative AI be used by the compliance department?
Compliance professionals must determine whether any potential uses of generative AI by their employer violate state or federal laws, rules, or regulations. They should insist that safeguards be implemented to prevent or detect plagiarism, improper use of intellectual property, and violations of individual privacy.
2024-06-07T22:34:00Z By Adrianne Appel
Compliance has been “sleeping on” artificial intelligence, two panelists argued at Compliance Week’s Women in Compliance Summit. The profession should be positioned to lead on AI governance at the business level.
2023-07-21T15:29:00Z By Kyle Brasseur
Technology companies including Google, Meta, and OpenAI agreed to a series of voluntary commitments regarding how they will manage risks when developing artificial intelligence systems.
2023-07-06T15:33:00Z By Neil Hodge
Not all companies can rely on bans or restrictions on employee use of generative artificial intelligence like ChatGPT. Instead of telling people what they can’t do, focus on what they can do.
2025-11-20T21:55:00Z By Ruth Prickett
Geopolitical instability and a broad push by governments worldwide to increase growth and productivity are driving a slew of regulatory changes in the financial services sector. But most firms are failing to identify potential compliance changes early enough to make meaningful decisions.
2025-11-05T20:28:00Z By Ruth Prickett
Insurance firms are warning that AI-washing could trigger a slew of cases against directors, and are adjusting their directors’ and officers’ liability premiums accordingly. With regulators cracking down on AI-washing, compliance could be a crucial line of defense and save companies on their insurance costs.
2025-10-24T18:57:00Z By Ruth Prickett
“Hallucinatory” citations and errors in an AI-assisted report produced by Deloitte for the Australian government should be a wake-up call for compliance officers about the risks of placing too much trust in AI.