Compliance and security are closely intertwined, including when applied to artificial intelligence (AI) and machine learning (ML), said AI expert Diana Kelley.

“They’re two different practices, but I see them as working closely together,” said Kelley, who serves as chief information security officer for Protect AI.

A highly sought-after speaker, Kelley will deliver the Day 2 opening keynote at Compliance Week’s National Conference in Washington, D.C. The event will take place over three days, April 2-4, and feature the perspectives of 80-plus compliance, risk, and regulatory professionals.

Kelley, who started programming in elementary school and cut her teeth creating publishing software, will begin by discussing what AI means for organizations before turning to the technology’s implications for compliance and enterprise risk.


Back for its 19th year, Compliance Week’s National Conference brings compliance, ethics, risk, legal, and audit professionals together face-to-face to benchmark best practices and gain the latest tactics and strategies to enhance their compliance programs.


The topics will include AI risks that are “below the headlines,” said Kelley, whose extensive experience includes working with Microsoft and IBM. In her “free time,” she serves on half a dozen boards, volunteers her expertise, and speaks nationwide.

Kelley will give her take on how to introduce AI and ML security in ways that will “keep your company moving forward” without putting it or its customers at risk, she said.

Compliance is about meeting or exceeding a standard, while risk management includes deciding on the best approach—“and we need both,” Kelley said.

“I’m excited to see ML and AI become part of how we embrace compliance and risk,” she said.

Regarding digital technology, Kelley has been there from the early days. She broke into the tech field in the 1970s, teaching herself to program on a Texas Instruments calculator her dad gave her.

“I fell in love with computers and the internet and the promise of it all,” Kelley said.

Her father was a researcher at Lincoln Lab, and through him she was given access to ARPANET, the forerunner of today’s internet, then used mainly by the Defense Department and universities.

Later, she graduated from Boston College with a bachelor’s degree in English. Computers were still her passion, but “there weren’t classes teaching the kind of things I was interested in, tech stuff,” she said. “I wasn’t sure where it was going to go.”

Kelley entered the publishing industry, landing at Wadsworth Publishing, and was soon at the center of the early tech revolution. Her tech skills were recognized, and she was put in charge of creating software to accompany the textbooks, training the sales team on that software, and then building networks for the different companies then under the Thomson umbrella.

“I was extraordinarily lucky,” she said. “From that point, I never looked back.”


Diana Kelley

Recently, Kelley has set her sights on AI and ML.

While related, AI and ML are different. AI is an umbrella term for many types of technology, some of it rule-based rather than truly intelligent, such as robots and automated vehicles. ML is grounded in mathematics and probability and allows a program to make predictions and detections.

A lot of generative AI is supported by ML.
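That point about ML resting on mathematics and probability can be illustrated with a toy example (not drawn from Kelley’s remarks): a hypothetical logistic-style model that converts feature counts, say, suspicious words and links in a message, into a probability a program can act on. The weights here are invented for illustration.

```python
import math

def sigmoid(z):
    # Squash a raw score into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical hand-set weights for two input features
# (e.g., counts of suspicious words and links in a message).
WEIGHTS = [1.2, 0.8]
BIAS = -2.0

def predict_risk(features):
    # A linear model: weighted sum of features plus a bias term,
    # converted into a probability with the sigmoid function.
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return sigmoid(z)

# A message with many suspicious signals scores high;
# a clean message scores low.
risky = predict_risk([3, 2])   # probability near 1
clean = predict_risk([0, 0])   # probability near 0
```

In a real ML system, the weights would be learned from data rather than set by hand, but the core idea is the same: mathematics and probability producing a prediction.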

Before embarking on AI solutions, businesses want to be clear about the regulations that would apply in their industry and create policies for use, among other steps. Attorneys and information technology experts can advise about privacy and security, but “you need to make sure the ones you work with have the AI expertise you need,” Kelley said.

Ideally, businesses build security into their AI and ML systems from the start and, before launch, have a plan in hand that engineering, compliance, development, and operations worked together to create, she said. The security program should allow for the transparency, recordkeeping, risk assessments, and auditing required for compliance with applicable regulations.

Kelley’s hope for AI guidelines underway in the European Union and United States is that they’ll be helpful and not stand in the way of innovation.

“With any new regulation that comes in, the hope is that we can create very strong guidelines and guardrails for adopters,” so companies can incorporate the requirements in ways that make sense for them, she said.