The more that companies rely upon artificial intelligence in their business operations, the more vital a role chief ethics and compliance officers play in ensuring that the use of such technologies aligns with their organization’s mission, core values, and regulatory requirements.

Managed the right way, the opportunities for application of artificial intelligence (AI) are endless—but managed the wrong way, so are the legal, regulatory, reputational, and financial risks. “A myriad of opportunities to leverage AI highlight why an ethical mindset is critical to protect an organization from unintended, unethical consequences,” Maureen Mohlenkamp, a principal in Deloitte’s risk and financial advisory practice, said during a recent Deloitte Webcast on AI ethics.

In broad terms, artificial intelligence encompasses technologies that are designed to mimic human intelligence. Because AI’s application is still in its early stages, companies across all industries have only just begun to scratch the surface of its full potential in the business world.

In the financial services industry, for example, banks are using machine-learning algorithms to sift through vast oceans of data to uncover anomalies and possible fraud scenarios in payment transactions in real time. In healthcare, hospitals are using AI to more accurately diagnose and treat patients. In the transportation industry, AI is being used to create self-driving vehicles intended to reduce accidents and—eventually—replace human drivers for some businesses (think trucking, shipping, ride-sharing, etc.).
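
To make the fraud-detection example concrete, here is a minimal sketch of unsupervised anomaly detection applied to payment data. It assumes Python with NumPy and scikit-learn; the transaction features, values, and contamination setting are hypothetical and purely illustrative, and flagged transactions are routed to a human reviewer rather than acted upon automatically.

  # Illustrative sketch: flag unusual payment transactions for human review.
  # Features and values below are hypothetical, not drawn from any real bank.
  import numpy as np
  from sklearn.ensemble import IsolationForest

  rng = np.random.default_rng(seed=42)

  # Simulated transactions: [amount, hour_of_day, days_since_last_transaction]
  normal = rng.normal(loc=[80.0, 14.0, 2.0], scale=[30.0, 4.0, 1.0], size=(500, 3))
  suspicious = np.array([[9500.0, 3.0, 0.01], [7200.0, 2.0, 0.02]])
  transactions = np.vstack([normal, suspicious])

  model = IsolationForest(contamination=0.01, random_state=0)
  labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

  for idx in np.where(labels == -1)[0]:
      print(f"Transaction {idx} flagged for review: {transactions[idx]}")

The point of such a control is less the specific model than the fact that its outputs feed a documented human-review process.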

Enter AI ethics

“AI ethics is about integrating ethical constructs into how organizations develop new technologies,” Mohlenkamp said. Chief ethics and compliance officers (CECOs) play a very important supporting role in this process. Consider the six key steps below as you think about developing your company’s AI ethical framework.

1. Develop an AI Code of Ethics. Many companies, as a matter of practice, include in their Code of Business Conduct reflection questions to support individual decision making in a wide variety of risk areas. The same approach can be applied to questions around the ethical use of AI. Examples of reflection questions might include:

  • How is artificial intelligence used in my specific job function, and what does it help me achieve?
  • What consent do I need (from customers, employees, etc.) around that data?
  • What third parties will be handling sensitive data, and for what purpose?
  • Does that purpose align with the organization’s core mission and values?

“The challenge is to ensure that the guidance provided on this topic does not become so specific that it is silo-bound and simply reflects the nature of the department that has introduced it,” Guendalina Dondé, head of research at the Institute of Business Ethics (IBE), told Compliance Week. “Issues can and should extend across different departments and activities.”

What’s also important is for the company to recognize what expertise it needs and be willing to seek it out—data scientists, software engineers, analytics experts. Tae Wan Kim, associate professor of business ethics at the Tepper School of Business, Carnegie Mellon University, put it this way: “There are computer scientists who are interested in ethics, and there are ethicists who are interested in computer science … but it’s not easy to find one single person who can address these two aspects at the same time.”

It’s also important that a speak-up culture be in place that complements an AI ethics policy, Dondé said. And those responsible for fielding employee concerns and complaints should be aware of any potential ethical lapses created by AI, not unlike any other risk.

Ali Shah, head of technology policy at the U.K. Information Commissioner's Office, discusses why it's fair to describe AI as the "next big thing" during a session at Compliance Week Europe.

2. Embed an ethical framework into AI. “Given the rapid adoption of AI in business, there is the risk that the governance systems required to mitigate the potential risks of its deployment are overlooked,” Dondé said. This would be a mistake. The ethics team needs assurance that the AI systems align with the company’s core values, while legal and compliance needs assurance that the company complies with relevant rules and regulations, especially concerning data privacy and cyber-security.

“This is going to require a team approach, with different lenses of expertise and different areas of focus both inside and outside the organization,” said Christopher Adkins, executive director of the Notre Dame Deloitte Center of Ethical Leadership, who spoke on the Deloitte Webcast. “We really need to think from the beginning about, what is our design mindset? Not just what can be built, but what should be built.”

Consider creating internal workshops or working groups that bring together different departments and functions—led by IT, data security, and privacy, in collaboration with ethics and compliance, HR, risk, legal, procurement, and senior management—to share AI-related issues from various perspectives. In conducting an AI impact assessment, questions to explore may include the following (a simple way to record the answers is sketched after the list):

  • How is the company using AI?
  • Where does this happen within the organization?
  • What job functions should be thinking about AI ethics?
  • What data is being fed into the algorithms?
  • Does the AI solution’s intended purpose align with the organization’s mission and values?
  • How do you get consent around the data? Do customers need to be informed, for example, that you’re capturing their data?
  • Is there a reporting process in place to escalate issues concerning ethical lapses in AI?
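
One lightweight way to make such an assessment actionable is to record the answers in a structured form so that open gaps are visible to the working group. The Python sketch below is hypothetical; the fields and checks simply mirror the questions above and would need to be adapted to an organization’s own assessment template.

  # Hypothetical sketch: record AI impact assessment answers so that open gaps
  # (missing consent, no escalation path, etc.) are visible to reviewers.
  from dataclasses import dataclass, field

  @dataclass
  class AIImpactAssessment:
      use_case: str
      business_unit: str
      data_sources: list = field(default_factory=list)
      consent_documented: bool = False
      aligns_with_mission: bool = False
      escalation_path_defined: bool = False

      def open_issues(self) -> list:
          issues = []
          if not self.consent_documented:
              issues.append("Consent for the underlying data has not been documented.")
          if not self.aligns_with_mission:
              issues.append("Alignment with mission and values has not been confirmed.")
          if not self.escalation_path_defined:
              issues.append("No process to escalate ethical lapses has been defined.")
          return issues

  assessment = AIImpactAssessment(
      use_case="Resume screening",
      business_unit="HR",
      data_sources=["applicant resumes", "historical hiring outcomes"],
      consent_documented=True,
  )
  for issue in assessment.open_issues():
      print(issue)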

Think of it as an AI ethics-by-design framework. Much like privacy-by-design, which builds data protection and privacy controls in from the outset, AI ethics-by-design builds in consideration of the ethical use of AI data and technology from the outset.

The board and C-suite’s role in ethics

As more companies increase their use of artificial intelligence (AI) for risk management and compliance efforts, it is imperative that the C-suite and the board not only play an active role, but also have visibility into how their companies manage the use of AI.

A recent online poll conducted by Deloitte of 565 C-suite and other executives at companies that use AI found that nearly half (48.5 percent) expect to increase their use of AI for risk management and compliance efforts in the year ahead. Yet, only 21 percent said their organizations have an ethical framework in place for AI use within risk management and compliance programs.

As with all ethical considerations, the C-suite and board must drive that conversation from the start. That tone needs to be established before the company leverages any new AI solutions and products.

“C-suite and board executives need to ask questions early and often about ethical use of technology and data—inclusive of and beyond AI—to mitigate unintended and unethical consequences,” said Maureen Mohlenkamp, a principal in Deloitte’s risk and financial advisory practice. “Further, a board-level data committee should be established to discuss enterprise-wide AI use, monitoring, and modeling with appropriate C-suite leaders.”

The good news is that companies are more likely than not to involve top leaders in developing ethical AI practices. More than half of respondents (53.5 percent) in the Deloitte poll said that AI ethics responsibilities are established with input from the C-suite.

Boards and C-suites themselves also appear increasingly to care about the ethical use of AI—made apparent by the pointed questions they are starting to ask, Mary Galligan, a managing director in Deloitte’s risk and financial advisory practice, said during a recent Deloitte webcast on AI ethics.

Boards and C-suites are particularly concerned about—and are focusing on—data privacy matters as they relate to AI use, and the legal, financial, and reputational damage that could result if those matters are not managed appropriately. “Board members are acutely aware of the consumers’ demand for the ethical use of their data, as well as the ethical use of the AI processes and solutions being free from bias,” Galligan said.

Examples of the sort of questions they are starting to ask include:

  • Do we have legal authority to incorporate personal data into AI solutions?
  • What is the company’s moral obligation around the data that is being collected and the solutions that are being developed?
  • Just because we can collect large amounts of personal data, should we?
  • Where is the data coming from?
  • Do consumers know we’re collecting their data, and do they understand how we’re using that data?
  • How will the company be publicly perceived by the way it uses AI data?

“They want to know that the use of AI won’t lead to unintended, unethical consequences,” Mohlenkamp said. “They are asking what they should consider as a responsible approach to ethical decision making. They’re asking how to put AI ethics into action. And in today’s increasing regulatory environment, they want to know what ethical dilemmas or legal and regulatory hurdles they might encounter as they scale the use of AI.”

—Jaclyn Jaeger

3. Conduct an AI ethics gap analysis. The next step should be to test and monitor the data to ensure that it’s of sound quality and to reduce the risk of inherent biases and inaccuracies. “The objective of zero bias is unlikely to be realized. That is true with humans, with machines, or a combination of both,” said Nicolas Economou, chair of the law committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. But companies can develop ways to determine the impact that algorithms will have and the extent to which the processes they have in place produce desirable effects, he said.

For example, many large companies today are using AI in their hiring practices to sift through résumés and narrow them down to the top job candidates. Here, an analysis could be performed to ensure that the résumés selected through the AI process align with the decisions that HR would have made, Economou said.
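
As a rough illustration of what such an alignment check might look like, the sketch below compares an AI screener’s shortlist against the decisions HR reviewers would have made and compares selection rates across a demographic attribute. The data and the gender labels are invented purely for illustration; a real check would use properly governed data and more rigorous fairness metrics.

  # Hypothetical sketch: compare an AI screener's picks with HR reviewers' decisions
  # and check whether selection rates diverge across a demographic group.
  # All data below is invented purely for illustration.
  ai_selected = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
  hr_selected = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
  gender      = ["F", "M", "F", "M", "F", "M", "M", "F", "F", "M"]

  agreement = sum(a == h for a, h in zip(ai_selected, hr_selected)) / len(ai_selected)
  print(f"AI/HR agreement rate: {agreement:.0%}")

  def selection_rate(group: str) -> float:
      picks = [a for a, g in zip(ai_selected, gender) if g == group]
      return sum(picks) / len(picks)

  print(f"Selection rate (F): {selection_rate('F'):.0%}")
  print(f"Selection rate (M): {selection_rate('M'):.0%}")
  # A low agreement rate, or a large gap between groups, would prompt deeper review.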

Real-world scenarios provide cautionary tales about what can happen when proper testing and monitoring is not done. Amazon, for example, once trained computer models to vet job candidates as part of its hiring process, but because the models were trained on historical hiring data from the male-dominated technology industry, the algorithm learned to favor men over women.

“AI ethics is as much about understanding the risks as it is about establishing a process for avoiding them,” Mohlenkamp said. “Review existing organizational policies, procedures, and standards to address existing gaps, then expand existing policies or build new ones accordingly.”

4. Conduct due diligence on third parties. It is also prudent to monitor any third parties that handle sensitive data to ensure that they commit to similar ethical AI standards. “The design of these systems might be outsourced, and it is important to conduct ethical due diligence on business partners,” Dondé said.

“A similar principle applies to clients and customers to whom AI technologies are sold,” Dondé said. “Testing a third-party algorithm in a specific situation is also important to ensure accuracy.”
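
A simple expression of that testing idea is sketched below: before relying on a vendor’s model, score it against a small labeled holdout set that the organization controls. The vendor_score function here is a made-up stand-in for a third-party model or API, and the acceptance threshold is arbitrary.

  # Hypothetical sketch: validate a vendor-supplied scoring function against a
  # labeled holdout set before relying on it. vendor_score is a stand-in stub,
  # not a real third-party API.
  def vendor_score(transaction_amount: float) -> bool:
      """Stand-in for a third-party fraud flag; True means 'flagged'."""
      return transaction_amount > 5000

  holdout = [
      (120.0, False), (8500.0, True), (40.0, False),
      (6200.0, True), (4900.0, True), (75.0, False),
  ]

  correct = sum(vendor_score(amount) == label for amount, label in holdout)
  accuracy = correct / len(holdout)
  print(f"Vendor model accuracy on holdout set: {accuracy:.0%}")
  if accuracy < 0.90:
      print("Below the agreed acceptance threshold; escalate to the vendor and the ethics team.")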

5. Educate and train. “It is not realistic to expect that every company can train every employee to become an expert at AI,” Economou said. Instead, focus on training and educating those who make extensive use of AI in their job functions. Make sure they know which fundamental questions to ask, whom to ask, and what answers reasonably make sense. Also, those developing algorithms and managing data will need to be specially trained to identify and mitigate bias within AI applications.

AI competence builds confidence. Users of AI should be competent enough to understand its limitations, know where weaknesses or biases in the data may lie and how to correct for them, and ensure that the decisions made by AI technologies are consistent with those that an expertly trained, qualified employee would have made. “That’s a nascent challenge, that there is more AI than competent people to utilize it across the board,” Economou said.

“Employees and other stakeholders need to be empowered to take personal responsibility for the consequences of their use of AI,” Dondé said. They need not only the technical skills to build or use AI, but also an understanding of the potential implications it can have, she said.

6. Establish accountability. Accountability is another important consideration in the AI ethics process. Companies simply do not have the prerogative to blame ethical lapses on AI systems. Business leaders, regulators, enforcement authorities, customers, and other stakeholders will demand full transparency and accountability, and accept nothing less. “You can’t hold a system accountable,” Economou said. “You have to hold humans accountable.”

The difficult question, however, is who should be held accountable when an AI system produces an unethical outcome, whether that outcome is intended or not. “You need to be able to map out the accountability,” he said. “Who is accountable and responsible for what decision?” It’s a complex question with no easy answer.

Accountability should also extend to third-party service providers and vendors. The IBE recommends including in contracts with third parties a clause defining each party’s responsibilities and limitations. “Although it is not always practicable or comprehensive and it can’t substitute for individual empowerment, this can help to prevent a situation where all parties have shared responsibility and, therefore, it becomes difficult to attribute accountability appropriately,” Dondé said.

Finally, it’s important for CECOs to stay on top of the latest developments in AI ethics. The National Institute of Standards and Technology, for example, recently announced that it is developing standards for the use of AI. The Council of Europe, too, is currently working to develop a certification program and legal framework for the use of AI applications. Such guidance will provide real-world, practical instruments that CECOs can turn to in their important quest to help advance the ethics of AI.