Boards need to do more to understand the ethical implications of using artificial intelligence (AI) to support the business, especially around decision making, says the Institute of Business Ethics (IBE).

A new board briefing published by the IBE offers practical advice on how the ethical challenges of AI can be addressed and looks at the expertise required in the boardroom.

The guidance, called Corporate Ethics in a Digital Age, presents nine challenges around the use of AI and suggests questions boards can put to executives on key issues, such as how AI systems can avoid bias, withstand cyber-attacks, keep data secure, treat customers, employees, and contractors fairly, and ensure accountability.

The report suggests that directors need to be able to challenge underlying assumptions about the technology, ask the right questions, insist on answers they can understand, and set limits on how AI and machine learning are used.

“Boards need to understand how AI has affected decision making and be clear about where the machine’s capability to make decisions ends,” says the guidance.

“Machines themselves are amoral. They essentially rely on rule-based analysis and cannot deal with uncertainty and the unforeseen,” says the IBE. “The tasks to which AI is applied and the nature of its decisions will reflect the values of those who employ it. This suggests that, however difficult it is to organize, it should always be possible to override an AI decision when it is clearly wrong,” it adds.

The report uses case studies to highlight the real-world dilemmas that boards are facing and suggests four ethical principles that boards should apply to the way algorithms are used. These are:

  1. Reliability—Do we keep to our promises?
  2. Honesty—Do we deceive and lie to people?
  3. Transparency—Do we operate in secret, or can we explain our decisions?
  4. Respect—Do we trample over the interests of others to get what we want?

“Boards have to decide where to draw the line between the opportunities of using technology to further business objectives and the risk of inadequate controls which end up infringing individual rights or otherwise endangering the company’s reputation,” says Peter Montagnon, associate director at the IBE and author of the report.

“Lapses cannot just be blamed on AI. Someone has to be accountable and, in the corporate world, accountability rests with the board. This is why it is imperative that directors know how to ask the right questions and can trust the information received,” he adds.