Technology has played a major part in helping companies improve performance and boost productivity, and artificial intelligence (AI) looks set to find even greater efficiencies—though perhaps at a higher human cost.

There have long been concerns about the impact AI-based systems can have in the workplace. In recent years, companies have run afoul of regulators by using AI tools for monitoring performance to the point of surveillance.

In 2021, food delivery companies Deliveroo and Foodinho were each fined by Italy’s data protection authority because their apps’ algorithmic rating systems were allegedly biased and violated the European Union’s General Data Protection Regulation (GDPR) principles around transparency and lawfulness of processing. The systems used mathematical formulas to prioritize or penalize riders depending on how many jobs they accepted, fulfilled, completed on time, or rejected.
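Neither company’s actual formula was public, which was central to the transparency objection. As a purely hypothetical sketch (the function, weights, and inputs below are invented for illustration and do not reflect either company’s system), a rating of this kind might reduce a rider’s job history to a single priority score like this:

```python
def rider_score(accepted: int, completed_on_time: int, rejected: int,
                offered: int) -> float:
    """Return a 0-100 priority score from raw job counts (illustrative only)."""
    if offered == 0:
        return 0.0
    acceptance_rate = accepted / offered                 # rewarded
    reliability = completed_on_time / max(accepted, 1)   # rewarded
    rejection_rate = rejected / offered                  # penalized, context-blind
    score = 100 * (0.4 * acceptance_rate + 0.5 * reliability - 0.1 * rejection_rate)
    return max(0.0, min(100.0, score))

# A rider who declines jobs for a legitimate reason (illness, unsafe conditions)
# is scored exactly like one who simply ignores them -- the formula carries no
# context, which is the kind of fairness and transparency gap regulators cited.
print(rider_score(accepted=45, completed_on_time=44, rejected=5, offered=50))   # ~83.9
print(rider_score(accepted=35, completed_on_time=34, rejected=15, offered=50))  # ~73.6
```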

In 2020, the U.K. Information Commissioner’s Office (ICO) began an investigation into Barclays Bank over allegations it had effectively been spying on employees via a software system called “Sapience” that allowed it to track worker productivity by monitoring computer usage. In 2017, the bank faced similar criticism when it rolled out a system known as OccupEye that tracked how long people spent at their desks.

Since the Covid-19 pandemic, with employees more commonly working from home, monitoring tools have become increasingly common—and resented.

In August, the EU’s agency for occupational safety and health released a report examining the risks and opportunities that AI-based worker management systems pose for employees’ physical and mental wellbeing.

The report found that while AI can improve workers’ occupational health and safety, for example by enabling better monitoring of hazards and of employees’ mental health, the technology can also make workers feel they are losing control over their jobs and can create an unhealthy, pressurized environment with little or no transparency about how decisions are made or how they can be challenged.

Workers can become “dehumanize[d],” the report said.

It also found that AI usage can create mistrust, limit worker participation, and blur the line between work and personal life, as well as harm workers’ physical and psychosocial wellbeing, contributing to musculoskeletal and cardiovascular disorders, fatigue, stress, anxiety, and burnout.

“What AI says is the most efficient and cost-effective method of doing something does not necessarily translate into the most efficient and cost-effective method for a human to do it.”

Sarah Edwards, Senior Employment Law Adviser, Howarths

The report suggested employers ensure a strong “prevention through design” approach from the start.

Legal experts agreed employers should use strict protocols to ensure AI-driven technologies do not harm employees’ health.

“Humans are not machines, and they don’t always work in what a data-driven process would consider a more efficient way,” said Sarah Edwards, senior employment law adviser at law firm Howarths. “What AI says is the most efficient and cost-effective method of doing something does not necessarily translate into the most efficient and cost-effective method for a human to do it.”

“There must be a process of human intervention to put the data and suggestions in context and take account of the people that will ultimately be carrying out the work,” Edwards added. She warned that if algorithms push workers beyond what is possible, they will become stressed and lose motivation.

“It leads to burnout, then sickness absence, then reduced performance and productivity, which in turn increases costs. Then, a business is back to square one,” she said.

Malcolm Gregory, partner and head of employment law at law firm RWK Goodman, said, “Using AI to focus employees on meeting targets, completing tasks, or other general management is fine to a point. But employers should remember they cannot simply set an algorithm running and walk away from any legal responsibility for the wellbeing of their staff. Employers still have a legal obligation to provide a safe system of work for their employees, and this extends to AI-driven work processes.”

Natalie Cramp, chief executive officer of data science company Profusion, said companies should not impose a new way of working on employees without informing and consulting them and winning their buy-in by showing them the benefits.

“It’s essential to show employees the benefits of AI in supporting or improving their work. This means looking at how new AI systems can enhance existing processes and behaviors,” she said. “The best way to achieve this is to consult employees—ask them what they need and bring them along on the decision-making journey.”

There must also be transparency about how the AI works and a means of challenging its decisions.

“This puts power back in the hands of employees,” said Cramp. “Employees need an element of control—for example, the right to appeal the data used in decision-making. Otherwise, they will quickly grow to distrust it.”