Facial recognition technology, under assault for alleged biases and misuse by law enforcement, could be facing a moment of reckoning.
IBM announced Monday it has discontinued research and sale of its general facial recognition tools and accompanying software, telling Congress it believes the technology should not be used “for mass surveillance, racial profiling, violations of basic human rights and freedoms.”
“IBM no longer offers general purpose IBM facial recognition or analysis software,” wrote CEO Arvind Krishna in a letter posted on the firm’s website. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” Krishna wrote that all artificial intelligence (AI) should be tested for bias, “particularly when used in law enforcement, and that such bias testing is audited and reported.”
In place of facial recognition technology, IBM recommends that national policy “should encourage and advance uses of technology that bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques.”
It will be interesting to see if other large players in the facial recognition space—Apple, Facebook, and Google, although there are dozens more—will follow IBM’s lead and cast the technology aside.
More likely, IBM’s announcement may cause tech companies—which have been struggling to find the right corporate tone in response to Black Lives Matter rallies—to completely rethink the way facial recognition technology works and how it should be used.
On Wednesday, Amazon announced a one-year ban on police use of Rekognition, its facial recognition software.
“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge,” Amazon said in a blog post. “We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”
A day later, Microsoft announced that it also won’t sell facial recognition software to police until a federal law is passed.
The moves by IBM, Amazon, and Microsoft come as facial recognition technology draws fire for racial biases that researchers have found baked into its algorithms.
One study by the National Institute of Standards and Technology found that the 139 facial recognition algorithms it examined misidentified African American and Asian faces at rates 10 to 100 times higher than Caucasian faces. A 2018 study of facial recognition software by researchers Joy Buolamwini and Timnit Gebru revealed the extent to which many such systems (including IBM’s) were biased. “This work and the pair’s subsequent studies led to mainstream criticism of these algorithms and ongoing attempts to rectify bias,” according to a story in The Verge.
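To make the “10 to 100 times” comparison concrete, the metric at issue is the false match rate per demographic group: how often a system declares two different people to be the same person. Below is a minimal, hedged sketch of that computation — the numbers are entirely made up for illustration and are not NIST data, and this is not NIST’s actual methodology.

```python
# Illustrative sketch of comparing false match rates (FMR) across
# demographic groups. All data below is fabricated for demonstration.

def false_match_rate(results):
    """Fraction of non-matching pairs the system incorrectly matched.

    `results` is a list of (predicted_match, actually_same_person) tuples.
    """
    false_matches = sum(1 for predicted, actual in results
                        if predicted and not actual)
    non_matches = sum(1 for _, actual in results if not actual)
    return false_matches / non_matches if non_matches else 0.0

# Hypothetical comparison trials: each entry is (predicted, actual).
results_by_group = {
    "group_a": [(True, False)] * 1 + [(False, False)] * 999,
    "group_b": [(True, False)] * 25 + [(False, False)] * 975,
}

rates = {group: false_match_rate(r) for group, r in results_by_group.items()}
disparity = rates["group_b"] / rates["group_a"]
print(rates)                      # per-group false match rates
print(f"disparity: {disparity:.0f}x")
```

A disparity of 25x in this toy example means the system falsely matches faces in one group 25 times as often as in the other — the kind of gap, at larger magnitudes, that the NIST study documented.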
Privacy concerns are paramount as well.
Fears of the Chinese government’s wide-ranging use of facial recognition technology to monitor its own citizens have been highlighted by publications including The Atlantic. But Western governments and police forces have also been using facial recognition to find suspects and monitor citizens, with technology that is almost completely unregulated.
And there has been blowback. Clearview AI, a company whose business model is selling access to law enforcement agencies for its facial recognition database of over 3 billion images scraped from social media, has been issued numerous cease and desist orders and is at the center of a number of privacy lawsuits. Facebook in January announced it would pay $550 million to settle a class-action lawsuit over its unlawful use of facial recognition technology.
So criticism of facial recognition technology is not new. What is new is the impetus for IBM’s decision, which has to be viewed through the lens of the moment. It is a corporate reaction to the Black Lives Matter movement that has been re-energized following the death of George Floyd, an African American man, at the hands of police in Minneapolis.
What may follow is a complete rethinking of how facial recognition technology should be used by law enforcement. Recently, the Black Lives Matter movement and protestors marching throughout the country have demanded that police be “defunded.” That doesn’t necessarily mean that police departments should be disbanded, but rather reorganized and streamlined. The argument goes that responsibilities that have for years defaulted as police matters would be more appropriately handled by other governmental and community organizations better positioned to handle them without bias or violence.
Why are police in charge of traffic enforcement and parking regulations? Why are officers assigned to public schools? Why are they paid to provide security at construction sites and large private events? Why are police departments tasked with enforcing public health rules during the coronavirus pandemic?
If it is true that facial recognition technology is being misused by a police system built on racial profiling and targeted brutality, then perhaps the technology itself should also be overhauled. If racial biases are baked into facial recognition algorithms, then the algorithms need to be discarded and rebuilt from scratch as something completely new.
It’s easier to discard algorithms than it is to eviscerate police department budgets and lay off police officers in the name of reforming a rotten system. Perhaps new guidelines and guardrails placed on facial recognition technology—to make the technology more accurate and less biased—could be applied to the much more difficult task of reforming the police.
Said another way, if government establishes regulations for the proper use of facial recognition software to squeeze out bias and racial profiling, could it not use those same principles to reform the country’s police departments? The two reform efforts may not be as different as they first appear.