The U.K. Information Commissioner’s Office (ICO) fined Clearview AI more than 7.5 million pounds (U.S. $9.4 million) for collecting people’s images from internet and social media sites without their knowledge or consent to create a global online database that could be used for facial recognition by law enforcement.

The ICO also ordered the company to stop scraping U.K. citizens’ publicly available data from the internet and to delete any such data it already holds.

The enforcement action follows a joint investigation concluded last November with the Office of the Australian Information Commissioner regarding Clearview AI’s practices.

At the time, the ICO proposed a £17 million fine (then-U.S. $22.6 million), suggesting that—like the British Airways and Marriott fine reductions—making representations with the U.K. regulator pays off. The official enforcement and monetary penalty notices are due to be published later this week.

Clearview AI’s service allows customers, including the police, to upload a person’s image to the company’s app and check for a match against more than 20 billion images in its database. The app then provides a list of images with similar characteristics to the photo, along with links to the websites those images came from.

Although the company no longer offers its services to U.K. organizations, many of the images in its database are likely of U.K. citizens, the ICO said, and those images remain accessible to customers in other countries. Their personal data is therefore being collected and sold without their knowledge or consent.

In a statement, U.K. Information Commissioner John Edwards said Clearview AI “not only enables identification of those people but effectively monitors their behavior and offers it as a commercial service. That is unacceptable.”

Edwards added, “People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement. Working with colleagues around the world helped us take this action and protect people from such intrusive activity.”

The ICO found Clearview AI breached U.K. data protection laws by failing to use U.K. citizens’ data transparently, fairly, and with a suitable legal basis. The company also failed to meet the higher data protection standards required for biometric data—classed as “special category data” under the European Union’s General Data Protection Regulation (GDPR)—and did not have a process in place to stop the data being retained indefinitely.

Further, when people asked the company if their details were on the database, Clearview AI allegedly asked them for more personal data—including photos—to check. The ICO suggested this tactic may have acted as a disincentive to individuals who wished to object to their data being collected and used.

Clearview AI maintains it has done nothing wrong, saying its technology and intentions have been “misinterpreted.”

In a statement, Lee Wolosky, partner at law firm Jenner & Block who acts as a spokesman for Clearview AI, said, “The decision to impose any fine is incorrect as a matter of law. Clearview AI is not subject to the ICO’s jurisdiction, and Clearview AI does no business in the U.K. at this time.”

In February, Italian regulator Garante fined Clearview AI 20 million euros (then-U.S. $22 million) for GDPR violations. France’s CNIL has also threatened to fine the company over its use of French citizens’ data.

In 2021, the Swedish Data Protection Authority fined the Swedish Police Authority 2.5 million Swedish krona (then-U.S. $300,000) over the police’s illegal use of the technology.

Editor’s note: This story was updated May 24 to correct the conversion of £7.5 million to U.S. dollars in the headline and first sentence.