The U.K.’s data regulator has warned Clearview AI it could face a £17 million (U.S. $22.6 million) fine over its use of people’s data to power its facial recognition software.

The Information Commissioner’s Office (ICO) is particularly concerned that, while the tech firm’s services are no longer offered in the United Kingdom and the company has no U.K.-based customers, evidence suggests it has processed and “may be continuing to process significant volumes of U.K. people’s information without their knowledge.”

“I have significant concerns that personal data was processed in a way that nobody in the U.K. will have expected,” said Information Commissioner Elizabeth Denham in a statement.

Alongside its proposed fine for alleged serious breaches of U.K. data protection laws, namely the country’s version of the General Data Protection Regulation (GDPR), the ICO has issued a provisional notice ordering Clearview AI to stop further processing of U.K. citizens’ personal data and to delete the data it already holds.

The announcement Monday follows a joint investigation by the ICO and the Office of the Australian Information Commissioner (OAIC), which focused on Clearview AI’s use of images, data scraped from the internet, and biometrics for facial recognition.

At the beginning of November, the OAIC found Clearview AI in breach of Australian privacy laws. The company said it would challenge that decision.

Clearview AI’s app allows users to upload an image of an individual’s face and match it to photos of that person’s face collected from the internet. It then links to where the photos appeared. The system is reported to include a database of more than 10 billion images that Clearview AI claims to have taken from various social media platforms and other websites where the information is publicly available.

The company says its service helps police forces around the world track criminals by matching their photos to its database. The ICO noted U.K. law enforcement agencies used the app on a “free trial basis,” a trial that has since been discontinued.

The ICO’s preliminary view is that Clearview AI appears to have failed to comply with U.K. data protection laws by unlawfully collecting and unfairly processing U.K. citizens’ data without informing them how the information would be used and without any process in place to stop the data from being retained indefinitely.

The regulator also believes the company failed to meet the higher data protection standards required for biometric data (classed as “special category data” under the U.K. GDPR). The ICO further alleges Clearview AI asked individuals for additional personal information, including photos, a practice it believes “may have acted as a disincentive to individuals who wish to object to their data being processed.”

Clearview AI can make representations to the ICO before a final decision is made, which is expected by mid-2022. As with the GDPR fines imposed on British Airways and Marriott International, the proposed penalty can change, or even be dropped, depending on the evidence the company provides and the steps it takes to remediate any harm.

“I am deeply disappointed that the U.K. Information Commissioner has misinterpreted my technology and intentions,” said Clearview AI CEO Hoan Ton-That in an emailed statement. “We collect only public data from the open internet and comply with all standards of privacy and law.”

Kelly Hagedorn, partner at Jenner & Block, a law firm representing the company, said in an emailed statement: “The ICO’s assertions are factually and legally incorrect. Clearview AI is considering an appeal and further action.”