Clearview AI plans to bring facial recognition software beyond law enforcement to apps and companies


Clearview AI is expanding sales of its facial recognition software beyond its core business of serving the police to private companies, its chief executive told Reuters, inviting fresh scrutiny of how the startup capitalizes on billions of photos it scrapes from social media profiles.

The sales could be significant for Clearview, which presented Wednesday at the Montgomery Summit investor conference in California. They also fuel an emerging debate about the ethics of harnessing contested data to build artificial intelligence systems such as facial recognition.

Clearview’s use of publicly available photos to train its tool has earned it high marks for accuracy, but also regulatory trouble: the UK and Italy fined Clearview for breaching privacy laws by collecting images online without consent, and the company this month settled with US civil rights campaigners over similar allegations.

Clearview primarily helps police identify people through images on social media, but that business is under threat due to regulatory investigations.

The settlement with the American Civil Liberties Union prohibits Clearview from providing its social media search capability to corporate clients.

Instead of comparisons against online photos, the new private-sector offering matches people against ID photos and other data that customers collect with the subjects’ permission. It is intended to verify identities for access to physical or digital spaces.
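Clearview has not published the internals of this verification product. As a rough sketch of how a 1:1 selfie-to-ID check of this kind typically works, here is an illustrative example using the open-source face_recognition library; the filenames and matching threshold are placeholders, not Clearview’s actual parameters.

```python
# Minimal 1:1 face-verification sketch using the open-source
# face_recognition library -- illustrative only, not Clearview's API.
import face_recognition

# Reference photo collected with the subject's permission (e.g. an ID
# photo) and the live selfie to verify. Filenames are placeholders.
id_image = face_recognition.load_image_file("id_photo.jpg")
selfie_image = face_recognition.load_image_file("selfie.jpg")

# Encode each detected face as a 128-dimensional embedding; an empty
# list means no face was found in the image.
id_encodings = face_recognition.face_encodings(id_image)
selfie_encodings = face_recognition.face_encodings(selfie_image)

if not id_encodings or not selfie_encodings:
    raise ValueError("No face detected in one of the images")

# Compare embeddings; a smaller distance means a likelier match.
# 0.6 is this library's default tolerance, not an industry standard.
distance = face_recognition.face_distance(
    [id_encodings[0]], selfie_encodings[0]
)[0]
print(f"distance={distance:.3f}, match={distance <= 0.6}")
```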

Vaale, a Colombian app-based lending startup, said it is adopting Clearview to match selfies to ID photos uploaded by users.

Vaale will save about 20% on costs and improve accuracy and speed by replacing Amazon.com Inc.’s Rekognition service, CEO Santiago Tobón said.

“We can’t have duplicate accounts and we have to avoid fraud,” he said. “Without facial recognition, we cannot operate Vaale.”
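The Rekognition service Vaale says it is replacing exposes this kind of selfie-to-ID check through its CompareFaces operation. Here is a minimal sketch of that call with boto3, assuming AWS credentials are configured and using placeholder filenames and threshold:

```python
# Sketch of a selfie-to-ID check via Amazon Rekognition's CompareFaces
# API, the service Vaale says it is replacing. Assumes AWS credentials
# are configured; filenames and threshold are placeholders.
import boto3

client = boto3.client("rekognition")

with open("id_photo.jpg", "rb") as source, open("selfie.jpg", "rb") as target:
    response = client.compare_faces(
        SourceImage={"Bytes": source.read()},
        TargetImage={"Bytes": target.read()},
        SimilarityThreshold=90,  # only return matches at >= 90% similarity
    )

# Each entry in FaceMatches carries a similarity score for a face in
# the target image that matched the source face.
for match in response["FaceMatches"]:
    print(f"similarity: {match['Similarity']:.1f}%")
```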

Amazon declined to comment.

Clearview AI CEO Hoan Ton-That said a US company selling visitor management systems to schools has also signed up.

He said customers’ photo databases are stored for as long as they want and are not shared with others or used to train Clearview’s AI.

But the face-matching system that Clearview sells to companies was trained on social media photos. He said the diverse collection of public images reduces racial bias and other weaknesses that plague rival systems limited by smaller datasets.

“Why not have something more precise that avoids errors or any type of problem?” Ton-That said.

Nathan Freed Wessler, an ACLU lawyer involved in the group’s case against Clearview, said using ill-gotten data is an inappropriate way to pursue the development of less biased algorithms.

Regulators and others must be able to force companies to abandon algorithms that benefit from disputed data, he said, noting that the recent settlement did not include such a provision for reasons he could not disclose.

“It’s an important deterrent,” he said. “When a company chooses to ignore legal protections to collect data, it must bear the risk of being held liable.”

© Thomson Reuters 2022

