Clearview AI founder and CEO Hoan Ton-That has previously boasted that his now-notorious facial recognition software relies on a database of over 10 billion images. But now, thanks to a ruling from Australia’s national privacy regulator, the company that some have glibly warned could “end privacy as we know it” will have fewer data points in a country the founder once called home.
That’s the determination from the Office of the Australian Information Commissioner (OAIC), which found the company’s data-scraping practices breached Australians’ privacy and violated the Australian Privacy Act 1988. Per the ruling, Clearview must now cease collecting facial images of people in Australia and destroy any existing images and face templates it collected from the country.
In a statement, Australian Information Commissioner and Privacy Commissioner Angelene Falk said the “covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” claiming it “carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”
The ruling marks one of the most significant blows to Clearview AI yet and comes one year after the company was pressured into retreating from Canada following a pair of federal investigations into the business.
But this one strikes closer to home.
In an emailed statement to Gizmodo, Clearview founder and Australian citizen Hoan Ton-That said he was “disheartened” by the agency’s ruling, arguing it misinterpreted his technology’s value to society.
“I grew up in Australia before moving to San Francisco at age 19 to pursue my career and create consequential crime fighting facial recognition technology known the world over,” Ton-That said. “I am a dual citizen of Australia and the United States, the two countries about which I care most deeply. My company and I have acted in the best interests of these two nations and their people by assisting law enforcement in solving heinous crimes against children, seniors, and other victims of unscrupulous acts. We only collect public data from the open internet and comply with all standards of privacy and law. I respect the time and effort that the Australian officials spent evaluating aspects of the technology I built.”
According to the Australian agency, Clearview violated the country’s privacy protections in four key ways: it collected data en masse without users’ consent (which, given its reliance on scraped third-party social media data, is pretty much a given); it collected that data through “unfair means”; it failed to notify users that their personal information had been collected; and it didn’t take reasonable steps either to ensure the data it collected was accurate or to ensure compliance with the Australian Privacy Principles.
More crucially, Falk noted that the risks associated with the mass collection of biometric data simply aren’t proportionate to Clearview’s stated goal of combating crime.
“When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes,” Falk said. “The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.”
In a statement sent to Gizmodo, Clearview attorney Mark Love disagreed with the agency’s conclusion and claimed it lacks jurisdiction. “To be clear, Clearview AI has not violated any law nor has it interfered with the privacy of Australians,” Love said. “Clearview AI does not do business in Australia, does not have any Australian users.”
Even if Clearview does not sell its facial recognition service to Australians, it’s almost certainly the case that some Australian faces are caught up in the company’s 10-billion-image dragnet.
Australia’s ruling marks the culmination of a joint investigation launched in partnership with the U.K. Information Commissioner’s Office dating back to June 2020. Since then, calls by privacy advocates and lawmakers to curb Clearview’s reach have heated up around the world.
Earlier this year, privacy groups in Austria, France, Greece, Italy, and the U.K. took legal action against the company, filing complaints with their respective data protection authorities. One of those groups, U.K.-based Privacy International, released a statement at the time alleging Clearview “contravenes a number of other GDPR principles, including the principles of transparency.”
Meanwhile, in the U.S., a bipartisan group of lawmakers recently proposed new legislation that would ban police from buying illegally gathered data from brokers, naming Clearview AI. In a press release, lawmakers supporting that bill accused Clearview of using “illicitly obtained photos to power a facial recognition service it sells to government agencies, which they can search without a court order.”