
Clearview AI ordered to delete all images of Australians scraped from social media by Privacy Commissioner

November 3, 2021
Facial recognition. Photo: AdobeStock
Clearview AI, the controversial US-based facial recognition software company co-founded by Australian entrepreneur Hoan Ton-That, has been ordered to delete all Australian images on its database following multiple breaches of the country’s privacy laws.

A 15-month joint investigation by the Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) into Clearview AI’s use of data scraped from the internet found the company’s practices were “unreasonably intrusive and unfair”.

Privacy Commissioner Angelene Falk said Clearview AI breached the Privacy Act because it scraped biometric information from the web for its facial recognition tool without consent, and by unfair means. Additionally, she found the company did not take reasonable steps to notify people that their personal information had been collected, or to ensure that the information it disclosed was accurate.

The company has come under increasing pressure from social media companies and privacy regulators since a 2020 New York Times exposé titled “The Secretive Company That Might End Privacy as We Know It”.

The software is used by law enforcement agencies around the world, including the FBI and US Homeland Security. Australian police forces also trialled its facial recognition tool over six months across 2019 and 2020.

The company’s database has more than three billion images taken from social media platforms and other public websites. Clearview claimed its data was only sold to law enforcement but a leak of its client list revealed it was also being used by commercial organisations.

Twitter, Facebook and YouTube sent Clearview requests to cease and desist and delete their data.

The Australian privacy investigation found that the privacy impacts of Clearview AI’s biometric system were not necessary, legitimate and proportionate to any public interest benefits.

“When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes,” Commissioner Falk said.

“The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.”


A lack of protection

Commissioner Falk said Clearview AI’s practices fall well short of expectations for the protection of personal information.

“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” she said.

“It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.

“By its nature, this biometric identity information cannot be reissued or cancelled and may also be replicated and used for identity theft. Individuals featured in the database may also be at risk of misidentification.”

The OAIC is currently finalising an investigation into the Australian Federal Police’s trial use of the technology and whether it complied with the privacy code for government agencies.

But the determination released today concluded that Clearview AI’s commercialisation of individuals’ data fell outside reasonable expectations.

The company argued that it did not use “personal” information and it fell outside the jurisdiction of Australia’s Privacy Act because it’s headquartered in the US. The company also claimed it stopped offering its services to Australian law enforcement shortly after the OAIC investigation began.

Commissioner Falk rejected Clearview’s argument because it was collecting sensitive biometric information from Australians on a large scale, for profit.

“These transactions are fundamental to their commercial enterprise,” she said.

“The company’s patent application also demonstrates the capability of the technology to be used for other purposes such as dating, retail, dispensing social benefits, and granting or denying access to a facility, venue or device.”

Falk said the case reinforces the need to strengthen privacy protections through the current review of the Privacy Act, including restricting or prohibiting practices such as the scraping of personal information from online platforms.

“It also raises questions about whether online platforms are doing enough to prevent and detect scraping of personal information,” she said.

The full determination is on the OAIC website.

The UK’s ICO is considering its next steps and any formal regulatory action that may be appropriate under the UK’s data protection laws.

Canada’s privacy regulator made a similar determination to Australia’s earlier this year. The EU has also been investigating the company.

Clearview plans to seek an administrative review of the Australian Privacy Commissioner’s ruling.

The ruling comes as Facebook’s parent company, Meta, announced it will cease the use of facial recognition software on the social media platform and delete the facial templates of more than a billion people.

The company said that decision won’t apply to its metaverse products.

NOW READ: Australian police are using the Clearview AI facial recognition system with no accountability