AFP called out for trialling controversial facial recognition software

The Australian Federal Police (AFP) is once again dabbling in facial recognition technology, with two platforms having been tested for operational use.

Daniel Croft
Wed, 25 Oct 2023

Despite the Office of the Australian Information Commissioner ruling that its earlier use of Clearview AI facial recognition technology broke Australian privacy laws, the AFP has begun testing two new facial recognition search engines – PimEyes and FaceCheck.ID.

Speaking with The Guardian, the AFP said that a small cohort had used the two platforms as part of trials to see whether they would be suitable for use in a “law enforcement or criminal environment”, adding that neither platform had been endorsed for AFP use.

Both PimEyes and FaceCheck.ID allow users to track down people based on their online presence. Users need only upload a photo of an individual’s face to search a database of similar images.

PimEyes said it uses facial recognition technology and is more than just a reverse-image search tool.

“In the results, we display not only similar photos to the one you have uploaded to the search bar but also pictures in which you appear on a different background, with other people, or even with a different haircut,” it said.

PimEyes chief executive Giorgi Gobronidze said the subscription-based service has a database of almost 3 billion faces and allows for roughly 118,000 searches a day. The platform is advertised as a way for individuals to find published photos of themselves and delete any unwanted and “illegal usage of your image”.

Unfortunately, there are no measures in place to prevent the platform from being misused, the consequences of which could be dire.

During a Senate estimates hearing on 23 and 24 October, Greens Senator David Shoebridge revealed that the “AFP had some 35 connections with FaceCheck.ID and 742 connections with pimeyes.com”, with the AFP saying that these instances involved 27 users.

AFP chief operating officer Charlotte Tressler said that in the case of eight users, the visits to these websites were used for training “to understand the existing open-source AI tools available online and did not involve use or access of personal information”.

Tressler added that there were a total of 10 occasions where the sites were visited for this purpose – nine for PimEyes and one for FaceCheck.ID.

Shoebridge reminded the estimates hearing that PimEyes is a particularly risky platform, being run out of the former Soviet republic of Georgia, and that it has been “repeatedly criticised for enabling unlawful surveillance and stalking”.

“This keeps happening with the AFP, whether it’s Clearview, PimEyes or FaceCheck,” he said.

Responding to Shoebridge’s concerns, Tressler stressed that the use of either platform is not endorsed by the AFP.

“The AFP has not endorsed the use of either of these platforms, which is why we have been, as a matter of priority, doing a review of all use of the platforms and, of course, ensuring that that does not occur again,” she said.

According to The Guardian, the Australian Border Force also accessed PimEyes between 1 January and 26 July this year; however, Home Affairs acting group manager for technology and major capability Radi Kovacevic has said that nobody’s image was uploaded.

The AFP has previously been in hot water over facial recognition technology through its use of the controversial Clearview AI.

Clearview AI became infamous worldwide when it was discovered that its database of faces was created through the scraping of billions of images of people without their consent.

The resulting tool can identify individuals by cross-referencing these faces, surfacing personal information such as name and location.

The AFP first trialled the technology in 2020, when Clearview AI co-founder and CEO Hoan Ton-That offered it to law enforcement agencies worldwide.

A year later, information and privacy commissioner Angelene Falk ruled that the technology broke Australia’s privacy law due to the way it collected information without permission and through unfair means. In addition, the AFP was slammed for using the tech.

“Commissioner Falk found the AFP failed to complete a privacy impact assessment (PIA) before using the tool, in breach of clause 12 of the Australian Government Agencies Privacy Code, which requires a PIA for all high-privacy risk projects,” said the OAIC.

“The AFP also breached Australian Privacy Principle (APP) 1.2 by failing to take reasonable steps to implement practices, procedures and systems in relation to its use of Clearview AI to ensure it complied with clause 12 of the code.”

Despite being called out by the privacy watchdog, the AFP continued to communicate with Ton-That, showing an interest in adopting the software or similar facial recognition technology again in the future.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
