Ukraine war: Controversial face ID company enters the fray

Kyiv, Ukraine - The information war continues in the third week of Russia's invasion of Ukraine, but a new tool has entered the battlefield.

Knowing who's who is of utmost importance in a war zone.  © IMAGO / Panthermedia

As reported by Reuters, Ukraine's Defense Ministry started working with Clearview AI and its facial recognition database.

According to a statement from Clearview AI's CEO, Hoan Ton-That, the tech was offered free of charge to Ukraine as a special operation, and Russia does not have access to the company's services.

Lee Wolosky, a former diplomat and now an advisor to Clearview AI, said that one main use for the face ID service would be at checkpoints, to vet people moving through Ukraine.

Other uses could be to check IDs of Russian soldiers, work against the spread of misinformation, and to identify citizens and soldiers killed during the war.

Ton-That claimed that the database at Ukraine's disposal has over two billion images from Russia's Facebook equivalent, VKontakte, which is a full 20% of all images on the platform.

The Ukrainian Defense Ministry is expected to deploy Clearview AI this week, but Ton-That said it is unclear exactly how the technology will be used.

Facial recognition has pitfalls

Surveillance cameras could be misused with databases like Clearview AI's.  © Collage: IMAGO / Michael Gstettenbauer, YAY Images

This development was met with skepticism in some quarters.

Albert Fox Cahn, head of the New York City-based Surveillance Technology Oversight Project, warned, "We’re going to see well-intentioned technology backfiring and harming the very people it’s supposed to help."

One problem common to all face ID services, and one that could come up in Ukraine, is the potential to misidentify people.

The Gender Shades project, which studied facial recognition software from tech giants like Amazon, found that racial bias is a huge issue. The face ID software the project analyzed was up to 34% more likely to misidentify people of color, and the bias hit women of color hardest. Given the number of Black students currently trying to flee the country, that's not an insignificant issue.

Even if the technology works as intended, it is a privacy rights nightmare, and the company is in serious legal trouble for scraping images from social media platforms without notifying users or asking their permission.

According to Reuters, Clearview AI is currently in hot legal water in multiple countries, including the US, and has even been banned from collecting images from users in Australia and the UK.

Bigger tech companies have already dipped their toes into the boiling pool of facial recognition, got scalded, and backed off. Meta, for example, was recently sued by Texas for gathering users' biometric data, including images of their faces, and has discontinued its face scan database.

The IRS also dabbled in mandatory face ID, before user and expert backlash made the government agency backtrack.

The introduction of facial recognition technology to a war zone carries some serious risks, and it remains to be seen how effective the new tool will be, or how it will be used or misused.

Cover photo: IMAGO / Panthermedia
