Dead soldiers in Clearview AI (Revised June 15th)

[Image: Ukrainian flag]

The war between Russia and Ukraine rages on. One way for the Ukrainian resistance to raise awareness of the number of dead Russian (and Ukrainian) soldiers is to use Clearview AI, the facial recognition company, whose services can detect faces and connect them to, for instance, social media profiles. It’s also a way for the Ukrainian Ministry of Digital Transformation and five other Ukrainian agencies to identify dead soldiers scattered on and around battlefields.

On January 6th 2021, two weeks before the inauguration of Joe Biden as president, we could witness the attack on the Capitol in Washington, D.C. Afterwards, authorities could tap into Clearview AI’s services and, quite easily, identify hundreds of participants in these illegal activities. Many of them have been prosecuted and some sentenced to jail. Clearview AI has spent years amassing billions of photos from the public Internet, making it remarkably good at pinpointing human beings for anyone with a Clearview AI account. An image I have of you will be matched against this gigantic database, which will probably tell me it is you, even if we haven’t met for years (or ever).
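Clearview’s system itself is proprietary, but the underlying technique is well known: each face is reduced to a numerical embedding, and identification is a nearest-neighbour search among the stored embeddings (billions of them, in Clearview’s case). Here is a minimal sketch of that idea using the open-source face_recognition library; the filenames and the toy two-person database are, of course, made up for illustration:

```python
# A minimal sketch of face identification via embeddings, using the
# open-source face_recognition library. This is NOT Clearview AI's
# proprietary system; the filenames below are made up for illustration.
import face_recognition

# "Enroll" a toy database: one 128-dimensional embedding per known photo.
database = {}
for name, path in [("person_a", "person_a.jpg"), ("person_b", "person_b.jpg")]:
    image = face_recognition.load_image_file(path)
    database[name] = face_recognition.face_encodings(image)[0]

# Embed the unknown photo the same way.
unknown_image = face_recognition.load_image_file("unknown.jpg")
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Identification is a nearest-neighbour search: a smaller Euclidean
# distance between embeddings means more similar faces.
names = list(database)
distances = face_recognition.face_distance(list(database.values()), unknown_encoding)
best_name, best_distance = min(zip(names, distances), key=lambda pair: pair[1])

# The library's conventional match threshold is about 0.6.
if best_distance < 0.6:
    print(f"Probable match: {best_name} (distance {best_distance:.2f})")
else:
    print("No confident match")
```

The difference between this toy and Clearview is mostly scale: the same search over billions of scraped photos, with the engineering needed to make it fast.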

The podcast Click Here has a good episode on this and on how it’s used in Ukraine. On the one hand, employees of the Ministry of Digital Transformation use proper Clearview AI accounts and are thus able to match most images of dead soldiers with real people, even when years have passed, the eyes are missing and parts of the face are distorted. They inform both Ukrainian and Russian relatives and tell them where to retrieve the body.

More problematic is the fact that groups affiliated with the Ukrainian IT Army appear to use an account too, also informing Russian relatives, though in an even more condescending and hostile way. Russian relatives probably feel neither gratitude nor appreciation when they suddenly receive images of dead bodies, especially accompanied by gloating or condescending messages.

Even though I remain a skeptic, there are some reasons for using this kind of technology.

  1. War is gruesome and disgusting. People die, and they should preferably be identified. Computers and programs can help here, making identification much easier and faster than humans can.
  2. War crimes are committed and should be investigated. Technology can help here too.
  3. Russian authorities cannot be relied on to inform relatives that their sons have died in accidents, wars or “special military operations”. They can lie, and this is where technology can help establish the truth.
  4. Identification of people is not dependent on favourable relations with another nation’s authorities. Identification can be made without that nation’s consent, because its citizens are in databases elsewhere anyway.

There are more cons, however, and some of them are really strong.

  1. These databases will be targeted by states, state-sponsored organizations, rogue organizations and individuals.
  2. States will strive to acquire similar databases in order to identify anyone anytime anywhere.
  3. Presuming that Russian relatives will feel anger at their government and/or gratitude towards Ukrainians for sending images of their dead is a bad assumption. Rather, it can galvanize public support for the Russian authorities.
  4. The hope that grieving mothers’ movements will direct their anger at the Russian regime is likewise misplaced. Why would they, especially when anonymous messages from foreigners tell them they are blind to facts and support an evil leader?
  5. Disinformation warfare 1 – whom to believe: a random person from another country claiming my relative is dead, or the national authorities?
  6. Disinformation warfare 2 – I can claim that you are a traitor and use this tool to prove it.
  7. Disinformation warfare 3 – can “photoshopped” images be run through Clearview AI?
  8. Disinformation warfare 4 – this kind of technology can trigger an even worse response and method of war, spiralling further down.
  9. Misidentification of individuals happens in every other computer system, so why shouldn’t it happen with Clearview AI?
  10. Images are gathered without consent or notification, and for how long will they be kept?

Similar systems in use today include the combination of Skynet and the Integrated Joint Operations Platform in China. They are very creepy and should probably be banned altogether, because the more of this technology there is, the more it will be used. Based on a decision in May, Clearview AI is no longer allowed to sell its database to private businesses in the US, nor to Illinois state agencies (for five years in the latter case). At this point, the database comprises 20 billion facial photos.

But. After all, it’s rather easy to stay emotionally detached if you’re not in Ukraine, living your life, albeit with inflation and a shaky economy. The war is far away, and it’s easy to say that this weaponized use of images is wrong. But in a different situation, with war, death, fear and suffering around me, I’d probably be doing it myself.