PimEyes: Free face-search tool can track down all the images of you on the internet

The new facial recognition search engine returns photos you might not remember ever being in

Green face recognition markings on the face of a short-haired young woman in an airport building. Cape Town, South Africa. May 2019. Getty Images

At the beginning of this year, alarm bells began to sound over the growing use of facial recognition technology. One company, Clearview, was revealed to have assembled a searchable database of three billion images collected from the internet, and was selling access to law enforcement agencies.

Few of us had seen it in action, as it was only available to organisations willing to pay for it. Its real-world effects, and the dangers posed by its flaws, were possibilities highlighted mainly by privacy campaigners.

Last week, that changed. A new facial recognition search engine, PimEyes, now gives anyone with internet access a freely available demonstration. Upload a photograph of someone, and it will show you all the other photos of that person in its database. Pay $10 (Dh37), and you can access links to the pages where those photos appear. It has never been easier to put a name to a face.

Meanwhile, two companies working at the forefront of facial recognition – Amazon and IBM – have publicly stated their concerns over misuse of the technology. Amazon banned use of its software by law enforcement for a year, while IBM has backed out of the game completely.

At first glance, PimEyes seems innocuous enough. Uploading a photo of yourself returns other photos of you from its database. (Unlike Clearview, it does not show photos from major social media websites, most notably Facebook.) The results page looks a little like a Google image search for your own name. But crucially, even without a name, it finds you with unsettling accuracy. In a test I ran, it unearthed photographs I had never seen before, including one of me with people I don’t even remember meeting. The AI memory of me is better than mine – and accessible to anyone with a photograph of my face. 

Rhodri Marsden tries out PimEyes for himself, and is surprised by the results.

PimEyes is evidently the little brother of more comprehensive services operated by the likes of Clearview and Amazon. But it provides a clear illustration not only of how the technology could be misused, but also of the lack of transparency surrounding its development. The photos in its database are publicly available on the web, but there is concern that their surreptitious collection has weaponised them.

When Clearview was challenged on this by the American Civil Liberties Union, company lawyer Tor Ekeland replied: “Clearview AI is a search engine that uses only publicly available images … It is absurd that the ACLU wants to censor which search engines people can use. The First Amendment forbids this.” 

With the technology growing more powerful, real-time facial recognition – the ability of a camera to register a face and match it to an identity – has become a reality, almost under the radar. Amazon and IBM are household names, but the other companies jostling for pole position in this space are not.

Companies with large ongoing surveillance contracts include Idemia (France), Tech5 (Switzerland) and AllGoVision (India). In a recent interview, ACLU lawyer Matt Cagle expressed concern over this. “The public is largely in the dark about the state of the surveillance vendor market,” he said. “You have corporate entities making policy decisions without democratic transparency.” 

On one hand, there is growing concern about the threat posed to our privacy by a new and awesome technology. On the other, there is the threat of too much trust being placed in it by law enforcement, such that its inaccuracies and biases – particularly against people of colour – result in miscarriages of justice.

Facial recognition systems have been predominantly trained on white male faces, after all. One study last year found that Amazon Rekognition had an accuracy of only 68.6 per cent when identifying faces of women of colour. It may be no coincidence that Amazon and IBM have backtracked in the same month dominated by the Black Lives Matter protests in the US and around the world.

Amazon has been selling Rekognition to law enforcement since at least 2018, but now it advocates “stronger regulations to govern the ethical use of facial recognition technology”.

IBM’s chief executive, Arvind Krishna, has also emphatically rejected the technology his own company spent years developing. “IBM no longer offers general purpose facial recognition or analysis software,” he said.

“[It] firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms.”

Critics have pointed out that IBM was trailing its competitors and may have been seeking an exit from the space anyway, but its withdrawal is a powerful statement either way.

It is not clear whether the facial recognition juggernaut can be halted, nor whether the cease-and-desist orders issued by the likes of Facebook to prevent image collection are having any effect. Human rights organisations continue to bring legal challenges. But the technology's implications are now becoming clearer to the public, and its use poses a growing ethical question for businesses and governments alike.