Closing Argument

What Happens When Your Social Media Photos End Up in the Hands of Police

Law enforcement agencies, from police departments to ICE, are using facial recognition, sometimes leading to wrongful arrests.

Hoan Ton-That, CEO of Clearview AI, demonstrates the facial recognition software using his own face in 2022.

This is The Marshall Project’s Closing Argument newsletter, a weekly deep dive into a key criminal justice issue. Want this delivered to your inbox? Subscribe to future newsletters here.

Facial recognition software is increasingly ubiquitous in modern life, and odds are good that you have at least one social media account or mobile device that uses a version of this technology. Just yesterday, my phone’s photos app sent me a slideshow with pictures of me and a close friend over the years, all selected by artificial intelligence and set to upbeat music.

But the technology is rapidly expanding beyond novelty and into public life. Retailers are increasingly using it to monitor for shoplifting. Madison Square Garden, the famous New York City venue, has also recently come under scrutiny for using facial recognition to keep out lawyers involved in lawsuits against the arena.

Police are using it too. Hoan Ton-That, CEO of facial recognition firm Clearview AI, recently told the BBC that U.S. police have completed more than 1 million photo searches on the company’s platform.

Clearview’s technology pairs facial recognition algorithms (which many companies offer) with a database of over 30 billion photos scraped from the internet — mostly from social media — without the consent of those photographed. The firm primarily markets the tool to local law enforcement.
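For readers curious about the mechanics, here is a rough sketch in Python of how this kind of face search is generally understood to work: each photo is reduced to a numeric "embedding," and a probe image is ranked against a gallery of stored embeddings by similarity. This is a simplified, hypothetical illustration, not Clearview's actual system; the random vectors stand in for embeddings a real system would produce with a trained neural network over billions of faces.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical gallery: 10,000 stored face embeddings, each a 128-number vector.
gallery = rng.normal(size=(10_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)  # normalize for cosine similarity

# A "probe" embedding from the photo being searched (here, just another random vector).
probe = rng.normal(size=128)
probe /= np.linalg.norm(probe)

# Cosine similarity between the probe and every gallery face.
scores = gallery @ probe

# The closest matches are returned as candidate identifications, ranked by score.
top_k = np.argsort(scores)[::-1][:5]
for rank, idx in enumerate(top_k, start=1):
    print(f"candidate #{rank}: gallery entry {idx}, similarity {scores[idx]:.3f}")

# Note: a search of this kind always returns its closest matches, even when the
# person in the probe photo is not in the gallery at all -- one way a
# confident-looking "lead" can point investigators to the wrong individual.
```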

Last week, The New York Times published the story of Randal Quran Reid, who was pulled over in Georgia in November 2022 and arrested for stealing designer handbags in Louisiana. Reid had never been to Louisiana and was the apparent victim of a mistaken identification by Clearview’s technology. Nevertheless, he spent six days in jail and thousands of dollars in legal fees before the case was sorted out.

“Imagine you’re living your life and somewhere far away says you committed a crime,” Reid told the Times. “And you know you’ve never been there.”

Perhaps the most troubling detail in Reid’s case is that nowhere in the documents used to arrest him — including the warrant signed by a judge — does it state that facial recognition was used.

Historically, these algorithms have performed worse on darker-skinned people than on White people. Some argue, however, that the technology is rapidly improving and that concerns about racial bias may be overblown.

Last year, Wired Magazine, which has closely followed the rise of policing technology, told the stories of three other men — all Black — who were wrongfully arrested after false IDs. For all three, the arrests caused serious financial and emotional burdens. “Once I got arrested and I lost my job, it was like everything fell, like everything went down the drain,” Michael Oliver told Wired.

Two of the cases took place in Detroit, where police have subsequently raised the standards for using facial recognition technology in criminal investigations. Police officials, like then-Detroit Police Chief James Craig, typically defend its use by noting that officers are only supposed to rely on the software to generate leads in an investigation — and not to determine who they arrest. But as a recent report from the Georgetown Law Center on Privacy and Technology noted in reference to this technology: “In the absence of case law or other guidance, it has in some cases been the primary, if not the only, piece of evidence linking an individual to the crime.”

False identification in the justice system is a problem that long predates facial recognition technology. Just last month, Florida man Sidney Holmes — who spent more than 30 years behind bars — was exonerated after prosecutors determined that he was likely misidentified by an eyewitness in a lineup. According to the legal aid organization Innocence Project, eyewitness misidentification is the “leading factor” in wrongful convictions.

As a 2022 study published in the Duquesne Law Review noted: “Wrongful convictions can occur when police use either of these identification methods without precautions.”

In at least one case, the technology has helped prove the innocence of someone accused of a crime. That case, chronicled in detail by The New York Times, led to Clearview making its product available to public defenders, in order to “balance the scales of justice,” as the company’s Ton-That put it.

But critics remain highly skeptical that the technology will ever have as much power to undo bad arrests as it does to generate them.

Internationally, the European Union, Australia, and Canada have all determined that Clearview’s technology violates their privacy laws. The company also had to agree not to sell its database to other companies under the terms of a 2022 settlement with Illinois for violating state privacy laws.

Meanwhile, in Russia and China, facial recognition has increasingly become a key instrument of state control. U.S. agencies, including the FBI and Immigration and Customs Enforcement, have also been pursuing their own research into the technology for law enforcement purposes. That includes a new app that migrants seeking asylum are now required to use to track their requests.

A 2021 government watchdog report found that the “use of facial recognition technology is widespread throughout the federal government, and many agencies do not even know which systems they are using,” according to The Washington Post. There have been congressional hearings and proposed legislation on the question since that time, but no concrete changes.

Jamiles Lartey is a New Orleans-based staff writer for The Marshall Project. Previously, he worked as a reporter for the Guardian covering issues of criminal justice, race and policing. Jamiles was a member of the team behind the award-winning online database “The Counted,” tracking police violence in 2015 and 2016. In 2016, he was named “Michael J. Feeney Emerging Journalist of the Year” by the National Association of Black Journalists.