Facial recognition failures lead to wrongful arrests in the U.S.

Facial recognition software has mistakenly identified at least eight Americans, leading to their arrests, the "Washington Post" reports.

AI errors: eight people wrongly arrested
Image source: © Getty Images | 2023 John Keeble

In the United States, at least eight people have been wrongfully arrested after facial recognition software misidentified them. As the "Washington Post" reports, police often detain suspects flagged by the technology without gathering additional evidence.

Problems with identification

The newspaper analyzed police reports and court records and interviewed officers, prosecutors, and defense attorneys. The findings suggest the problem may be significantly larger than the known cases: prosecutors rarely disclose the use of AI, and only seven states legally require them to do so. The total number of wrongful arrests caused by AI errors remains unknown.

In the eight cases that have been identified, police failed to take basic investigative actions, such as checking alibis, comparing distinctive features, or analyzing DNA and fingerprint evidence. In six cases, the suspects' alibis were ignored, and in two, evidence contradicting the police's assumptions was overlooked.

In five cases, crucial evidence was not gathered. The "Washington Post" cites the example of an individual arrested for attempting to cash a forged check; police never even checked the suspect's bank accounts. In three cases, physical characteristics that contradicted the AI's identification were ignored, as with a woman in an advanced stage of pregnancy who was accused of car theft.

In six cases, witness statements were not verified. In one instance, a security guard confirmed the identity of a suspect in a watch theft despite not having been present during the incident.

Concerns about technology

Facial recognition software performs almost perfectly under laboratory conditions, but its effectiveness in practice remains questionable. Katie Kinsey of NYU notes the lack of independent tests verifying the technology's accuracy on blurry surveillance footage. Research by neuroscientists at University College London shows that users may blindly trust AI's decisions, leading to inaccurate judgments.

The "Washington Post" emphasizes that trust in AI systems can prevent proper assessment of situations, which is particularly dangerous in the context of the justice system.

© essanews.com