Facial recognition failures lead to wrongful arrests in the U.S.
Facial recognition software has misidentified at least eight Americans who were then arrested, The Washington Post reports.
At least eight people in the United States have been wrongfully arrested after being misidentified by facial recognition software. According to The Washington Post, police rely on the artificial intelligence technology to detain suspects, often without additional evidence.
Problems with identification
The newspaper analyzed police reports, court records, and interviews with officers, prosecutors, and defense attorneys. The findings suggest the problem may be significantly larger: prosecutors rarely disclose the use of facial recognition, and only seven states legally require them to do so. The total number of wrongful arrests caused by AI errors remains unknown.
In the eight identified cases, police failed to take basic investigative steps, such as checking alibis, comparing distinctive physical features, or analyzing DNA and fingerprint evidence. In six of the cases the suspects' alibis were ignored, and in two, evidence contradicting the police's assumptions was overlooked.
In five cases, crucial evidence was not gathered. The Washington Post cites the example of a person arrested for attempting to cash a forged check, where police did not even check the suspect's bank accounts. In three cases, physical characteristics that contradicted the AI's identification were ignored, as with a woman in an advanced stage of pregnancy who was accused of car theft.
In six cases, witness statements were not verified. In one, a security guard confirmed the identification of a watch-theft suspect despite not having been present during the theft.
Concerns about technology
Facial recognition software performs almost perfectly in laboratory conditions, but its effectiveness in real-world use remains questionable. Katie Kinsey of NYU notes the lack of independent testing of the technology's accuracy on blurry surveillance images. Research by neuroscientists at University College London shows that users tend to trust AI decisions blindly, leading to inaccurate judgments.
The "Washington Post" emphasizes that trust in AI systems can prevent proper assessment of situations, which is particularly dangerous in the context of the justice system.