By Innocence Project
As Black History Month comes to a close, we want to share some growing concerns about artificial intelligence (AI), a technology whose use in policing has the potential to deeply exacerbate racial disparities.
Robert Williams’ harrowing experience exemplifies this danger. Mistakenly identified by facial recognition technology (FRT), the Michigan resident and father of two was wrongly detained for 30 hours on a theft charge. His case was not an isolated incident. There are at least seven confirmed cases of misidentification due to the use of facial recognition technology, six of which involve Black people who were wrongfully accused: Nijeer Parks, Porcha Woodruff, Michael Oliver, Randall Reid, Alonzo Sawyer, and Robert Williams himself.
Facial recognition technology has been shown to misidentify people of color at disproportionate rates, in part because its algorithms are less reliable at distinguishing the facial features of people with darker skin tones. Coupled with potential officer biases, this creates a dangerous cocktail that can lead to misidentifications and wrongful arrests. It also bears stark parallels to past misapplications of forensic techniques like bite mark analysis.
The Innocence Project is actively challenging the misuse of AI in policing. We advocate for proactive measures like pre-trial litigation, policy interventions, and community involvement to prevent unreliable and biased AI from doing further harm.
Last year, the Biden administration issued an executive order to set standards for and manage the risks of AI, including a directive to develop “tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.” However, no federal policies are currently in place to regulate the use of AI in policing.
In the meantime, Innocence Project policy advocate Amanda Wallwin tells us that concerned community members can still influence local leaders and encourage them to regulate how law enforcement and other local agencies use these technologies in their communities.