Essay · AI Governance · Facial Recognition

The Algorithm Said It Was Her. That Was Enough.

Angela Lipps spent more than five months in jail for a crime committed over a thousand miles away. She lost her home, her car, and her dog. The facial recognition software was wrong. Basic investigative steps were never taken.

By Anjali Bindra Patel

On July 14, 2025, Angela Lipps was at her home in Elizabethton, Tennessee, babysitting four young children. U.S. Marshals arrived with guns drawn. She was under arrest for bank fraud committed in Fargo, North Dakota, a state she says she had never set foot in and had no connection to whatsoever.

Here is what had happened. The West Fargo Police Department, investigating a series of bank fraud cases involving a fake U.S. Army military ID, used Clearview AI, a facial recognition tool with a database of billions of photos scraped from the internet. The software identified Lipps as a potential suspect based on images from the fraudulent ID and bank surveillance footage. West Fargo shared that report with Fargo police. According to Lipps's attorneys, no one then verified whether she had ever traveled to North Dakota. No one checked her alibi. No one asked where she was at the time of the crimes. Bank records that would have cleared her immediately were available the entire time. Nobody asked for them.

A warrant was issued. Marshals were sent. And Angela Lipps, a 50-year-old mother of three and grandmother of five who had spent nearly her entire life within a hundred miles of her Tennessee home, was taken away in front of the children she was watching.

She spent more than three months in a Tennessee jail. She was extradited to North Dakota at the end of October. Charges were dismissed on Christmas Eve after her attorney in Fargo produced bank records proving she was in Tennessee during the alleged crimes. She was released from custody that day.

By then, she had lost her home, her car, and her dog.

The story was largely unknown outside of local North Dakota media until March 14, 2026, when a West Fargo man created a GoFundMe on her behalf. It went viral. National outlets picked it up. CNN covered it on March 29. The Fargo police chief held a press conference and acknowledged "a couple of errors" in the investigation. He stopped short of a direct apology and said the case remains open, with charges possibly subject to refiling.

Angela Lipps spent more than five months in custody for a crime she had nothing to do with, and the world didn't find out for eight months because no one with a platform knew her name until a stranger decided to raise money for her.

What This Case Is Really About

This is not primarily a story about one bad algorithm. It is a story about what happens when institutions stop doing the work that institutions are supposed to do, and treat a tool's output as a conclusion rather than a starting point.

Facial recognition software produces a probability match. It says this face has features similar to this other face. That is useful information, the same way a witness tip is useful information. It is not proof. It is not a warrant. It is a reason to investigate further, not a reason to stop investigating.
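To make that concrete, here is a minimal, hypothetical sketch in Python of the comparison at the core of systems like this: faces are reduced to numerical embeddings, and a "match" is just a similarity score clearing a tunable threshold. The names, vector size, and threshold here are illustrative assumptions, not Clearview's actual pipeline.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two face embeddings, ranging from -1 to 1."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Stand-ins for embeddings a real system would extract from photos.
    probe = np.random.rand(512)      # face from the fraudulent ID
    candidate = np.random.rand(512)  # face from a scraped database photo

    score = cosine_similarity(probe, candidate)
    THRESHOLD = 0.6  # illustrative; vendors tune this dial, trading false
                     # positives against false negatives

    if score >= THRESHOLD:
        # This is a lead, not an identification. Nothing in the score says
        # whether the flagged person was ever in North Dakota.
        print(f"Possible match (score={score:.2f}) -- verify independently")

The threshold is a dial, not a fact. Raise it and real matches get missed; lower it and innocent people get flagged. Everything after that line is supposed to be the job of human investigators.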

Lipps's attorneys put it plainly after the Fargo police press conference: "Officers knew that Angela was a Tennessee resident, and we have seen no investigation by officers to determine whether she traveled to or was in North Dakota at the time of the bank thefts. Instead, an officer used AI facial recognition as a shortcut for basic investigation, resulting in an innocent woman being detained and transported halfway across the country to answer for charges that she had nothing to do with."

The Fargo police chief, to his credit, acknowledged that his department should not have been relying on a neighboring agency's AI system and has since prohibited its use. He said officers would now submit all facial recognition identifications for review. These are real changes. They are also changes that should have been in place before Angela Lipps ever heard the words "North Dakota."

The algorithm said it was her. Nobody asked whether it was right. That is not just an AI failure. It is a human failure enabled by AI.

Who Gets Flagged

Angela Lipps is a white grandmother from rural Tennessee, and her case is receiving significant national attention. That matters, and it is worth saying directly, because the documented history of facial recognition errors shows a clear pattern: the people most likely to be misidentified are Black women, followed by other people of color, followed by women generally. The technology performs worst on the people who were already most vulnerable to wrongful arrest before AI entered the picture.

A 2019 study by the National Institute of Standards and Technology found that many facial recognition algorithms falsely matched Black and Asian faces at rates 10 to 100 times higher than white faces, depending on the system tested. The ACLU has documented multiple cases of Black men wrongfully arrested based on facial recognition matches, including Robert Williams and Michael Oliver, both in Detroit, and Nijeer Parks in New Jersey, each of whom spent time in custody before the errors were caught.

Angela Lipps's case is not an outlier. It is a window into a system that has been operating this way for years, mostly without national coverage. The window is open right now because her story went viral. The question is what we do while we can see through it.

What Accountability Actually Looks Like

A policy change after the fact, announced under public and legal pressure, is not accountability. Accountability is what happens before the arrest. It is a requirement that facial recognition matches be verified through independent investigation before anyone loses their liberty. It is an active obligation to seek alibi evidence, not a passive choice to wait for a defense attorney to produce it. It is a clear, enforceable standard for what constitutes sufficient evidence to deprive a person of their freedom, and a clear prohibition on treating an algorithm's output as meeting that standard on its own.

Ian Adams, an assistant professor of criminology at the University of South Carolina, told CNN that police are adopting AI rapidly and largely on the basis of vendor promises, with little independent evidence of efficacy. "The overwhelming amount of the time," he said, "it's not just a technology problem, it's a technology and people problem. We get nightmare scenarios when we don't have people doing what they're supposed to do, with technology that they're using inappropriately."

That framing is important. The answer is not to stop using AI in law enforcement investigations. It is to use it as what it actually is: one data point among many, subject to verification, never sufficient on its own to deprive a person of their liberty. Every jurisdiction using facial recognition technology should be required to have that standard written down, reviewed, and enforced before the next arrest warrant is issued on the basis of an algorithm's guess.

The Part That Should Stay With Us

I think about this case through the lens of what I do every day, which is work inside an institution and think about how institutions make decisions about people. The pattern here is one I recognize: a system under pressure to be efficient, reaching for a tool that promises faster answers, and in the process removing the friction that was doing the actual work of protecting people.

The slowness of building a real case before depriving someone of their freedom is not inefficiency. It is the point. The inconvenience of verifying an alibi is not a bug in the system. It is the protection. When we use AI to skip those steps, we are not improving the process. We are removing the part of it that was keeping innocent people out of jail.

Angela Lipps is home now. The charges were dismissed, which her attorney correctly notes is not the same as being cleared. Fargo police say she remains a person they are investigating. She spent more than five months in custody for a crime she had nothing to do with, lost everything she owned in the process, and the world only found out because a stranger in West Fargo decided she deserved better.

That last part might be the most important thing in the whole story. The system didn't surface this. A person did. That should tell us something about how much we can rely on the system to catch its own mistakes, and how much we still need human beings paying attention.

Speaking & Consulting

Anjali writes and speaks on AI governance, civil rights, and institutional accountability. If you're interested in bringing her to your organization or event, she'd love to hear from you.

Get in touch →

Anjali Bindra Patel

Chief Diversity Officer at Georgetown University Law Center. Attorney. Author of Humanity at Work (#1 Amazon Bestseller). TEDx Speaker. She writes and speaks at the intersection of AI governance, civil discourse, and institutional trust. Follow on X →

Views expressed are her own and do not represent any employer or institution.
