10 May 2026

When AI Dunnit.


AI is being promoted as a tool to reduce human error in criminal investigations and healthcare, but I assert that AI creates a serious harm by its very nature: AI cannot be held accountable. Accountability is how we mere humans fix mistakes, for fear that we will be humiliated, be disciplined, or lose our jobs. None of this applies to AI, which merrily trots along even when people are harmed. Further, the real benefit of accountability is not punishment but preventing the same mistakes in the future. How do we do that with AI?

Angela Lipps, a grandmother from Tennessee, was falsely identified by the facial recognition technology (FRT) Clearview AI as part of a bank fraud scheme in Fargo, North Dakota. Angela was living a quiet life, caring for her family, when she was arrested and jailed, first in Tennessee and then in Fargo, for almost six months until she was released. By then she was traumatized and had lost her home. Fargo Police Chief Zibolski said, “We’re happy to acknowledge when we make errors, and we’ve made a few in this case, for sure.” His happiness is unlikely to be shared by Angela, and the promise of an 'overhaul' of the department's AI policy shouldn't hide the fact that no one was held responsible for the harm to Angela. A vague wave at AI is not the same as true accountability.

Angela's false arrest is not unique; there have been many documented false arrests. Harm from FRT false positives, as in Angela's case, is one problem, but what about false negatives, when a true criminal is let go? Who knows how many times that has happened, unless they are finally apprehended and an analysis shows the FRT was inaccurate. Research also shows that AI is "more prone to false positive errors when applied to people of color."

Police officers are trusting algorithms that they did not create and, quite frankly, don't understand. When reasons for false positives come to light, such as low image resolution, officers can treat them as warnings. But how low is too low? And for people who aren't white, when is FRT reliable? I obviously have no answers, only questions, and a discomfort with people being harmed while those in power vaguely wave at an algorithm rather than holding someone responsible. But whom can they hold responsible?

The use of facial recognition is growing not just because it *may* help correct errors (while certainly committing its own) but because it's a big money maker, so the questions of accountability matter:

"The global face recognition market was almost nine billion dollars in 2025, with projected growth to over 30 billion by 2034. Over a third of this market is in the U.S., but there is wide adoption of FRT around the world... Ten percent of U.S. police departments use FRT. The NYPD made 2,878 arrests resulting from FRT in the first five years of its use. The Metropolitan Police in London report 100 arrests using FRT in conjunction with mounted security cameras, including a suspect accused of kidnapping. Police in New Delhi used FRT to identify almost 3,000 missing children, and FRT has been used to identify refugee children who have been separated from their family. The National Center for Missing & Exploited Children (NCMEC) has used a tool called Spotlight, which makes use of FRT, to identify children who are victims of sex trafficking. In 2023, the FBI worked with NCMEC to identify or arrest 68 suspects of trafficking."

AI in healthcare is also big business, according to a 2025 report by Research Insights: "The global AI In Healthcare Market size is projected to be valued at USD 26.6 Billion in 2024 and reach USD 187.7 billion by 2030."

AI is used in many clinical tools and embedded in medical devices - it's the latter situation that gives rise to this story:

"In June 2022, a surgeon inserted a small balloon into Erin Ralph’s sinus cavity at a hospital in Fort Worth, Texas. According to a lawsuit filed by Ralph, Dr. Marc Dean was employing the TruDi Navigation System, which uses AI, to confirm the position of his instruments inside her head.

The procedure, known as a sinuplasty, is a minimally invasive technique to treat chronic sinusitis. A balloon is inflated to enlarge the sinus cavity opening, to allow better drainage and relieve inflammation.

But the TruDi system “misled and misdirected” Dean ... A carotid artery – which supplies blood to the brain, face and neck – allegedly was injured, leading to a blood clot ... After Ralph left the hospital, it became apparent that she had suffered a stroke. The mother of four returned and spent five days in intensive care [and] a section of her skull was removed “to allow her brain room to swell.” She finds it "hard to walk without a brace and to get my left arm back working, again.”

Who is to blame?

Matt Baxter, Director of Professional Liability, states: “From an insurance standpoint, AI is not really changing the exposure, because the liability still stands with the healthcare professional. They still have the same responsibility, whether they are using AI or not, to make sure the information is correct.”

One group of researchers cited a concern that puts responsibility in question: AI is a “black box,” "with no way to understand the AI's algorithm. This is problematic because patients, physicians, and even designers, do not understand why or how a treatment recommendation is produced by AI technologies. … Due to the black box feature, medical AI systems might make incomprehensible mistakes."

So the doctor who does not understand the algorithm is held responsible for the AI's mistakes and, worse, holding him or her liable does nothing to protect the next patient from that algorithm.

Mistakes are common so the question of responsibility is crucial: "A new study from researchers at Stanford and Harvard found that even today’s best artificial intelligence (AI) models make serious errors in a significant portion of medical cases … with the top-performing AI models producing 12 to 15 errors per 100 cases and the worst-performing models making mistakes in 40 out of 100 cases."

Would suing the AI company responsible make things safer? Maybe the loss of money would prompt companies to revisit their tech and pull products that aren't safe.

Whatever the answer, the question must be asked: when, not if, AI makes a mistake, how are the right people held accountable and what is being done to ensure the mistake doesn't happen again?
