In Part One, we explored how responsible AI requires intentional design, transparency, and oversight. We examined how AI systems developed without ethics at the core can reinforce bias, obscure accountability, and make harmful decisions at scale. But what happens when this abstract concern becomes deeply personal?
Imagine an AI algorithm deciding your car is worthless before a single human ever lays eyes on it. That’s exactly what happened to me and my fiancé after a recent car accident. It’s a real-world example of how AI, when misused in high-stakes environments like insurance, can make costly, emotionless decisions that defy logic, erode trust, and leave people to clean up the mess.
Let me walk you through our experience, what it reveals about the dark side of AI in critical systems, and what you can do to protect yourself and advocate for better.
Experienced and written by AnitaB.org Copywriter Anna Seibold.
When AI Gets It Wrong: My Personal Insurance Nightmare
My fiancé and I were recently in a car accident. Thankfully, we weren’t physically hurt, but his car, a rare 1-of-25 model he’d worked hard to upgrade and maintain, took a hit. We submitted a claim, and instead of asking us to upload photos, the insurer had my fiancé mark where the damage was on a diagram of a car. The car went into storage for the holiday weekend, waiting to be assessed. Then came the call.
On Tuesday morning, before the car had even been looked at in person or in photos, his insurance agent said they were ready to total it and part it out. The decision had been made entirely by their AI system. No human review. No discussion. Just some scribbles on a diagram of a car and a snap judgment. The truth was that the car wasn’t even close to totaled. It needed about $1,600 in repairs: a control arm, new tires, and a fender extender. That’s it.
To have an AI decide the fate of such a rare and meaningful car was more than frustrating. It was a wake-up call. When artificial intelligence is left to make high-stakes decisions unchecked, the consequences can be personal, costly, and deeply unfair.
The Bigger Picture: AI in Insurance and Other Critical Systems
Our experience wasn’t just a fluke; it’s part of a growing pattern. Across industries, AI is being used to speed up decisions in everything from insurance claims to healthcare diagnoses, hiring, and loan approvals.
In theory, AI makes these systems more efficient. In reality, too many decisions are made without human review, context, or accountability. Insurance companies use AI to determine fault, assess damage, and recommend payouts.
And it’s not just insurance. On Reddit and other forums, users are sharing stories of job applications being auto-rejected, medical claims denied, and loans declined because of AI systems prioritizing speed over nuance and accuracy. These “glitches” are examples of how AI, when misapplied, can make flawed, high-stakes decisions without room for human judgment.
Why This Erodes Trust in AI
Stories like mine—and so many others—raise a crucial question: Who’s responsible when artificial intelligence gets it wrong? There’s a famous IBM slide from 1979 that still resonates today:
“A computer can never be held accountable. Therefore, a computer must never make a management decision.”
And yet, we’re letting algorithms make more and more of them. The problem isn’t just that AI systems can be wrong; it’s that they often operate as black boxes. We don’t know what data they’re trained on, what assumptions they’re making, or how they’re weighing outcomes. When decisions are made without transparency or human oversight, the results can be not just inaccurate but actively harmful.
As AI ethics researcher Dr. Timnit Gebru has said, “AI systems are only as good as the data and assumptions they’re built on. Without transparency and oversight, they can make decisions that are not only wrong but harmful.”
When AI failures go unchecked, they break processes and trust. And once trust is gone, it’s hard to rebuild.
How to Protect Yourself from AI Mishaps
AI isn’t going away, but that doesn’t mean we have to accept every automated decision at face value. When you’re faced with a flawed automated decision, you have more power than you think. Here are a few ways to protect yourself:
- Always ask for human review. If an AI-generated decision doesn’t make sense—whether it’s an insurance quote, job rejection, or denied claim—request escalation. You have the right to a second look.
- Document everything. Keep records of conversations, photos, receipts, and timestamps. When you’re up against a system that runs on data, having your own can make all the difference.
- Know your rights. In many industries that use AI, companies are still legally required to offer human intervention or provide clear explanations. If you’re unsure, ask for documentation or clarification in writing.
- Speak up when systems fail. Share your experience, file a formal complaint, and ask for accountability. If you’re in a position to influence your company’s AI practices, do it.
We can’t control how every company uses AI, but we can learn to navigate it with more awareness, demand better standards, and protect ourselves when the system gets it wrong.
Navigating AI with Confidence, Not Fear
It’s easy to feel overwhelmed or even powerless in the face of rapid AI adoption. But the solution isn’t to reject artificial intelligence entirely. It’s to engage with it more critically and consciously. As Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered AI, put it:
“We need to move from fear of AI to fluency in AI—understanding how it works, where it fails, and how we can shape it to serve society.”
That shift from fear to fluency starts with curiosity, awareness, and collective action. Here’s how you can lead the way:
- Stay informed about AI in your industry. Whether you work in product, data science, or policy, understanding how AI is used in your domain empowers you to ask smarter questions and spot red flags.
- Push for ethical AI at work. Advocate for transparency, human oversight, and responsible procurement. Use your seat at the table to ask: “Who might this harm?” and “Whose voice is missing?”
- Mentor others. Help your peers, mentees, and teams develop critical AI literacy. Share your own experiences navigating these systems, both the wins and the lessons.
We need more leaders who can ask hard questions and center humanity in tech. The more fluent we are, the better equipped we’ll be to use AI with purpose, not just power.
Responsible AI Starts with Us
My story started with a car wrongly declared a total loss, but it’s really about something much bigger: the hidden costs of unchecked automation. When artificial intelligence makes decisions without human oversight, people get hurt. Value is lost. Trust is broken. But it doesn’t have to be this way.
We have the power to demand better systems: ones built on accountability, transparency, and respect for the people they’re meant to serve. Whether you’ve already had a brush with AI gone wrong or you’re just starting to ask tougher questions, your voice matters.
Donate to AnitaB.org to fuel responsible innovation, led by those who refuse to accept “just the way it is” as good enough.