Imagine this: You’ve been working with a trusted specialist for years. They’ve prescribed a specific medication that works for you—one that keeps your condition under control and allows you to live your life. Then, out of nowhere, your pharmacist informs you that your insurance company has denied coverage for that medication. Why? Because somewhere in a server room, an AI algorithm has decided that a cheaper alternative might work just as well for you.
This isn’t science fiction. It’s happening now. AI is playing doctor—not by asking you to turn your head and cough, but by making cold, calculated decisions about your health, often without understanding the nuances of your medical history or the expertise of your physician.
The Kafkaesque Nightmare Begins
The first sign of trouble is a message from your pharmacist: “Your insurance has denied your claim.” For most people, this is the start of a maddening journey through a labyrinth of bureaucracy. You don’t fully understand why your claim was denied, but you know one thing: you need your medication. Your blood pressure spikes as frustration sets in.
You call the pharmacy, only to be greeted by an automated voice system. “Press 1 for English, 2 for Español. In a few words, tell us why you’re calling.” After navigating this maze, you finally reach a human—someone who asks for your name, date of birth, and insurance details. Then comes the dreaded response: “Your claim has been denied.”
Why? The pharmacist doesn’t know. They’re just the messenger. The denial is the result of an algorithmic decision, one that doesn’t care about your years of successful treatment or the side effects of alternative medications. It only cares about cost.
Your next call is to your doctor’s office. Surely, they can help, right? Wrong. You’re transferred to a voicemail because it’s after 3 PM. “If this is an emergency, call 911,” the recording says. You leave a message and wait. The next day, someone from the office calls back, but they’re just as baffled as you are. “Your claim was denied,” they say. “We’re not sure why.”
The doctor’s office contacts the insurance company on your behalf, only to be told that the denial was based on the AI’s determination that you haven’t tried cheaper alternatives. Never mind that those alternatives might not work for you—or worse, might cause harmful side effects. The AI doesn’t care. It’s not a doctor. It’s a cost-cutting machine.
At this point, you’re left with two choices: pay out of pocket for the medication you know works, or gamble with your health by trying the cheaper alternatives the AI recommends. If you’re like many people, you grit your teeth and pay full price, furious that your health is being held hostage by an algorithm.
This isn’t just a personal inconvenience. It’s a systemic issue. Insurance companies are increasingly relying on AI to review and deny claims, often without human oversight. According to reports, AI-driven claim denials have skyrocketed, with some estimates showing a 16-fold increase in denials. And while 90% of these denials are overturned on appeal, the damage is already done. Patients lose time, money, and sometimes their health in the process.
The consequences of these AI-driven decisions are devastating. Patients are forced to delay or forgo treatment, leading to worsening health outcomes. Families are left scrambling to cover costs, sometimes draining their savings or applying for Medicaid just to keep their loved ones alive. And all the while, the insurance companies’ bottom lines grow, as they save money by denying care.
This isn’t just about money. It’s about trust. When an AI overrides the judgment of a trained medical professional, it sends a chilling message: Your health doesn’t matter as much as our profits.
The most insidious part of this system is the message it sends to patients: If you can’t afford the medication you need, maybe you should just consider dying. After all, if the AI has decided that cheaper alternatives are “good enough,” and you can’t afford to pay out of pocket, what other choice do you have?
This dystopian reality is already here. AI is being used to make life-and-death decisions, often without transparency or accountability. And while insurance companies and their shareholders reap the benefits, patients are left to suffer the consequences.
So, ask yourself: Is death right for you? Because if we don’t push back against this system, the AI might just decide that it is.
Let’s not sugarcoat it—this is some stupid, infuriating nonsense, but it’s real. Reports from insiders who understand the inner workings of the insurance industry confirm that something is seriously wrong with the system. And while insurance companies might push back, claiming that all claims are reviewed by humans, let’s be honest: how much effort do you think those humans are actually putting into these reviews?
Since work-from-home took off with Covid, do you think things are better or worse today when you try to contact someone at a company? How many times have you actually spoken with someone in this country with dogs or kids in the background? Do you really believe those employees are giving it their all?
Insurance companies love to assure us that every claim denial is carefully reviewed by a human being. But let’s make a bet: how thoroughly are these denials really being examined? Picture this—someone sitting at a desk, half-heartedly scrolling through claims while texting their spouse, checking Instagram, or playing Candy Crush. Do you think they’re giving your life-saving medication claim the attention it deserves? If they were, this kind of crap wouldn’t be happening.
Here is what should be happening instead:

HIPAA Compliance: AI systems must protect sensitive health information and ensure data privacy and security.
“And how was it that all of Baylor Scott & White’s database was hacked and everyone’s personal information, from medical records to Social Security numbers, was stolen, with the lame statement from them that it happened, and an even lamer ‘sorry… you should watch your accounts and maybe change a password or something’?”
CMS Guidance: The Centers for Medicare & Medicaid Services (CMS) requires that AI not be the sole decision-maker in coverage determinations, mandating human oversight to prevent unjust denials. That might sound good, but that wasn’t my personal experience. I am still arguing with an AI, and it is stuck on “Is death right for you?”
Bias Prevention and Transparency: AI algorithms must be monitored to avoid bias and ensure fair outcomes. Transparency and explainability are crucial for maintaining trust and regulatory compliance.
Continuous Monitoring: Regular audits and updates are necessary to ensure ongoing compliance with evolving regulations.
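To make the “continuous monitoring” idea concrete, here is a minimal sketch of what an audit of an insurer’s decision log might look like. Every name, field, and threshold here is invented for illustration; no real insurer’s system or data format is shown.

```python
# Hypothetical audit sketch: scan a decision log and flag denials that
# show no evidence of meaningful human review. All field names and the
# 30-second threshold are assumptions for this example.

decision_log = [
    {"claim_id": "A1", "ai_decision": "deny", "reviewer": "jdoe", "review_seconds": 4},
    {"claim_id": "A2", "ai_decision": "deny", "reviewer": None, "review_seconds": 0},
    {"claim_id": "A3", "ai_decision": "approve", "reviewer": "jdoe", "review_seconds": 95},
]

def flag_suspect_denials(log, min_review_seconds=30):
    """Flag denials with no reviewer on record, or a review far too quick
    to have been a real examination of the claim."""
    return [
        entry["claim_id"]
        for entry in log
        if entry["ai_decision"] == "deny"
        and (entry["reviewer"] is None or entry["review_seconds"] < min_review_seconds)
    ]

print(flag_suspect_denials(decision_log))  # ['A1', 'A2']
```

An audit this simple would catch both denials in the sample log: one reviewed in four seconds, one never reviewed at all. The point is not the code; it is that regulators could demand exactly this kind of check, and today, largely, they don’t.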
The truth is, many of these so-called “human reviews” are likely rubber-stamped approvals of decisions already made by AI algorithms. The human oversight is often a formality, a box to check so the insurance company can claim they’re doing their due diligence. But in reality, the system is designed to prioritize cost-cutting over patient care.
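The rubber-stamp dynamic described above can be boiled down to a few lines. This is a deliberately crude, hypothetical sketch of a cost-first denial pipeline; the rule, the field names, and the dollar amounts are all invented, and no actual insurer’s algorithm is depicted.

```python
# Hypothetical cost-first denial pipeline with a rubber-stamp "human
# review" step. Everything here is invented for illustration.

def ai_review(claim):
    """Deny any claim where a cheaper alternative exists and wasn't tried,
    regardless of why the doctor chose this drug."""
    if claim["cheaper_alternative_cost"] < claim["drug_cost"] and not claim["tried_alternatives"]:
        return "DENY: step therapy required"
    return "APPROVE"

def human_review(ai_decision):
    """The 'oversight' step: in this sketch it simply echoes the AI."""
    return ai_decision  # box checked, due diligence "done"

claim = {
    "drug_cost": 450.00,
    "cheaper_alternative_cost": 32.00,
    "tried_alternatives": False,  # the doctor chose this drug for a reason
}
print(human_review(ai_review(claim)))  # DENY: step therapy required
```

Notice that `human_review` never looks at the claim at all. That is the whole complaint: when the human step adds no information, calling it “oversight” is marketing, not medicine.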
Meanwhile, an entire industry has sprung up to sell insurers even more automation. ScienceSoft offers AI-powered claim management systems that can instantly detect and reject fraudulent claims, deliver accurate damage estimates, and provide intelligent recommendations for risk prevention. Their solutions leverage machine learning and other AI technologies to streamline the entire claims process.
Tractable is known for its deep learning and computer vision solutions, which automate the claim cycle, including medical insurance claim verification. Their technology enables remote inspection and instant loss assessment, reducing manual intervention and expediting claim resolution.
Fathom specializes in automating medical coding, a critical component of claim verification. Their AI platform analyzes clinical notes to accelerate billing and ensure accurate claim submissions for healthcare providers.
Keragon provides AI-powered automation for healthcare claims processing, including eligibility verification and claims submission. Their platform reduces manual data entry, accelerates verification, and automates routine administrative work, making it accessible even for non-technical staff.
You could make lots of noise and bitch to your congressman about this, or… pay full price, or… well… this is just part of what is to come if we don’t push back.
Can we get an AI program that can talk to their AI program and work things out? Maybe #MAHA needs to get involved.
Those politicians who rely on lobbyist money need substantial funds for their campaigns. Who do you think has a better shot of getting treated like they give a shit?
Stay Healthy, My Friends, because AI might consider that death is right for you.
Make sure you sign up for e-mails and follow…you know the drill. This looks like a subject that needs to be in my book Stupid Shit, which is due to drop soon. -Best
Some of this content is hyperbole for dramatic effect, but the truth is that claims are being reviewed and denied by AI. I have touched on just the drug aspect of medical claims; one has to wonder how far AI goes in determining your health care.