Change Healthcare Ransomware: When Backups Are Compromised Too

For three weeks, a significant portion of the American healthcare system couldn't process prescriptions properly. Pharmacies couldn't verify insurance. Doctors couldn't submit claims. Patients couldn't get their medications. All because of a ransomware attack on a company you've probably never heard of.

Change Healthcare—owned by UnitedHealth Group, one of the largest healthcare companies in America—got hit by ALPHV/BlackCat ransomware. The attackers didn't just encrypt their production systems. They got the backups too.

And that's when everyone learned why the 3-2-1 backup rule exists.

The company behind the curtain

Change Healthcare processes about one in every three patient records in America. They're the middleman between doctors, pharmacies, insurers, and patients. If you've ever picked up a prescription and had it billed to insurance, Change Healthcare probably handled that transaction.

They're infrastructure. The kind of critical system that works so smoothly most people don't even know it exists—until it doesn't work anymore.

On February 21, 2024, it stopped working. The attackers got in through a Citrix portal that, inexplicably, wasn't protected with multi-factor authentication. Once inside, they moved laterally through the network, encrypted everything they could, and reportedly exfiltrated around six terabytes of data.

Then they demanded payment. UnitedHealth reportedly paid $22 million to get their systems back. And then, in a twist that would be hilarious if it weren't so serious, the ransomware gang pulled an exit scam on its own affiliate, kept the payment for itself, and the stolen data got leaked anyway when a second extortion crew tried to collect again.

You can't make this stuff up.

The backup catastrophe

Here's the part that should scare the hell out of anyone responsible for IT infrastructure: Change Healthcare had backups. But the attackers compromised those too.

This is not theoretical. This is not an edge case. This is what happens when your backup strategy is "copy everything to another folder in the same infrastructure."

If your backups are accessible from your production environment, they're not backups—they're just more copies for the ransomware to encrypt.

Think about how most companies do backups. They take snapshots of their databases. They replicate VMs to another datacenter. They sync files to cloud storage. All managed through the same network, using the same credentials, accessible from the same compromised admin accounts.

That's not a backup strategy. That's a false sense of security.

The 3-2-1 rule exists for a reason

For those who somehow haven't heard this a thousand times: 3-2-1 means three copies of your data, on two different types of media, with one copy offsite.

But here's the part people always miss: that offsite copy needs to be offline or immutable. Not just "in a different region." Not just "in a different account." Truly air-gapped or locked in a way that even an admin with full access can't delete or encrypt it.
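To make that concrete, here's a minimal sketch of what checking an inventory of backup copies against 3-2-1, plus the offline-or-immutable requirement, might look like. The data model and field names are invented for illustration; real backup tools track this differently.

```python
# Minimal sketch: checking a backup inventory against 3-2-1, plus the
# offline/immutable requirement. The inventory format and field names
# are illustrative, not taken from any particular backup tool.
from dataclasses import dataclass


@dataclass
class BackupCopy:
    location: str              # e.g. "primary-dc", "aws-us-east-1", "tape-vault"
    media: str                 # e.g. "disk", "object-storage", "tape"
    offsite: bool              # stored outside the production site
    unreachable_by_prod: bool  # offline, or immutable even to production admins


def check_321(copies: list[BackupCopy]) -> list[str]:
    problems = []
    if len(copies) < 3:
        problems.append("fewer than 3 copies")
    if len({c.media for c in copies}) < 2:
        problems.append("fewer than 2 media types")
    if not any(c.offsite for c in copies):
        problems.append("no offsite copy")
    # The part people miss: at least one copy has to survive a full
    # compromise of production credentials.
    if not any(c.offsite and c.unreachable_by_prod for c in copies):
        problems.append("no offline or immutable offsite copy")
    return problems


if __name__ == "__main__":
    inventory = [
        BackupCopy("primary-dc", "disk", offsite=False, unreachable_by_prod=False),
        BackupCopy("secondary-dc", "disk", offsite=True, unreachable_by_prod=False),
        BackupCopy("tape-vault", "tape", offsite=True, unreachable_by_prod=True),
    ]
    print(check_321(inventory) or "3-2-1 (plus offline/immutable) satisfied")
```

The last check is the one that matters in a ransomware scenario: it fails any inventory where every copy is reachable with production credentials, no matter how many copies there are.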

Change Healthcare apparently didn't do this. Or if they did, they did it wrong. Either way, when the ransomware spread through their network, it took their backups with it.

And suddenly, paying the ransom became the only option. Because you can't restore from backups that have been encrypted along with everything else.

The real cost of downtime

The Change Healthcare attack disrupted healthcare delivery across the entire country for weeks. Pharmacies couldn't process prescriptions. Doctors couldn't bill for services. Patients couldn't get medications. Some healthcare providers were reportedly losing millions of dollars per day.

The total cost? UnitedHealth estimates over $2 billion. That's not a typo. Billion. With a B.

And that doesn't even account for the human cost. People couldn't get their medications. Critical treatments were delayed. In a healthcare system that's already stretched thin, a three-week disruption to prescription processing is not just inconvenient—it's dangerous.

All because someone didn't implement basic security hygiene on a Citrix portal, and because the backup strategy wasn't designed to survive a determined attacker.

Why immutability matters

There's been a lot of talk in the backup world about "immutable backups." The idea is simple: once data is written, it cannot be modified or deleted for a set period of time. Not by admins. Not by ransomware. Not by anyone.

Cloud providers offer this through features like S3 Object Lock or Azure Immutable Blob Storage. Some backup appliances have "air gap" modes where data is physically isolated from the network after being written.

But—and this is crucial—immutability only works if it's actually immutable. If an admin can disable it, or if there's a privileged account that can override it, then it's not real immutability. It's just a policy that can be changed.

Real immutability means that even if every system in your infrastructure is compromised, even if every admin account is taken over, the backups remain untouchable until the retention period expires.
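For what it's worth, here's roughly what that looks like with S3 Object Lock via boto3. This is a sketch, not a full backup pipeline; the bucket name, object key, and 30-day retention are placeholders. The key detail is compliance mode, which, unlike governance mode, can't be shortened or removed by anyone, including the account root user, until the retention date passes.

```python
# Rough sketch of an immutable backup target using S3 Object Lock in
# compliance mode (boto3). Bucket name, key, and retention period are
# placeholders. Versioning is enabled automatically when Object Lock
# is turned on at bucket creation.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-vault"  # placeholder name

# Object Lock can only be enabled when the bucket is created.
# (Outside us-east-1 you would also pass CreateBucketConfiguration.)
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Default retention in COMPLIANCE mode cannot be shortened or removed
# until it expires, not even by the account root user. GOVERNANCE mode,
# by contrast, can be bypassed with the right IAM permission, which is
# exactly the loophole an attacker with stolen admin access would use.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Each backup object can also carry an explicit retain-until date.
with open("full.dump.gz", "rb") as dump:  # local dump from your backup job (placeholder)
    s3.put_object(
        Bucket=BUCKET,
        Key="db/full.dump.gz",
        Body=dump,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```

Pair that with write-only credentials for the backup job itself, so even the host doing the uploading can't delete or overwrite what it has already written.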

That's the kind of backup that survives a ransomware attack. Everything else is negotiable.

The offline alternative

The other approach, and arguably the more reliable one, is truly offline backups. Tape libraries that are physically disconnected after backup jobs complete. NAS devices that are powered off except during backup windows. USB drives in a safe.

Yes, it's inconvenient. Yes, recovery takes longer. But you know what's more inconvenient? Not being able to restore anything because the ransomware got your backups.

I've been in this industry long enough to remember when tape was the standard. Everyone complained about it. Too slow. Too manual. Too old-fashioned.

But tape had one massive advantage: once you ejected it and put it on a shelf, no attacker in the world could touch it without physically breaking into your facility.

We've gotten so enamored with the convenience of always-online, instantly-accessible backups that we've forgotten why air gaps existed in the first place.

The lesson nobody will learn

Here's what's going to happen: Change Healthcare will rebuild their infrastructure. They'll implement better security controls. They'll fix their backup strategy. There will be congressional hearings and regulatory fines and promises that this will never happen again.

And then, in a year or two, some other critical infrastructure company will get hit with ransomware. Their backups will be compromised. Essential services will go offline. Everyone will act shocked.

Because the lesson here isn't just about Change Healthcare. It's about an entire industry that has collectively decided that convenience is more important than resilience, that cost-cutting trumps redundancy, and that "good enough" security practices are sufficient for critical infrastructure.

They're not.

What you should actually do

If you're responsible for backup strategy—and honestly, if you have data worth keeping, that's everyone—here's what you need:

Multiple backup destinations. Not multiple copies in the same place. Different providers. Different infrastructure. Different authentication mechanisms.

At least one offline or immutable copy. Not "kind of offline." Not "hard to delete." Actually offline, or locked with real immutability that can't be overridden.

Regular restore testing. Your backups are worthless if you can't actually restore from them. Test it. Regularly. With real data. In a scenario that assumes your primary infrastructure is completely compromised.
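As a rough illustration, a restore test can be as simple as a scheduled script that pulls the latest dump, loads it into a scratch database that shares nothing with production, and checks that the data is actually there. The sketch below assumes a PostgreSQL custom-format dump and Docker; the dump path, container name, and the orders table are hypothetical.

```python
# Minimal restore-test sketch: restore the latest dump into a throwaway
# Postgres container and run a sanity check. Paths, names, and the
# checked table are placeholders for illustration.
import subprocess
import time

DUMP = "/backups/offsite/latest/app.dump"  # hypothetical path to the latest dump
CONTAINER = "restore-test"


def sh(*args: str) -> str:
    """Run a command, fail loudly on error, return stdout."""
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout


# Start a scratch database that shares nothing with production.
sh("docker", "run", "-d", "--rm", "--name", CONTAINER,
   "-e", "POSTGRES_PASSWORD=scratch", "postgres:16")
time.sleep(10)  # crude startup wait; poll pg_isready in real use

try:
    # Copy the dump in and restore it.
    sh("docker", "cp", DUMP, f"{CONTAINER}:/tmp/app.dump")
    sh("docker", "exec", CONTAINER,
       "pg_restore", "-U", "postgres", "-d", "postgres", "--no-owner", "/tmp/app.dump")

    # The test that matters: is the restored data actually usable?
    rows = sh("docker", "exec", CONTAINER, "psql", "-U", "postgres",
              "-tAc", "SELECT count(*) FROM orders;")  # hypothetical table
    assert int(rows) > 0, "restore produced an empty orders table"
    print("restore test passed:", rows.strip(), "rows")
finally:
    sh("docker", "stop", CONTAINER)
```

Run something like this on a schedule, from a machine outside the production domain, and treat a failing run as an incident rather than a nuisance.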

Assume compromise. Don't design your backup strategy around preventing attacks. Design it around surviving attacks. Because you will be attacked, and eventually, one of those attacks will succeed.

The Change Healthcare attack cost billions of dollars and disrupted healthcare for millions of people because someone made the classic mistake: they assumed their backups would be there when they needed them.

Don't make the same mistake.

—Checking my backup destinations again. And again.