The Uber Breach: Why Privileged Access Needs Offline Backups
An 18-year-old hacker just walked into Uber's entire infrastructure like he owned the place. And for a few hours, he basically did.
Let that sink in for a moment. Not a nation-state actor. Not a sophisticated criminal syndicate. A teenager. Using social engineering and a healthy dose of audacity, this kid compromised a contractor's account, then proceeded to gain administrative access to damn near everything—AWS, Google Cloud, the code repositories, internal tools, even the security systems themselves.
The hacker literally announced the breach in the company's Slack channel. That's not just a security failure. That's a flex.
When the keys to the kingdom are digital
Here's what keeps me up at night: once someone has admin access to your systems, all your fancy security measures become ornamental. Two-factor authentication? Doesn't matter if the attacker is already authenticated. Encryption? Great, except they've got the keys. Audit logs? They can probably delete those too.
And your backups? If they're managed through the same systems that just got compromised, congratulations—the attacker now has access to those as well. They can corrupt them. Encrypt them for ransom. Delete them entirely. Or, and this is becoming more common, sit quietly and wait to see what data is worth exfiltrating.
The Uber breach is a textbook example of what security folks call the "god mode problem." Once someone achieves administrative access, the game is essentially over. You're not defending against an attack anymore. You're conducting damage control.
Social engineering: the vulnerability we can't patch
You want to know the really frustrating part? This breach started with social engineering. The hacker didn't exploit some zero-day vulnerability or crack some sophisticated encryption. He just... asked nicely. Repeatedly. Via text message.
He spammed an Uber contractor with MFA push notifications, then messaged them claiming to be from IT and promising the prompts would stop as soon as one was approved. Worn down, the contractor approved one. Classic.
We've spent billions of dollars on security tools. Firewalls, IDS/IPS, SIEM systems, zero-trust architectures, you name it. And all it took was annoying someone enough that they clicked "approve" to make it all irrelevant.
You can't patch human nature. You can train people, sure. You can implement better policies. But eventually, someone's going to have a bad day, or be in a hurry, or just not think it through. That's not a criticism—that's just being human.
Which means your security model needs to assume that eventually, someone's getting in.
The air gap is not negotiable
This is where the conversation about backups gets real. If your backup system can be accessed from your production network, it can be compromised along with everything else. Full stop.
I don't care how good your access controls are. I don't care about your zero-trust implementation or your fancy new security tool that promises to detect anomalies. If an attacker gains administrative access to your environment, and your backups are online and accessible from that environment, those backups are compromised.
The only backup that can't be corrupted by a compromised admin account is one that physically cannot be accessed from your production network. An air gap. An actual, literal disconnect.
Yes, I mean offline backups. Tapes, if you're old school. Immutable object storage if you're fancy (more on that in a minute). A NAS that's literally powered off except during backup windows. Something that cannot be touched by someone with cloud console access or database credentials or AWS root account privileges.
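What does that look like in practice? One pattern is a pull model: the backup host initiates the copy, holds only read-only credentials to production, and is unreachable from the production network, so nothing a compromised admin account controls even knows how to reach it. Here's a minimal sketch in Python, assuming a dedicated backup host, rsync over SSH, and hypothetical hostnames and paths:

```python
#!/usr/bin/env python3
"""Pull-model backup sketch: runs ON the isolated backup host.

Assumptions (hypothetical): production host 'prod.internal' exposes a
read-only 'backup' user over SSH; the backup host is not reachable FROM
production, so production credentials can never alter these copies.
"""
import datetime
import pathlib
import subprocess

SOURCE = "backup@prod.internal:/var/lib/app/data/"   # read-only on production
DEST_ROOT = pathlib.Path("/backups/app")             # local to the backup host

def pull_snapshot() -> pathlib.Path:
    """Pull a dated copy of production data onto the backup host."""
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H%M%SZ")
    dest = DEST_ROOT / stamp
    dest.mkdir(parents=True, exist_ok=True)
    # rsync runs here, not on production: production never holds
    # credentials for, or a route to, the backup host.
    subprocess.run(["rsync", "-a", SOURCE, str(dest)], check=True)
    return dest

if __name__ == "__main__":
    print(f"snapshot written to {pull_snapshot()}")
```

The property that matters is the direction of trust: production accepts connections from the backup host, never the other way around, so stolen production credentials get an attacker nothing on the backup side.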
The immutability theater
Now, cloud providers will tell you they offer "immutable backups." And technically, they do. S3 Object Lock, Azure immutable blob storage, all that good stuff.
But here's the thing: immutability in the cloud is a policy, not a physical reality. It's software preventing deletion. Which is great until someone with sufficient privileges weakens that policy (governance-mode locks, for instance, can be bypassed by any principal granted the right permission), or simply waits out a short retention window, or closes out the account entirely. And even backups that can't be deleted can still be read, which means they can still be exfiltrated and held over you.
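For concreteness, here's roughly what that policy looks like with S3 Object Lock via boto3 (bucket name, retention window, and object key are placeholders). Notice that every protection here is just an API call, which is exactly the point: what an API call sets up, a sufficiently privileged API call or an expiring clock can often take back.

```python
"""Sketch: cloud 'immutability' is configured, not physical. Bucket name,
retention window, and object key below are placeholders."""
import datetime
import boto3

s3 = boto3.client("s3")
BUCKET = "example-backups"  # must have been created with Object Lock enabled

# Default retention policy for the bucket. GOVERNANCE mode can be bypassed
# by any principal holding s3:BypassGovernanceRetention; COMPLIANCE mode
# can't be shortened, but it still expires, and the data stays readable.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Individual objects can also carry an explicit retain-until date.
s3.put_object(
    Bucket=BUCKET,
    Key="db/2022-09-16.dump",
    Body=b"...backup bytes...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.datetime.now(datetime.timezone.utc)
    + datetime.timedelta(days=30),
)
```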
True immutability requires physical separation. It requires that the data literally cannot be modified or deleted, not because of policy, but because there's no pathway to reach it.
The 3-2-1 rule, but make it paranoid
The traditional 3-2-1 backup rule says: three copies of your data, two different media types, one offsite.
I'd like to propose an update for the modern threat landscape: 3-2-1-1. Three copies, two media types, one offsite, and at least one completely offline or immutable in a way that can't be undone by compromised credentials.
Keep your convenient cloud backups for quick restores. But also keep a copy that requires physical access or a multi-day process to delete. Make it inconvenient enough that even an admin with full access can't destroy all your backups in a fit of malice or during a ransomware attack.
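If you want the rule to be checkable rather than aspirational, it's simple enough to encode. A toy sketch, with a made-up copy inventory, that audits a backup plan against 3-2-1-1:

```python
"""Toy 3-2-1-1 audit: the copy inventory below is illustrative, not real."""
from dataclasses import dataclass

@dataclass
class Copy:
    name: str
    media: str               # e.g. "disk", "tape", "object-storage"
    offsite: bool
    offline_or_locked: bool  # air-gapped, or immutable beyond any single credential

def satisfies_3_2_1_1(copies: list[Copy]) -> bool:
    return (
        len(copies) >= 3                             # three copies
        and len({c.media for c in copies}) >= 2      # two media types
        and any(c.offsite for c in copies)           # one offsite
        and any(c.offline_or_locked for c in copies) # one offline/untouchable
    )

inventory = [
    Copy("primary db volume", "disk", offsite=False, offline_or_locked=False),
    Copy("nightly cloud snapshot", "object-storage", offsite=True, offline_or_locked=False),
    Copy("weekly tape, couriered to a vault", "tape", offsite=True, offline_or_locked=True),
]

print("3-2-1-1 satisfied:", satisfies_3_2_1_1(inventory))
```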
Is this overkill? Ask Uber if they wish they'd been more paranoid about their security architecture.
Trust, but airgap
Look, I'm not suggesting we all go back to backing up on floppy disks and storing them in fireproof safes. The cloud is useful. Online backups are convenient. Administrative access is necessary for running modern infrastructure.
But we need to stop pretending that access controls alone are sufficient security. We need to design our systems assuming that those controls will eventually fail, either through technical exploit or human error.
Your backup strategy should survive a complete compromise of your production environment, including privileged access. If it can't, you don't have a backup strategy. You have a liability waiting to materialize.
The Uber hacker got in because security is hard and people are human. Fine. But the reason this breach had the potential to be truly catastrophic is because once he was in, there were apparently no systems that he couldn't touch.
Don't let that be your story.
—Still amazed an 18-year-old did what nation-states struggle with