Ransomware is an attack in which hackers gain access to a computer system and then prevent its legitimate users from accessing the data until they pay a ransom. Hospitals have often been targeted. The city government of Atlanta is the latest victim of this practice, having been locked out of its computers since last Thursday.
I presume all these institutions do backups of their systems pretty much all the time. So why, when attacked like this, can they not do a complete wipe of their systems and then reload using the backup? It may take some time, of course, but that can’t be why they don’t do it since the Atlanta lockout has gone on for almost a week.
I know that there are computer experts who read this blog. Can anyone explain this?
A Rash Anion says
It depends on what’s compromised, and whether or not it was actually backed up. It’s pretty common for firms and the government to have really bad data security practices, and it’s possible that access to the backup has also been compromised. For example, perhaps the backup is connected to the network and was also infected: the intruders got access to the server hosting the backup as well, encrypted it with their own key, and now they have everything.
It’s also possible that the backup is on paper media rather than digital media, or is several weeks, months, or even years out of date. It’s possible that the hardware in place allows easy writing to the backup (say if the backup is on tapes rather than drives) but not easy reading. Perhaps a backup system silently failed, or stopped getting updated sometime back.
In IT, as we say, “you always have one less backup than you think you do.”
Also, there may not have been a backup at all. Many organizations have astonishingly incompetent IT practices. For individual/personal instances of ransomware, it’s almost always the case that there’s no backup, so the data is simply gone unless they meet the ransomer’s demand.
grasshopper says
One guess is that possibly the backups also contain ransomware, which stayed hidden for weeks or months before activating itself, and so was incorporated into the backup data as well. Straight data is not an executable, I know, but there are clever people out there.
Dunc says
Generally speaking, backups should be the solution to ransomware. They’re not in practice because almost everybody is terrible at backups. People are terrible at backup and disaster recovery of mission-critical servers, and I’ve never worked with any organisation that even seriously attempted to backup desktops. Combine that with a lot of users keeping critical data locally, or on non-backed-up fileshares, and you’ve got a recipe for disaster.
People suck at backup. Hell, I suck at backup.
Crip Dyke, Right Reverend Feminist FuckToy of Death & Her Handmaiden says
In quite a few cases there are laws mandating certain records be kept. Backups will always lag behind local storage (and, as others have implied above, the way to keep the lag short is to keep the back-up server permanently on the same network, but if its network connection to the ransomed computers is always on, then it is going to be vulnerable to the same ransomware attack).
So let’s say you keep backups on a server that disconnects itself from the network for the majority of the day, then at night activates for an hour or six and copies network files to itself for secure storage. This auto-backup, run at the discretion of the server, can be made to copy files without running them, and since the server software does the copying, the local computers have a harder time passing on the ransomware. Though ransomware probably does not reveal itself right away (the better to spread to every computer on a network), there’s also a chance that the attack is launched and revealed before the next backup run, in which case the server can be instructed not to communicate with the infected machines and thus stay safe.
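A very rough sketch of what such a pull-style nightly job might look like (Python; the share paths, backup location, and schedule are all invented for illustration, and the actual connect/disconnect would be handled by a scheduler on the server, which is an assumption, not something described above):

import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical mount points for the office file shares (mounted read-only here)
SOURCE_SHARES = [Path("/mnt/office-share"), Path("/mnt/records-share")]
BACKUP_ROOT = Path("/srv/backups")

def nightly_pull():
    # Each run lands in its own dated folder, so tonight's (possibly infected)
    # copy never overwrites an earlier, clean one.
    dest = BACKUP_ROOT / datetime.now().strftime("%Y-%m-%d")
    for share in SOURCE_SHARES:
        for src in share.rglob("*"):
            if src.is_file():
                target = dest / share.name / src.relative_to(share)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, target)  # bytes are copied; nothing is ever executed

if __name__ == "__main__":
    nightly_pull()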
However, in this safer version of backup, you inevitably sacrifice some files.
While that works just fine for the majority of businesses, if you are legally required to keep records -- or required by the nature of your business, as a hospital is (hospitals have legal requirements but also practical ones, at least those that actually care whether the right amount of medication gets delivered to patients the right number of times) -- you can’t simply write records off. To do so without an attempt at due diligence in retrieving the files (and due diligence in the context of ransomware has yet to be legally well-defined) might very well be a violation of law that puts your agency in hot water, and in some cases might even constitute a criminal violation (though prosecutors would be unlikely to prosecute a victim in that circumstance, it’s a legal risk worth noting).
Further, what happens when you don’t know for sure how long ago the last safe back-up was? You can restore from last night’s backup and have the ransomware lock down the computer again because of the lag between infection and symptom. If you go back far enough -- several days, a week? -- you might lose a certain amount of data that isn’t easily replaceable, if it’s replaceable at all. In those cases, it might simply be economically efficient (depending on the ransom asked) to pay up.
Rather than a backup as we traditionally conceive it where all data (files, apps, OS, etc.) are securely stored on another machine in a way that allows the recreation of the machine in its last known state, it might be more effective to have a system that backs up certain kinds of data (e.g. all Word documents, etc., etc.) without backing up the OS and certain other kinds of data (e.g. executable files).
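A sketch of that selection rule, with an invented whitelist of document extensions (any real list would be policy-driven):

from pathlib import Path

# Invented example whitelist: document-style data only, no executables or scripts
SAFE_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".txt", ".csv", ".jpg", ".png"}

def is_backup_worthy(path: Path) -> bool:
    # Only plain data files whose extension is on the whitelist get copied
    return path.is_file() and path.suffix.lower() in SAFE_EXTENSIONS

Dropped into a copy loop like the one sketched earlier, a rule like this keeps anything the operating system would treat as runnable out of the backup entirely.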
That would allow you to wipe a hard drive, restore the factory OS, then move over all file types backed up safely, since none of them are of a type that could carry a virus. But wiping a machine and restoring its OS deletes mountains of useful settings and data -- like accounts and their passwords for machines with multiple users, etc. That might be okay for some workplaces, especially ones where the work product is largely text (say, a law firm) and where the number of machines needing to be restored is fairly low. But hospitals, for instance, run many custom apps designed to create things like pop-up notices to ensure timely delivery of medications.
It’s simply not an easy problem, and the solutions that work for certain environments don’t transfer to other environments with different legal or practical considerations.
On the plus side, doing a full backup of your personal computer on one drive, coupled with safe-file backups to another external drive, easily allows you as an individual to restore your computer’s exact settings, with all accounts and passwords, from the last safe backup (which will probably be no more than a week old) and then import more recently backed up files from your other backup drive. It will take several hours, but you aren’t on the clock and can set the computer to restore from backup while you go about fixing and eating a nice falafel salad, then getting the kids to bed. Restoring the safe files will go even more quickly. And just communicating with the ransomware authors would probably take at least as long.
So go ahead and use backups (complete image backups + non-executable file backups) for your home computers. But governments and hospitals will still be struggling to create a system that meets their own needs months and years from now.
Crip Dyke, Right Reverend Feminist FuckToy of Death & Her Handmaiden says
@Dunc
For all of my verbiage above, I, too, suck at backup.
Mano Singham says
Surely all the backup services that people use must have a system to prevent the ransomware software from gaining access to the saved data? Otherwise their business model would be useless, since the ransomware could spread from the backups of one infected user to all their clients.
Marcus Ranum says
A shocking number of organizations allow desktop users to not back up their systems. So there is an unknown overhang of stuff on the network, and nobody knows whether any of it matters. When the malware comes along, suddenly everyone discovers that their data was more important than they thought.
Also sometimes the virus is really a boot-loader for a whole stack of malware including remote control backdoors. At that point you may have the attackers in control of internal network infrastructure or servers. Then, the target is fucked. The kind of response it takes to clean out a major attack can run to hundreds of thousands of dollars.
There are a lot of places for false economies in IT.
Marcus Ranum says
I’ve done a couple postings over at stderr on backup policies …
It’s nuts to risk losing your data when a USB SSD drive is about $50 (depending on the size of your archive)
Remember: one local copy that you use, one local copy that you back up to/resync weekly or monthly, one remote copy in a safe deposit box at the bank.
You only really “have” your data if there are 3 copies. (One can be cloud. But clouds have been known to fail and if your data is encrypted by malware and the crypto-locked version syncs to the cloud you’ve officially outsmarted yourself)
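One crude guard against that last failure mode (a sketch only, in Python, with invented paths and an arbitrary threshold): before the weekly resync, compare file hashes against the previous run and refuse to overwrite the backup if an implausibly large fraction of files has changed.

import hashlib
import json
from pathlib import Path

WORKING = Path("/home/me/data")                   # the copy you actually use
MANIFEST = Path("/media/usb-ssd/last-sync.json")  # hashes recorded at the previous sync
CHANGE_THRESHOLD = 0.5                            # more than 50% of files changed => refuse

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def current_hashes() -> dict:
    return {str(p.relative_to(WORKING)): sha256(p)
            for p in WORKING.rglob("*") if p.is_file()}

def safe_to_sync() -> bool:
    hashes = current_hashes()
    if not MANIFEST.exists():
        MANIFEST.write_text(json.dumps(hashes))
        return True
    previous = json.loads(MANIFEST.read_text())
    changed = sum(1 for name, digest in hashes.items() if previous.get(name) != digest)
    if previous and changed / len(previous) > CHANGE_THRESHOLD:
        return False  # suspicious churn; investigate before touching the backup
    MANIFEST.write_text(json.dumps(hashes))
    return True

if __name__ == "__main__":
    print("sync allowed" if safe_to_sync() else "sync blocked: too many files changed")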
Marcus Ranum says
It sounds like the Atlanta lockout is cryptolocker plus active attackers. It also sounds like active attackers own the infrastructure there and are interfering with attempts to regain control. That’s a worst-case scenario (the very worst being a rogue sysadmin who decides to take away control of the network fabric, as happened in San Francisco a few years back)
jrkrideau says
My home computer seemed to have been attacked by ransomware last year, at least if the screen display could be believed. It looked pretty amateurish to me but it did freeze the computer.
I am almost good on backup—sometimes. Anyway, I thought about it for a couple of minutes and decided there was nothing on the machine that I had not backed up and that I could not either replace or afford to lose.
I did a hard reboot and returned to whatever I was doing—possibly even reading Mano’s blog. Thank heavens most ransomware seems aimed at Windows. My Ubuntu Linux OS never seemed to notice the attack.
TGAP Dad says
There’s also the simple matter that the desktop is commonly the target of choice for an attack. On an enterprise network, the desktops are not usually backed up, due to space, detachability, unique architecture, … In these cases, backups are routinely done for network storage only, so data on the desktop are completely susceptible to ransomware. Of course, once a single computer is infected, the ransomware can spread further, potentially until a critical mass is achieved before activating all at once. (All of this presupposes that these infections only exist at the desktop level, not the network.)
This is an argument for a net-boot configuration, where your desktop’s “hard drive” is actually a standard configuration disk image loaded from the network with each startup. No matter what gets installed, loaded, etc. the next startup loads the network image anew.
At home, we run Apple machines with Time Machine backups turned on. No backup is more than an hour old, with the ability to restore files (even a single file) back to a point in time. I also do periodic backups to a cloned disk.
Generally, though, our data are maintained on cloud drives.
lanir says
Backups for organizations suck. They tend to require long periods of time to perform, generally don’t happen more than once a day (if that), and have some really terrible and confusing interfaces. Because of this, almost no one likes to deal with them. They’re also pretty boring to verify, and they suck up a lot of space if you actually unpack them to do it. Also, very few organizations are willing to spend the money to back up everything. So no matter what you do or how cleverly you manage to juggle priorities against available resources, sooner or later you’ll inevitably be missing a file when it’s needed.
All this basically means that once you start dealing with it, it stays on your plate for quite a while, and it’s unpleasant the entire time it’s there. Also, almost no one cares about it until it’s time to panic.
Dunc says
Mano, @ #6
The malware can’t spread from files that are just sitting there; it has to be actually executed -- but when it is, it can affect (but not infect) all the data files that it can access. So if you’ve got a backup file sitting on a file share that you have access to all the time, it can be locked by the ransomware when it runs.
Marcus, @ #8:
Unless, of course, your cloud service allows you to roll back to previous versions of the files… If it doesn’t, you should probably consider getting a cloud service that doesn’t suck. For example, just having a look at my KeePass database on OneDrive (which is the file I change most often, and would least like to lose), I see I have 3 months of version history there. If it gets locked on one of my machines and synced up to the cloud, all I have to do is shut that machine down and roll back to the previous version.
(Of course, I also have a copy of my KeePass db on a USB key on my keyring…)
hyphenman says
Back in the dark ages of Internet Computing, CompuServe was originally conceived of and operated as a massive back-up facility for corporations. The company maintained two identical mirror sites in central Ohio that, at the end of each business day, backed up all the mainframe files (microcomputers were still pretty much a dream) of corporations subscribing to the service.
That the company had massive, underutilized computing power—the DEC mainframes were idle for about 20 out of 24 hours a day—was the reason that CompuServe got involved with what we called VideoText back in the day: email and online chat on a whole TWO channels (CB1 and CB2).
I have automatic daily backups to the cloud and a server farm in New Jersey, and I use flash drives that I rotate weekly in and out of my safety deposit box and a document safe in my basement.
It’s not paranoia if they’re really out to get you. : )
Daniel Schealler says
An additional issue on top of all of this is that if you don’t test restoring your system from backup regularly, then you don’t actually know that your backup is as recoverable as you think it is, and you don’t know what kind of downtime you’re looking at.
Suppose an organization has 100% recoverable backups for all servers and infrastructure. But that organization doesn’t do regular recovery testing. So no-one at the organization can answer the question of how long it will take to restore the backups.
If it takes an hour or so, it’s probably worth it. But it could take days or even weeks. If it’s an unknown, it’s an unknown.
Additionally, once you start recovering a backup, if it takes too long you might not be able to undo the attempt.
If the ransom amount asked for is X, and it’s possible that full data recovery might involve downtime that will result in Y in lost earnings, and Y > X, then the cost/benefit analysis is probably going to be to pay X to save Y - X.
So the question becomes: why not do regular recovery testing? Well, yes, you can do that. But if the cost of doing that on a regular basis is greater than Y - X, then you might not be saving any actual money.
Depending on the values for X and Y, this equation can shift around. But a lot of the time you don’t know the values of X and Y until it’s too late.
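With toy numbers (all invented), the trade-off looks something like this:

# Toy figures only, to make the X versus Y - X comparison concrete
ransom_x = 50_000                  # demanded ransom
downtime_loss_y = 400_000          # earnings lost during a slow, untested restore
annual_recovery_testing = 120_000  # cost of running regular restore drills

saving_if_you_pay = downtime_loss_y - ransom_x   # 350,000 "saved" by paying up
print(f"Paying the ransom 'saves' {saving_if_you_pay:,} on paper")

if annual_recovery_testing > saving_if_you_pay:
    print("Regular recovery testing would cost more than it saves here")
else:
    print("Regular recovery testing pays for itself against a single incident")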
Information security is a bastard of a problem. If it was easy it’d be solved already.
Charlie Kaufman says
Three issues I didn’t see mentioned yet:
1) Even people who are fairly diligent about making backups rarely test their recovery procedures. That’s in part because a failed recovery attempt will usually lose data, so they are afraid to try. Procedures that are rarely tested rarely work.
2) Most people, when they designed their backup procedures, were thinking about hardware problems like failed hard drives and not about security attacks. So they might think they are safe if they do something like maintaining a shadow disk drive.
3) For many applications, being able to roll back to yesterday’s data or even hour-old data is not good enough. Think about a system that records real estate tax payments or car registration transfers. Good backups for these sorts of systems require some way to “redo” any lost transactions, introducing more complexity to both computer software and manual procedures.
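One common shape for that “redo” ability, sketched with an invented log format and a placeholder apply_to_database() callback rather than any particular product’s mechanism, is an append-only transaction log kept on separate storage and replayed on top of a restored backup:

import json
import time
from pathlib import Path

REDO_LOG = Path("/var/log/app/redo.log")   # ideally on separate, protected storage

def record_transaction(txn: dict) -> None:
    # Every committed transaction is also appended here, one JSON object per line
    entry = {"ts": time.time(), "txn": txn}
    with REDO_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def replay_since(backup_timestamp: float, apply_to_database) -> int:
    # Re-apply every logged transaction newer than the restored backup
    replayed = 0
    with REDO_LOG.open() as f:
        for line in f:
            entry = json.loads(line)
            if entry["ts"] > backup_timestamp:
                apply_to_database(entry["txn"])
                replayed += 1
    return replayed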
None of these are good excuses, but they are explanations. We’ve been spoiled by a relatively benign environment and we’ve gotten sloppy. This really needs to change.