It sounds as though the National Republican Senatorial Committee has a problem with its email servers, too. Let me shorten this somewhat: if you’re putting Microsoft Exchange on an internet-facing server, and you’re not managing it fairly carefully, you’re snack food for hackers.
TL;DR: it’s hardly newsworthy. Just another example of poor system administration resulting in a security breach. The “Russia” angle is irrelevant; the attack appears to be basic credit card fraud, which knows no politics. As a target, the NRSC was – sorry, NRSC people, if you’re reading this – a blip. There are groups of hackers who make a very good living going after “mom and pop” websites that don’t invest the time and administrative resources necessary to keep a site secure. The NRSC appears to be a “mom and pop” bunch of amateurs.
In system administration there’s an observation that “there is no such thing as a ‘temporary’ server” – once things go into production, it gets difficult to take them out of service or to move them. The corollary is “if it’s not broken, don’t fix it.” Those two things, taken together, are behind a great deal of the computer security problem: someone sets something up, it works, and then they don’t want to touch it. Meanwhile, new exploits and vulnerabilities are discovered and published, and the software/system – which hasn’t changed at all – is now easy for someone to walk right into. Software rots over time! It’s extremely counter-intuitive but it’s true: the security properties of the exact same code degrade over time, not because anything in the code changes, but because latent bugs keep getting discovered. I’m not sure how to describe this phenomenon in terms of entropy, but: the system remains unchanged; what changes is the hackers’ collective knowledge about it.
The NRSC server was one of thousands compromised using the same technique and loaded with the same type of skimming software. Given the number of sites, the attack was almost certainly automated; the attackers probably didn’t know or care whom they had broken into. The Ars Technica article offers a fairly typical estimate: that one site may have netted the hackers on the order of $600,000. Multiply that by thousands and you’re dealing with real money.
Many of us security types used to be highly critical of cloud computing and software as a service (SaaS), the reason being that it presupposes an organization is going to push its critical information assets into someone else’s hands. For example, I still run my own email server rather than using Google Mail or Yahoo! Mail, for reasons that ought to be obvious: pushing your email into the cloud means that you’re automatically sharing it with the NSA/FBI. If the reason you’re keeping data is that it’s sensitive, pushing that data to the cloud appears to be irresponsible.
Unless you’re incompetent.
If your security sucks, pushing your data to the cloud is a great idea because it’s a tremendous improvement over poorly managed local servers. In one of my other postings, on Hillary Clinton’s email server, I responded to the question:
What do you think of the notion, given the incompetence of the government in keeping hackers out of their computer systems, that The Secretary’s private email system might actually have been no worse than the State Department’s system, which, apparently, is known to have been hacked into?
In the case of the NRSC server, putting it in the cloud would have been a huge improvement. It appears that someone set that server up, saw that it worked, and went off and did something else. Hillary Clinton, who claims to be technologically inept, retained some reasonably good system administration. The question there is whether Clinton’s personal server was better or worse than the US State Department’s email security system.
Today, if you want an email server in the cloud, Amazon Web Services offers a cloud-hosted Exchange-compatible system for about $4/user/month. If your systems administration capability is nil, that’s a great deal, because the alternative is freighted with unseen costs in the form of security downside. The reason so many people get this wrong is that the cost/benefit analysis is skewed by our inability to project the costs of an unknown downside that occurs unpredictably.
Hat-tip to lorn for encouraging me to comment on this bit of news.
Ars Technica on the hack
John Scalzi’s views on the cost/benefit analysis of using WordPress VIP as a service
WIRED: How a Remote Town in Romania Has Become Hacker Central
Gwillem on online skimming
Techtarget: Six Commonly Overlooked Exchange Server Vulnerabilities
Dunc says
This is the classic cloud pitch. Proper administration is fairly hard and actually quite expensive.
Unless you’re encrypting your email, you’re sharing it with the NSA/FBI anyway. And if you are encrypting your email, they’re watching you extra closely.
The issues around cloud storage of “sensitive” data aren’t quite as straightforward as you imply here… It depends very much on what exactly you mean by “sensitive”, and what your threat model looks like. If you want to keep data out of the hands of the spooks, then sure, going to the cloud is probably a bad idea. On the other hand, if you want to keep it out of the hands of generic bad guys, then it’s quite possibly a very good idea, depending on exactly how you go about it.
Consider: one of the most common ways sensitive data leaks is physical: people leaving inadequately secured laptops lying around, carelessly junking machines with readable data on the drives, or someone physically breaking in and stealing stuff. If that data is stored in the cloud instead, then good luck even figuring out which drive it’s on, because the virtual file system is probably striped across multiple physical drives. Typically, even the people running the service can’t physically locate your data any more precisely than “it’s somewhere in this datacenter”. Also, any cloud service worth looking at will offer both in-transit and at-rest encryption as out-of-the-box options (if not defaults), and provide decent access logs and analytics.
Pierce R. Butler says
The NRSC has its technical act way ahead of the Trump Org’s Windows Server 2003 setup.
Marcus Ranum says
Pierce R. Butler@#2:
Responding to Motherboard’s story, a Trump Organization spokesperson said: “The Trump Organization deploys best in class firewall and anti-vulnerability technology with constant 24/7 monitoring. Our infrastructure is vast and leverages multiple platforms which are consistently monitored and upgraded using current cyber security best practices.”
See? They have the best and yugest security ever. Vladimir Putin couldn’t get in if you hammered him into it sideways. He has the best people, the best software, the best server – it’s running on a 1.2GHz Pentium with 1GB of RAM – it’s awesome!
Marcus Ranum says
Joking aside, it’s hard to comment about some of this stuff without being able to examine the architecture. I could see a set-up where there was an antiquated system with an old, vulnerable version of Exchange on it, sitting behind a VPN concentrator that only allowed a small number of devices to talk to it at all. I.e.: a mail enclave that didn’t accept inbound email from anywhere and only allowed inter-user and outbound messaging. That’d be fairly tight and could be set up with some interesting cross-checks (e.g.: firewall rules that would generate an alert if they ever saw anything but outbound TCP on port 25) – roughly like the sketch below. That’d be the kind of server I’d set up if I were running a conspiracy, or a presidential campaign. ;)
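For the curious, here’s roughly what that cross-check looks like, as a minimal sketch in Python rather than actual firewall configuration – the log format, field names, and alerting are all invented for illustration; in practice you’d hang something like this off the firewall’s syslog feed:

```python
import re
import sys

# Minimal sketch of the enclave cross-check: the only traffic this firewall
# should ever log is outbound TCP to port 25 (SMTP). Anything else in the
# egress log is, by definition, an anomaly. The log format and regex are
# invented for illustration -- adapt them to what your firewall actually emits.

LOG_LINE = re.compile(r"proto=(?P<proto>\w+)\s+.*\bdport=(?P<dport>\d+)")

def check_egress(logfile):
    for line in logfile:
        m = LOG_LINE.search(line)
        if m is None:
            continue  # not a connection record
        proto = m.group("proto").lower()
        dport = int(m.group("dport"))
        if not (proto == "tcp" and dport == 25):
            # In real life: page someone, don't just print.
            print(f"ALERT: unexpected egress: {line.strip()}")

if __name__ == "__main__":
    check_egress(sys.stdin)
```

The point is the inversion: instead of enumerating what’s forbidden, the enclave’s policy is a single permitted flow, and everything else is an alarm.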
Marcus Ranum says
Dunc@#1:
This is the classic cloud pitch. Proper administration is fairly hard and actually quite expensive.
Yup. And if you think system administration is expensive, you should try poorly administered systems! I’ve seen a lot of organizations discover the hard way that they had made false savings.
Unless you’re encrypting your email, you’re sharing it with the NSA/FBI anyway. And if you are encrypting your email, they’re watching you extra closely.
If you’re encrypting your email, unless you’re using something really off the grid, they’re probably pushing right through your crypto, and tagging your metadata as “attempted to secure” …
The issues around cloud storage of “sensitive” data aren’t quite as straightforward as you imply here… It depends very much on what exactly you mean by “sensitive”, and what your threat model looks like. If you want to keep data out of the hands of the spooks, then sure, going to the cloud is probably a bad idea. On the other hand, if you want to keep it out of the hands of generic bad guys, then it’s quite possibly a very good idea, depending on exactly how you go about it.
Yes, that’s what I meant. You need to have a pretty good idea what your threat model is before you even start playing the game. From the sounds of it, the NRSC didn’t get to the first step – very few organizations (relative to the entire population) do. It’s just way too easy to dust your hands off and conclude, “there, that’s that.” If your threat model includes sophisticated threats, then most of the technical options on the table are not good enough; that’s a problem indeed. And it’s one of the reasons I get all head-explodey when there is silly talk about Chinese cyberspies (is it the Chinese this week, or the Russians?): when you see the lengths the NSA has gone to in order to compromise and backdoor systems, the only game in town is not to play at all.
Consider: one of the most common ways sensitive data leaks is physical: people leaving inadequately secured laptops lying around, carelessly junking machines with readable data on the drives, or someone physically breaking in and stealing stuff. If that data is stored in the cloud instead, then good luck even figuring out which drive it’s on, because the virtual file system is probably striped across multiple physical drives. Typically, even the people running the service can’t physically locate your data any more precisely than “it’s somewhere in this datacenter”.
That the virtual system is probably striped isn’t going to change things much – the mapping of where everything is lives in the storage system controller’s database. If you go after the encrypted bits, you’ve got a perfect set-up for an offline attack; your opponent has to do perfect key management (which means no backup systems or recovery paths) or you can get at the keys that way. They won’t try to physically locate your data because they don’t have to: it’s all in a database and, for the database to be usable, it’s unencrypted in memory somewhere.
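To make “offline attack” concrete: once the attacker holds the encrypted bits, they can grind candidate passphrases against them at whatever speed their hardware allows – no lockouts, no rate limits, no alarms. A minimal sketch, assuming (purely for illustration) a key derived with PBKDF2 and some way to recognize a correct guess:

```python
import hashlib

# Sketch of why possessing the ciphertext is a perfect set-up for an offline
# attack: nothing throttles the guessing. Assumes, for illustration, that the
# data key is PBKDF2-SHA256 over a passphrase and that a guess can be verified
# against a known hash of the key. Wordlist, salt, and verifier are stand-ins.

def derive_key(passphrase: str, salt: bytes, iterations: int = 100_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

def offline_crack(wordlist, salt: bytes, key_hash: bytes):
    for guess in wordlist:
        if hashlib.sha256(derive_key(guess, salt)).digest() == key_hash:
            return guess  # game over -- and no alarm ever fired
    return None
```

The practical defense is key material too large to guess – which is exactly why recovery paths that reduce to a human-memorable secret are where this falls apart.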
Funny story about physical access: I know the guy who used to be head of security for one of Savvis’ data centers. One day the security guard at the desk calls and says “The FBI is here to see you and they are in the lobby and they are being very truculent.” So he grabs a bicycle and asshauls across the data center to the lobby and talks to the FBI guys: “We’re here to seize the server” says the FBI guy. “Which server?” says Eric. The FBI guy says “all of them pertaining to [redacted]” and Eric says “Did you bring 6 semitrailers? Because that’s about what you’ll need.” Of course he was able to talk them down into something more reasonable and they left with a couple of USB hard drives in their briefcases.
EnlightenmentLiberal says
Serious question: I would assume that some of the off-the-shelf GNU stuff is pretty secure, and the NSA is not going to be breaking that encryption routinely. Surely this is true. Are you saying that they managed to break private-key/public-key encryption? Or are you saying that we already have NSA worms on all of our computers? Or are you saying that there are (purposeful?) vulnerabilities baked into common GNU packages for this purpose?
Of course they can track sender and receiver trivially, but the content can be indecipherable even to the NSA.
John Morales says
EnlightenmentLiberal:
<snicker>
“Can be”, if it’s not compromised.
Here’s something in the public domain: BULLRUN.
In passing — you imagine your processor chips, your USB sticks (or cables, even) are secure? Heh.
(You imagine you can – even in principle – vet all outgoing packets with uncompromised hardware? If not, you’re not secure.)
Dunc says
True, but it still makes stealing the data a few orders of magnitude harder than simply stealing a laptop, or grabbing a machine from under somebody’s desk. Remember, even when people are forced to use disk encryption, most people keep the password on a post-it note along with the machine. I’ve seen an organisation mandate BitLocker on all of their laptops, then use the same easily guessable password for all of them… And I’ll wager that better than 50% of those laptops have the password written on a piece of paper in their bag. They also used RSA keys for logins, which were mostly kept in laptop bags too, with the constant portion of the password written on a scrap of paper taped to the back of the key.
I guess we’re probably dealing with different spaces – I do a lot of work with people for whom security is, at best, an afterthought, and often not thought of at all. These people aren’t worried about sophisticated attackers – they mostly need to be protected from their own stupidity and incompetence. Heck, I don’t even deal with security – I’m just an application developer – and I give it more thought than most of these chumps.
Marcus Ranum says
I would assume that some of the off-the-shelf GNU stuff is pretty secure, and the NSA is not going to be breaking that encryption routinely. Surely this is true.
It’s really hard to say, but I wouldn’t say your certainty is warranted. The codebreakers have been very successful in the past, and I see no reason to assume they have suddenly become incompetent, or that software engineers have somehow learned to write perfect code. Standalone applications like GPG and TrueCrypt are believed by many to be problematic for the NSA, but I’m skeptical. Certainly, those tools might be able to – if correctly used – raise the cost to the point where it’s worth attacking another part of the communications channel. The key words there are “correctly used” and “communications channel.” In my post about traffic analysis I discussed how the very term “traffic analysis” was Fight Club-style classified for a long time; there’s another term, “target analysis,” which similarly refers to the methodology of analyzing and scoring the components of a communications architecture, and costing out (in terms of time and effort) where it’s subject to attack. I’ve had discussions with squirrel-people who say that tools like TrueCrypt and PGP are why there are hardware keyskimmers and in-device bugs: it shows there was a need for a means to collect passphrases, because cracking them had become too difficult. Unfortunately, that squirrel-logic doesn’t convey much information, because a well-funded and sensible attacker would never rely on just one mechanism; they would have a whole array of techniques and would use the most appropriate one based on analysis of the target. It gets complicated; let me give you an example: suppose the FBI comes and asks the NSA to break a certain person’s communications. Target analysis plus knowledge of how the FBI works will immediately take certain options off the table (the FBI would have already tried to plant a physical bug in the target’s vicinity or computer), and the NSA may not trust the FBI (scratch “may” – I was being silly: they absolutely don’t) to give them one of their top-drawer exploits. So they might offer to attempt an offline attack on the data: it might take longer, but it’s a known technique and they control how much they are revealing about their capabilities (“systems and methods”).
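If you want the flavor of target analysis boiled down to one toy example: score each component of the channel by estimated attack cost and go after the cheapest. Every component and every number below is invented for illustration:

```python
# Toy illustration of target analysis: score the components of a
# communications channel by estimated attack cost (time/effort collapsed
# here into a single made-up number) and attack the cheapest one.
# All entries and figures are invented for illustration.

attack_costs = {
    "break the ciphertext offline":   10_000_000,  # effectively "don't bother"
    "exploit the endpoint OS":            50_000,
    "plant a hardware keyskimmer":         5_000,
    "subpoena the cloud provider":         1_000,
}

cheapest = min(attack_costs, key=attack_costs.get)
print(f"Weakest link: {cheapest}")  # -> subpoena the cloud provider
```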
There are whole classes of attacks that seem obvious against some of the open source systems, which I haven’t seen performed. That has always surprised me and made me suspicious. I won’t offer to bet, because I might die of old age before I could collect, but I’d be willing to bet that the stuff we’ve heard about is the tip of a largish iceberg. Most of the attacks we’ve been seeing are practical cracks of communications security systems; the theoreticians’ work remains largely unseen, and it’s the really deadly stuff.
Are you saying that they managed to break private-key/public-key encryption?
Some implementations have definitely been broken. Pace Vernor Vinge’s delicious sideswipe in A Fire Upon the Deep (“remember when people used to believe in public key cryptography?”), there may not be any major breaks there. But every cryptosystem except the latest has been broken, and it’s a major paradigm shift for the target each time.
Here’s an example that points to a potential implementation problem:
http://eprint.iacr.org/2012/064.pdf
If the keyspace for public keys depends on a bad randomness sampler, there’s a big nasty happening right now, right under our noses. And there are lots of such nasties.
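The attack in that paper is almost embarrassingly simple: if two RSA moduli were generated with bad randomness, they can end up sharing a prime factor, and a greatest-common-divisor computation recovers it – no factoring required. A minimal sketch (the paper’s surveys use a much faster batch-GCD over millions of harvested keys; the toy moduli here are just for illustration):

```python
from math import gcd

# RSA moduli generated with a bad randomness sampler can share a prime
# factor; a shared factor falls out of a simple GCD, after which both
# moduli are trivially factorable. The moduli below are toy values.

def find_shared_factors(moduli):
    broken = []
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if g != 1:
                broken.append((i, j, g))  # keys i and j are both dead
    return broken

if __name__ == "__main__":
    # The prime 101 is "accidentally" reused between the first two keys.
    moduli = [101 * 103, 101 * 107, 109 * 113]
    print(find_shared_factors(moduli))  # -> [(0, 1, 101)]
```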
Or are you saying that we already have NSA worms on all of our computers?
I doubt that, unless they are amazingly well-hidden. The problem with worms is that once you find them, it’s really hard to explain the presence of worm-code. Take this, for example:
http://www.reuters.com/article/us-usa-cyberspying-idUSKBN0LK1QV20150217
Here’s the problem: in a modern computer, there is no layering between abstractions; they are simply expected to behave. Your graphics card has complete access to the bus, can do DMA, and can move stuff to/from system memory. And every so often, its upper-layer software checks its status and phones home to nVidia or whatever to see if it needs to download a new device driver. Aaaand there go your crypto keys.
I saw a demonstration at CanSecWest a few years ago (I tried briefly to find the reference but I have airport internets here; if you really need it, LMK and I’ll dig deeper later) where a fellow exploited a buffer overflow in a smartphone’s antenna controller chip – which sits on the system bus, of course – and from there patched the running process table in the operating system kernel on the main CPU and created a root shell bound to a port. That was a thing of beauty; it probably took him months to figure out. All it takes is one buffer overrun in some bluetooth chip negotiation and you can set up a device in an airport that invisibly and instantly vacuums the encryption keys out of any phone of a certain type that comes in range.
I own a Mossad-modified Samsung cryptophone (a squirrel-friend of mine gave it to me for Xmas – a touching gift, which I have not turned on anywhere near my network…). It runs a modified IP stack under Android that syslogs all the handoffs and transitions in the cell layers, all the options negotiations, the various signal strengths, etc. It’s a passive system: you turn it on and walk around an area, and the logfiles can be analyzed later to fingerprint all the various “stingrays” and systems that are interacting with your phone. I’ve seen the logs it generates and it’s really sobering. The worst was the home-built rogue access point across the street from a girls’ high school.
Or are you saying that there are (purposeful?) vulnerabilities baked into common GNU packages for this purpose?
I would be shocked if there weren’t.
One of my coders from back in the NFR days went on to work for Intel (he is a stack programming genius) and had his hands up to the elbows in Intel’s merged device driver package for their network physical interfaces. The way that works is there’s a composite driver with a top half that talks to any of the bottom halves of all the devices they make. He once casually mentioned that he could make one or two subtle mistakes in his code that he could turn around and sell for $250k apiece, easy.
Back when I worked for Trusted Information Systems, one of the guys I hung out with a fair bit was Carl Ellison. Once, over lunch, I hypothesized that it ought to be possible to build a protocol that looked exactly like a Diffie-Hellman key exchange to an observer on the wire, but which actually relied on an embedded secret. This was back when the Clipper Chip was the thing of the day, and Carl wandered off and designed it. That was how I became co-author on my only cite in cryptography (I think it was in the rump session at Crypto ’94).

There was also a rather famous bug in one public key implementation where the programmer had hardcoded a reference value from the random number generator, for compatibility testing, and forgot to #undef it before the code shipped. Such an easy mistake.
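For flavor, here’s a hypothetical Python analog of that class of bug – a deterministic “compatibility testing” value that was supposed to be stripped before shipping (the names and flag are invented):

```python
import os
import random

# Hypothetical analog of the shipped-debug-value bug described above:
# a deterministic seed wired in for compatibility testing. If the flag
# accidentally survives into the shipping build, every "random" key the
# product generates is identical and known to anyone with the source.

DEBUG_SEED = 0xC0FFEE  # test value -- must never survive into production

def session_key(nbytes: int = 32, debug: bool = False) -> bytes:
    if debug:
        rng = random.Random(DEBUG_SEED)  # fully predictable stream
        return bytes(rng.getrandbits(8) for _ in range(nbytes))
    return os.urandom(nbytes)  # the intended cryptographic source
```

One test path, one missed cleanup, and the whole cryptosystem quietly dies.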
Of course they can track sender and receiver trivially, but the content can be indecipherable even to the NSA.
I have the dubious distinction of having coined the term “rubber hose cryptanalysis” – the problem being that if your codes are unbreakable, the only thing left they can break is you.
(pats you on the head) sure they can’t. The computer is your friend.
Marcus Ranum says
Dunc@#8:
True, but it still makes stealing the data a few orders of magnitude harder than simply stealing a laptop, or grabbing a machine from under somebody’s desk.
No – it can make stealing the data vastly easier, because the attacker gets to use the system’s own data-locating capabilities against it. Stealing a laptop is a terrible move: it means you were in a position to gain physical access to the laptop, and you burned that access by making the theft obvious. It would be far more valuable to take over the laptop and leave it in place as your agent – besides, that way you’d be able to get any credentials off that laptop as the user presented them.
You should take a look at this:
https://www.pwnieexpress.com/
If I can get to a laptop, I can drop a pwn plug in a cubicle. And once I’ve done that, I don’t need the laptop. There are also USB sticks loaded with takeover code that you can just swipe into a laptop, and now it’s your remote management terminal.
You’re right that physical access to a target system makes things easier, but when you’re attacking a system, you really want its management software; after all, that’s what it’s for. With some kind of file archive or file database, it doesn’t matter where the data is striped to, or what its encryption keys are, etc.; it’s all just bits sitting in a bit-vault, with a management interface designed to make them locatable and decryptable.
Remember, even when people are forced to use disk encryption, most people keep the password on a post-it note along with the machine. I’ve seen an organisation mandate BitLocker on all of their laptops, then use the same easily guessable password for all of them…
I dunno about “most people” – I don’t hang out with them. Just to give you an example: I don’t even know any of my encryption keys. I know how to get hold of them when I need them but they’re all much larger and more complicated than I’d even want to write down on an 8×10 sheet of paper (with my handwriting and a crayon) let alone a post-it note.
I think the worst I’ve ever seen was one sales guy at a company who cleverly taught the sales team to use a zip tie to attach RSA keyfobs to the security lock hole in laptops, “so you can’t lose it and they won’t get separated.” People aren’t very good at this security stuff – you have to think it through over and over again and look at all the angles and if you miss just one: poof, game over.
Yes, you’re right, a lot of companies do it wrong. That’s why internet security is such a great sea of suck.
Marcus Ranum says
John Morales@#7:
Here’s something in the public domain: BULLRUN.
That was part of an ongoing strategy of mooting encryption systems, which the NSA has been pursuing pretty much forever. (Search for “NSA Crypto AG” for one example, or “NSA Hagelin” for another.) Beware of squirrel-people bearing gifts! Especially if those gifts are in the form of algorithm improvements for your key exchange.
Dunc says
I’m not really talking about spook stuff here… More grabbing your competitor’s customer list, or the HR payroll spreadsheet. Amateur shit. Sure, there are limits to how far you can protect yourself against determined and capable attackers, but most attackers are neither, and you can avoid a lot of trouble by not being the low-hanging fruit.
Having said that, here in the UK, there’s a long history of spooks, ministers, and military brass leaving their laptops in taxis and on trains….
EnlightenmentLiberal says
Thanks for the explanation.
eddie says
I suspect the situation is way worse than people think. There’s no point in encrypting data if, at some point, you actually have to type in a password, in plain text, on a keyboard device that has a logger built into its driver. I mean, the NSA might have its big supercomputers trying to crack passwords, but one of the things the CIA does best is recruit compromised people to do what they want – in this case, code the back door in from the ground up.
Marcus Ranum says
eddie@#14:
The CIA probably isn’t who’s hiring people to write code; that’s the NSA’s thing. And, yeah, the NSA has compromised all of the endpoints – they don’t need to crack keys because they can read them out of your system’s memory.