Recently I was invited to do a talk for the Minnesota ISSA at their June chapter meeting; I hope they don’t regret it.
If you haven’t done conference speaking, the routine is usually this: you get contacted with an invitation, say "yes" or "no", nail down a date and time, then ask "what’s my topic?" Then you get a vague idea of what the conference is about (if it has a theme) and propose a few titles for possible talks. After some back and forth (usually the program committee approving or rejecting titles) it’s on.
The organizers suggested I do a talk on “Who Can You Trust?” – in the sense of what system vendors, systems, cloud services, were trustworthy. I suggested that I invert it and just do a quick walkthrough of organizations that can’t be trusted. Spoiler: none of them can be trusted.
In security, the term "trust" is a magic word. "Trust" only makes sense in terms of a threat model – what are you afraid of happening, and what are its consequences? Once you have a threat model, you can decide whether a system is trustworthy in terms of that threat model. Traditionally, the internet security threat model was "keep the hackers out," but then around 1996 something started to change: there were foreign hackers. My suspicion is that the US government started worrying about having its systems compromised by other state powers at around the same time that it began successfully compromising the systems of other state powers. In general, when the US government starts loudly worrying that someone might do something to it, it’s because it is already doing that thing to someone else.

In case that sounds like paranoid conspiracy theory, go look up the very credible accounts of the NSA’s compromise of Crypto AG’s encryption systems [wapo] – old cold war stuff. The CIA also ran successful operations producing subverted Xerox copy machines (which kept a copy of each copy) that were made available to the USSR. One of the seminal papers in computer security was Roger Schell’s paper on subversion [schell], in which he proposed that systems be designed using trusted computer operating systems – software constructs that would monitor the hardware and prevent it from being subverted. [This triggered a long-lasting battle in security between those who believe such a thing is a) possible, b) practical, c) capable of performing adequately, and d) cost-effective, and those who don’t.] Schell’s concern was the design of the communications network for the US Air Force – and whether or not Soviet agents would be able to build backdoors into it.
My talk is here:
It’s not the best talk I’ve ever given, but I think it’s a fair run-down of how bad the situation is. Most commercial systems are backdoored in at least 2 – probably 3 or more – ways. The CIA has its own malware and backdoor stacks, the NSA has its own, and the FBI has a few, buys commercial implementations of backdoors, and mandates backdoors under the PATRIOT act – and then there are the Israeli, Chinese, and probably Russian backdoors. It’s a miracle that anything works at all, though you can probably blame a lot of mysterious system slow-downs and restarts on prosumer malware doing bitcoin mining or hunting for credit cards or bank accounts.
The conclusion I reach at the end is bleak: the bar for computer security is low, but if we try to raise it, there are already forces in place that will moot the impact of any improvements we try to make. In other words, "computer security will be just as bad as it can possibly be, and no better" [-Nat Howard]. That probably sounds extreme, but consider this: the deep subversion attacks are mostly latent – they are long-term backdoors that exist to re-establish a foothold in a system once the attacker has been kicked out. See the problem? Suppose that attackers have 3 backdoors into your system and only use one of them all the time; by some quirk of fate you might discover it and block it, and if they still want into your system they’ll burn their second backdoor – presumably each backdoor has a different command set and control channel, designed to make it hard to detect.

I have actually seen this in real life, at an incident response I was involved with: my client detected malware on a system and called for assistance to see what kind of data was being exfiltrated, and to where. Upon closer analysis, and after closing down the malware’s command channel, another command channel opened up over an LTE cellular signal a few days later. The signal was brief, then shut down, and the old malware started up again using a different network access method. Usually an attack like that would be considered sophisticated, and if the FBI or a government agency were involved in the investigation they would make dark mumblings about "state-sponsored actors" – but the fact is that it was all off-the-shelf malware of the sort you can buy in pen-testing tools like a PwnPlug [ars]. It’s vastly more sophisticated than most network administrators expect to deal with, but it’s far down the power curve from some of the malware that the NSA brilliantly leaked.
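In case it helps to see that fallback behavior spelled out, here’s a little toy simulation of an implant with layered command channels. The channel names, timings, and everything else in it are invented for illustration – none of it is a detail from that incident:

```python
# Toy model of an implant with layered command channels: it uses one at a
# time and goes dormant before waking the next. Purely illustrative.

CHANNELS = ["https-c2",   # primary: used routinely, most likely to be found
            "lte-modem",  # burned only if the primary dies
            "dns-tunnel"] # last resort
DORMANT_DAYS = 4          # lie low after losing a channel (invented value)

def simulate(total_days, blocks):
    """blocks maps a day number to the channel the defender cuts that day."""
    blocked, active, quiet_until = set(), 0, 0
    for day in range(total_days):
        if day in blocks:
            blocked.add(blocks[day])
        if CHANNELS[active] in blocked:
            active += 1                       # burn the next backdoor...
            if active == len(CHANNELS):
                print(f"day {day:2}: out of backdoors; implant goes dark")
                return
            quiet_until = day + DORMANT_DAYS  # ...but lie low first
        if day < quiet_until:
            print(f"day {day:2}: (silence -- defender thinks they won)")
        else:
            print(f"day {day:2}: routine check-in over {CHANNELS[active]}")

simulate(12, {3: "https-c2"})  # block the primary on day 3; watch LTE appear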
The NSA’s malware (which they, no doubt, paid a pretty penny for) is intense stuff. The tools that leaked were old, but the NSA’s "equation group" [NSA X Group] controlled the entire Flame/Duqu/Stuxnet software series, which was implicated in the attacks on Saudi Aramco’s petroleum production infrastructure, the attack on Iran’s uranium enrichment centrifuges at Natanz, and the generator failure at Iran’s nuclear reactor at Bushehr. Depending on who you talk to, that was an American operation with help from the Israelis, or the Israelis "went rogue" and used NSA tools to launch attacks on their own. In either case, it’s cause for concern, because it means the NSA is sharing US government-developed malware with Israel.

One of the more disturbing tools that leaked from the NSA’s collection was what appears to be a piece of malware that injects code into the firmware of hard drives (most major manufacturers were represented). That’s a perfect illustration of the subversion principle: it doesn’t matter if the operating system tries to use its filesystem to delete malware, if the firmware of the hard drive makes those sectors invisible and remaps them except at system boot. Another example is the subversion function shipped in every Intel processor since the mid-2000s: the Intel "Management Engine" (IME), a separate CPU that has complete access to the network interfaces, the rest of the processor, system memory, and the file systems. The IME is not the only example of this sort of processor-based subversion, but it’s the sneakiest that has been discovered so far.
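Going back to the hard drive example for a second: here’s a toy model of the trick. It is nothing remotely like real drive firmware – the sector numbers and the boot-window behavior are invented – but it makes the principle concrete in a few lines:

```python
# Toy model of the subversion principle at the drive-firmware level: the
# OS can only see sectors through the firmware, so the firmware gets to
# lie about what is actually on the platters. Purely illustrative.

SECTOR = 512

class SubvertedFirmware:
    def __init__(self, platters):
        self.platters = platters          # LBA -> raw sector bytes
        self.hidden = {9000, 9001}        # sectors holding the implant
        self.boot_window = True           # implant reachable only early on

    def read(self, lba):
        if lba in self.hidden and not self.boot_window:
            return b"\x00" * SECTOR       # the OS sees a blank, boring sector
        return self.platters.get(lba, b"\x00" * SECTOR)

    def write(self, lba, data):
        if lba in self.hidden:
            return                        # "deleting" the implant does nothing
        self.platters[lba] = data

    def end_of_boot(self):
        self.boot_window = False          # after boot, the implant vanishes

drive = SubvertedFirmware({9000: b"IMPLANT" + b"\x00" * (SECTOR - 7)})
drive.end_of_boot()
drive.write(9000, b"\x00" * SECTOR)                 # the OS tries to wipe it
assert drive.platters[9000].startswith(b"IMPLANT")  # still on the platters
assert drive.read(9000) == b"\x00" * SECTOR         # and invisible to the OS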
The US government is, naturally, terrified that China is preparing to do the same thing in return, as the US and its allies purchase 5G network gear from Huawei. Huawei’s stuff is compellingly better than the US/UK/EU brands, and it’s cheaper, too. Not only is this a national security threat, it’s a threat to Cisco’s and Intel’s bottom lines. The US is reacting like a narcissist who has been punched in the nuts, which is pretty much exactly what has happened.
I have been predicting this since the government started publicly talking about "information warfare" in the mid-to-late ’90s. When 9/11 happened, the intelligence community got a huge infusion of cash for "cyberdefense" and … I know that astute stderr readers can predict exactly what happened: they spent nearly all of it on offense. There is a long argument I can make [and have made elsewhere; unfortunately it was deleted when I discovered that the site I published it on was fond of political equivocation and was run by a libertarian] that in cyberspace the best defense is a good defense, and that the normal balance between offense and defense doesn’t hold because there is no battlefield. The only way a strong offense is a good defense is if you’re trying to terrify everyone, which is the US’ default strategy. As you can see, it has worked great on cybercrime.
If you catch the part about “offline attacks” against iCloud, please listen carefully to it and think really hard before you start jumping on me about how great Apple is. They have great marketing, nothing more.
I was shocked when I researched this talk and discovered that there are 15 million Office 365 users and that 50% (by some counts) of corporate sensitive data at rest is in Office 365. Since "cloud computing" became a thing, I have thought it was a dumb idea, except in the case of a few limited client/server applications – but apparently the entire industry has collectively lost its mind. Or I have.
Most Americans don’t know that a good chunk of Huawei’s networking business came out of a joint venture (Huawei-3Com), formed with the approval of the US government, between Huawei and the US computer device and communications company 3Com. 3Com once owned the small network interface card and hub business, as well as a goodly chunk of the modem business, but it failed to migrate up the stack and was looking at becoming yet another failed Silicon Valley former giant when someone had the brilliant idea of taking over the Chinese market by bringing US technology to a partner that could provide cheap labor. Naturally, the Chinese government was happy to do this, because it trained a whole generation of networking equipment makers and came with ready-made products that could be repackaged. Maybe Huawei is a double-reverse back-knuckle trojan horse. Maybe the whole thing is a CIA op.
Jörg says
Here is the video link, jumping directly to the beginning of Marcus’s talk:
aquietvoice says
Neat! It’s pretty fun to listen to these things like podcasts as I’m working on something else – learning even a little about a new field always brightens my day a little.
Question:
Does holding organizations to a higher standard of privacy mean we can force them to expend energy on parallel constructions and stuff?
In other words, if we can’t use denial-of-my-information attacks*, can we at least use they-have-to-use-more-energy attacks?
* Note: “denial-of-my-information attack” = writing anything down where it is secure from government access.
Note 2: My goodness whistleblower protections have never been more vital.
Marcus Ranum says
A longer comment on my own posting; is this in bad taste?
Even though governments (the US and China most particularly) have crushed their citizens’ right to privacy, that’s not good enough for them. They have backdoors into everything, and they refuse to acknowledge that none of it works except retroscopically. There’s something in the authoritarian mind that refuses to realize this, and refuses to listen. For just one example, the Boston Marathon bombers were not stopped or deterred, in spite of the NSA being able to pinpoint them as suspects fairly quickly – because the bombs had already done their damage. If the bombers used any encryption, it didn’t matter whether the NSA or FBI was able to read their communications, because:
Person 1: "Hey what do you want to do Saturday?"
Person 2: "Let’s bomb the marathon."
does not give an investigator predictive power. What it does is show who else, if anyone, was in the conspiracy – after the bombs go off.
There will always be the occasional blockheads who conspire openly – usually with an undercover FBI plant. Those are fairly easy to catch, and to throw a bit of parallel construction over to hide the fact that they basically Facebook-messaged some stranger, "hey let’s do bombs." "LOL ok." The system is optimized to catch those numbskulls, while it is admittedly helpless against anyone who knows what they are doing.
Meanwhile, the FBI (the laziest fucking secret police on Earth) keeps banging the tired old drum about "lawful access to keys" – i.e., government-mandated backdoors in encryption. Once crypto moved into software, the NSA could no longer backdoor the encryption product-makers the way it had in the Crypto AG days, because software mutates on a rapid cycle compared to hardware. The NSA did some yeoman work backdooring commercial software crypto products (their biggest score was compromising RSA’s BSAFE implementation for nearly a decade of products) but there are too many of them. The FBI aren’t the only lazy fuckers out there. If you remember the Clipper Chip [NYT], it had hardware embedded in it that generated a spare key for the encryption, which went somewhere and made the data accessible to government agencies with an appropriate warrant. The hardware had some fancy protections to make it harder to grind the top of the chip off and read the data with an electron microscope, but the NSA basically pulled the program in fear when folks like Ross Anderson began promising that not only would they do exactly that, whatever it took, they would also reverse-engineer the encryption algorithms the NSA was using in the chip – allegedly a watered-down version of the military-grade crypto used in the STU-III.
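For anyone who doesn’t remember how the Clipper scheme was plumbed, here’s a toy sketch of the escrow mechanics. The real chip used Skipjack, an 80-bit unit key burned in at manufacture, and a "LEAF" field with a 16-bit checksum; the stand-in cipher, key sizes, and field layout below are all invented for illustration:

```python
import hashlib
import secrets

def xor_cipher(key, data):
    """Stand-in cipher (SHA-256 keystream XOR); the real chip used Skipjack."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

FAMILY_KEY = hashlib.sha256(b"shared by every chip").digest()
UNIT_ID = b"\x00\x00\xbe\xef"  # this chip's serial number (made up)
UNIT_KEY = hashlib.sha256(b"escrowed at manufacture" + UNIT_ID).digest()

def encrypt_with_leaf(session_key, plaintext):
    """Encrypt -- and also emit the 'spare key' field that rides along."""
    ciphertext = xor_cipher(session_key, plaintext)
    # Session key wrapped under this unit's escrowed key, tagged with the
    # unit ID, all wrapped under the family key, and sent on the wire.
    leaf = xor_cipher(FAMILY_KEY, UNIT_ID + xor_cipher(UNIT_KEY, session_key))
    return ciphertext, leaf

def warrant_decrypt(ciphertext, leaf, escrow_db):
    """What the government does: unwrap the LEAF, fetch the escrowed key."""
    inner = xor_cipher(FAMILY_KEY, leaf)
    unit_id, wrapped = inner[:4], inner[4:]
    session_key = xor_cipher(escrow_db[unit_id], wrapped)
    return xor_cipher(session_key, ciphertext)

sk = secrets.token_bytes(32)
ct, leaf = encrypt_with_leaf(sk, b"attack at dawn")
assert warrant_decrypt(ct, leaf, {UNIT_ID: UNIT_KEY}) == b"attack at dawn"
```

Everything hinges on the family key and the escrow database staying secret, forever – which is the cryptographers’ point.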
Anyhow, they keep whining about it: [Slate]
The cryptographers are correct. Backdoors such as "lawful access" and the Intel Management Engine are disasters waiting to happen. So far, researchers (except probably in Israel, Russia, and China) have not figured out how to remotely activate the IME backdoor, but it represents a backdoor into every US government system built on Intel processors – what a bunch of chucklefucks. Here they are, complaining that their systems are getting hacked a lot, and then they build a backdoor in and get caught at it.
I am one of the rare computer security experts who openly admits to not knowing much about encryption techniques – I think of encryption as a tactical solution applied against a strategic problem, and I’m interested in the strategy. But I still have the distinction of being listed as a co-author on a paper Carl Ellison presented at the rump session of Crypto (’95?), based on an idea I dropped over lunch. It’s this: if you own the implementation of a public key system, is it possible to develop an algorithm that is indistinguishable on the wire from a Diffie-Hellman or RSA key exchange? Suppose you’re the NSA and you own the code in the chip: can you use an unpublished 256-bit secret to leak enough bits of the exchanged key into the exchange that nobody except you can tell? Carl’s answer over lunch was "probably," and he went and had several such systems implemented by dinnertime. It’s such an obvious idea that I’d be shocked if nobody at the NSA had it ages ago.

So I assume that many key exchanges are compromised. Worse, the endpoint can be compromised if you’re using certificates, and the communications can be unrolled offline. There are some aspects of the whole digital certificate landscape that look suspiciously, to me, as though they were designed to come apart neatly in a retroscope. I was hoping Edward Snowden was going to disclose something like that, but he mostly focused on the big collection programs. Damn it. The scenario looks like this: suppose the NSA has an undisclosed stack smashing hole in some popular web server software. Instead of smashing out to a shell (the usual attack) they have a piece of software that smashes the stack, pulls the unencrypted certificate out of process memory, exfiltrates it, and then goes back to behaving normally. At which point you harvest everyone’s certificates for years and unroll the crypto at your leisure. Supposedly the NSA has a lot of crypto geniuses working for them (I have met some very smart people from the TLAs, but I think they are a 1%er kind of minority) – this sort of thing would immediately occur to them.
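To make the lunch idea concrete, here’s a minimal sketch of a subverted Diffie-Hellman endpoint. This is my reconstruction of the general concept, not Carl’s actual construction, and the parameters are toy values:

```python
import hashlib
import secrets

# Toy DH parameters: a Mersenne prime, for readability. A real exchange
# would use a 2048-bit safe prime or an elliptic-curve group.
P = 2 ** 127 - 1
G = 3

IMPLEMENTOR_SECRET = b"unpublished 256-bit value"  # only the subverter knows

def subverted_keypair(wire_nonce):
    # To everyone else this looks like a random ephemeral key; actually
    # the private exponent is derived from a value that crosses the wire
    # in the clear (think: the handshake's random nonce).
    a = int.from_bytes(
        hashlib.sha256(IMPLEMENTOR_SECRET + wire_nonce).digest(), "big")
    return a, pow(G, a, P)

# --- an ordinary-looking handshake ---
nonce = secrets.token_bytes(32)       # sent in the clear
a, A = subverted_keypair(nonce)       # the subverted endpoint
b = secrets.randbelow(P - 3) + 2      # the honest peer
B = pow(G, b, P)
shared = pow(B, a, P)                 # the session key both sides derive

# --- a purely passive eavesdropper holding IMPLEMENTOR_SECRET ---
a2, _ = subverted_keypair(nonce)      # re-derived from wire traffic alone
assert pow(B, a2, P) == shared        # same session key, no tampering needed
```

On the wire, nothing is detectably different: the public value is still a valid group element and the nonce is still random bytes. Without the implementor’s secret, there is nothing to distinguish it from an honest exchange.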
Meanwhile, the fucksticks at the FBI just keep trying to "cop on" and cannot get this idée fixe out of their minds.
Marcus Ranum says
aquietvoice@#2:
Does holding organizations to a higher standard of privacy mean we can force them to expend energy on parallel constructions and stuff?
Probably not. As we saw with the capture of the Silk Road guy, they aren’t willing to put a lot of work into even a plausible parallel construction. Make it harder and they’ll just stop trying, or they’ll whine more – but either way, they won’t let us get away with making it harder for them.
My goodness whistleblower protections have never been more vital.
I think it’s too late for that. In case nobody’s noticed, whistleblower protections mostly amount to painting a target on your back. What we really need is some heroes, in the future, who deliberately embed themselves in agencies so they can leak. [cyberinsurgency] That, by the way, would raise the agencies’ personnel costs and internal security costs to mind-bending levels, and would have a bigger impact than anything else we can do. Back in one of my talks at Black Hat I mentioned this and got immediate pushback from some of the spooks in the room – "why do you hate America?" – which I consider a sincere compliment, coming from some of the people who were saying it.
We are fucked, basically. Their strategy all along has been to make it impossible to have privacy without being a target, and they have succeeded in that goal by throwing gigantic amounts of our money at thoroughly fucking us.
When you start to retire out of a field, you get into this "OK, this is the last talk I’m going to do…" mode, and sometimes you let it all hang out. Back in ’19 I did the closing keynote for ISSA’s world event, out in LA, and took everyone in the room to task: the computer security industry has built the chains that it’s going to have to wear – the people who implement this malware and design collection systems are software engineers who work for computer security companies. The NSA asked us, "what would you charge to design us a really great set of chains?" and a lot of people in the security industry bid on that project. The field has corrupted itself.
cvoinescu says
That was a good talk. Thank you. A bit bleak, except the part about Huxley rather than Orwell.
That was utterly terrifying, given that it was clearly true. (And largely self-inflicted – or emergent, actually – with only minor encouragement from the government.)
I started woolgathering at some point, thinking about what it would take to build a reasonably secure smart grid. I still think it would be possible, if an organisation built the thing themselves, writing “virgin” code on bare metal on low-power microcontrollers (the kind where you can dissolve the package and put it under a microscope and check that the silicon does what it says on the tin, and does not have, um, bonus features), and staying away from ASICs and FPGAs and ZigBee and single-chip network stacks (like the WIZnet W5100 used in cheap IoT garbage).
Pierce R. Butler says
Recently I was invited to do a talk for the Minnesota ISSA … When you start to retire out of a field, you get into this “OK this is the last talk I’m going to do…” and sometimes you let it all hang out. Back in ’19 I did the closing keynote for ISSA’s world event, out in LA, and took everyone in the room to task…
Retirement – you haven’t pissed off enough Important People to be doing it right.
kestrel says
That was great. Totally not my field at all, and not something I would normally think about – fascinating and scary, too. I found the part about batteries really interesting; that makes so much sense.
Marcus Ranum says
@Pierce R. Butler: there are no important people.
dangerousbeans says
so when is the freethought blogs shuttle auction?
Marcus Ranum says
dangerousbeans@#9:
so when is the freethought blogs shuttle auction?
What kind of shuttle? I’m not the weaver here…
voyager says
It’s a subject I know very little about, but that was interesting. The implications are huge for political activism.