In the 2023 film Oppenheimer, one of the concerns during the Manhattan Project to develop the nuclear bomb was whether a test explosion might create such high temperatures that the nuclei of nitrogen atoms in the atmosphere would fuse together, triggering a chain reaction that essentially sets the atmosphere on fire, frying the entire planet. Oppenheimer tells General Leslie Groves, the director of the project, that Arthur Compton’s calculations showed that the chance of such a thing happening was less than three in a million, and thus acceptable. When Groves says that he was hoping the answer would be zero, Oppenheimer replies that you could not expect such an answer from theory alone.
While the idea that theory can never give you absolute certainty about anything is correct, the actual story is more complicated. It turns out that the Oppenheimer-Compton exchange is based on an article by Pearl S. Buck, drawing on an interview she had with Compton, and some of the details are apocryphal. Hans Bethe, head of the theoretical program at Los Alamos, who had shown how fusion reactions lay behind the energy production of stars, had concluded early on that the chance of a runaway fusion reaction igniting the air was so small as to not be worth worrying about.
But the idea that three in a million is an acceptable level of risk for deciding to go ahead when something catastrophic might happen (a threshold now referred to as the Compton number) has taken root, and it was recently invoked by physicist Max Tegmark in relation to whether current AI efforts could escape from human control and lead to a runaway catastrophe, similar to the fears about nuclear weapons. He argues that calculations analogous to Compton’s should be done for AI before that work is taken further.
Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems. Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat. The US government went ahead with Trinity in 1945, after being reassured there was a vanishingly small chance of an atomic bomb igniting the atmosphere and endangering humanity.
In a paper published by Tegmark and three of his students at the Massachusetts Institute of Technology (MIT), they recommend calculating the “Compton constant” – defined in the paper as the probability that an all-powerful AI escapes human control. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after calculating the odds of a runaway fusion reaction to be “slightly less” than one in three million.
Tegmark said that AI firms should take responsibility for rigorously calculating whether Artificial Super Intelligence (ASI) – a term for a theoretical system that is superior to human intelligence in all aspects – will evade human control.
“The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,” he said. “It’s not enough to say ‘we feel good about it’. They have to calculate the percentage.”
Tegmark said a Compton constant consensus calculated by multiple companies would create the “political will” to agree global safety regimes for AIs.
[Note that the newspaper article erroneously refers to the threshold as one in three million rather than the three in one million that Tegmark’s paper correctly gives.]
As for the film Oppenheimer, it was a huge critical and popular hit, winning seven Academy Awards, including for best picture, best director, best lead actor (Cillian Murphy as Oppenheimer), and best supporting actor (Robert Downey Jr. as Lewis Strauss). It is long (around three hours) and I found the first half to drag. It had a lot of characters and complicated, interweaving story lines. Although the general story, the names of the principals, and the science involved were all familiar to me (the only scientist named in the film whom I actually met in person was Hans Bethe), I found the way the story was told, with its frequent jumping back and forth in time, to be confusing. I wondered how viewers without the benefit of a scientific background in that area made sense of it. I also found the ending of this part of the film, showing the triumphalism of the US and the cheering that accompanied the news of the bombing of Hiroshima and Nagasaki, to be distasteful. The deaths of all those ordinary Japanese people will, or at least should, remain forever part of the collective guilt that all physicists have to live with.
The second half of the film was much better. It dealt with the political intrigues that led to the revocation of Oppenheimer’s security clearance and the way that Lewis Strauss, while pretending to be a disinterested person and even a supporter of Oppenheimer, schemed to take him down during the dark days of the McCarthy era.
The film was successful in showing the complex character of Oppenheimer. His sterling scientific credentials and his abilities as an administrator shepherding the work of so many arrogant scientists were never in doubt, and he was highly respected by his peers, which is why he was able to recruit so many of the most eminent physicists to come to Los Alamos, or to work elsewhere on the Manhattan Project. His political views and his willingness from time to time to compromise his principles, as well as his seeming naivete about what the consequences of his work might be, were well displayed in the film.
Here’s the trailer.
A fun fact about Oppenheimer: Before WW2 he made a prediction about the possibility of neutron stars.
I recall it being considered that there was a similarly tiny chance that the LHC would create a nanoscopic black hole… which, according to physics as we currently know it, wouldn’t so much eat the Earth as explode in a burst of Hawking radiation in the Earth’s core after who knows how long. It doesn’t appear in retrospect that it was ever possible. There were also a bunch of conspiracy theories about the LHC having opened a portal to Hell, accidentally or on purpose, but those were easily ignored.
What can’t be so easily dismissed is the possibility of a “suicide pact technology” that wipes out any species that discovers it. The atomic bomb wasn’t it. The LHC wasn’t it. AI probably won’t be it, though it could be. Sooner or later we might find out that one of those “estimated three in a million chance that it could instantly or near-instantly wipe out humanity” concerns is the real deal the hard way, which is one reason why some people are so keen to create one or more off-world civilizations. Of course, for the billionaires pouring money into this, creating their own kingdom that is theoretically unaccountable to outside forces probably holds at least as much appeal.
Snowberry>
The combination of Capitalism, Industrialization, and Fossil Fuel use… :/
Not yet, anyway.
“Leading voice in AI safety” = “person who makes a living making up AI doomsday scenarios”. I see he’s also a mathematical platonist and an Effective Altruist, so that definitely gives me enormous confidence in his opinions.
Still, 90%, huh? I’d love to see the details for that… I’ll bet you lunch that it basically boils down to “we made up some numbers based on our assumptions, fed them into an equation we also made up, and it confirms our assumptions!” I mean, seriously, what is the theoretical underpinning for your AI threat model, given that we know absolutely nothing about what “advanced AI” might actually look like?
There’s an old joke about economists that goes: if an economist wanted to know about horses, he wouldn’t actually go and look at horses, he’d just sit in his study and think “what would I do, if I were a horse?” Well, “AI safety researchers” are exactly like that, except the thing they’re trying to imagine being doesn’t even exist.
We do at least know some things about horses. We don’t know anything about “advanced AI”.
Dunc, #5,
The other old economist joke is the one about the physicist, the chemist and the economist stranded on a desert island. A tin of soup washes ashore, but they can’t open it to get at the soup. The physicist says “let’s bash it open with a rock”, but they agree that plan might result in getting the soup everywhere. The chemist says “let’s set a fire under it so the heat agitates the soup into blowing open the tin”, but that has the same issue. Then the economist pipes up and says “it’s easy: first, assume we have a tin opener…”.
Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat.
IANAE, but I find it very hard to believe that one could use the same, or even similar, calculations to determine the danger posed by two very different things such as runaway nuclear fusion vs. runaway AI.
I also found the ending of this part of the film, showing the triumphalism of the US and the cheering that accompanied the news of the bombing of Hiroshima and Nagasaki, to be distasteful.
Distasteful, but a real part of the history: many Americans really did react that way — as you’d expect people to react to any news of their/our side scoring a big win against an enemy — especially when they’re an enemy we’ve been at war with for a few years already. (And remember that the worst long-term effects of nuclear bombs didn’t become apparent, even to the scientists who had invented them, until after the war ended.)
All in all, it was a rather unpleasant film about a very unpleasant bit of our history. My own biggest complaint about it was the bits where Oppenheimer was imagining having sex with his girlfriend in front of whoever was questioning him about his sex life. I know it was meant to show how he felt having his private life questioned by strangers, but it got on my nerves and struck me as unnecessary. I don’t mind a good sex scene, just not in a movie that’s supposed to be about something entirely non-sexual.
At least with fusion they knew enough about the mathematics and physics to do a calculation. What would an AI calculation be based on? What are you even measuring?
Not a bad movie, and considering it’s not about muscle people in perv suits, the three-hour running time was quite a brave decision -- for me at least it didn’t drag at all.
Oppie was indeed a strange one: a normal ‘un would have told that committee to piss up a rope; he could have had any position in academia and maybe continued his astrophysics work, as BirgerJohansson says.
Also, his understanding of politics seemed quite limited: in an interview Emilio Segré recalled that at the start Oppie seemed convinced that Segré, an Italian, was inevitably going to be a big fascist (“era convinto che fossi un fascistone, come poteva!” -- “he was convinced I must be a big fascist; how could he!”), which is really strange, as Segré had to get the hell out of Italy pronto.
Lastly, about AI: I REALLY feel that all this jazz about general, superhuman machine intelligence is just ridiculous hype. Like Ahcuah points out, what’s there to measure? Nobody knows how our noggins work, so how does one replicate an unknown, let alone improve it?
The usual chesty techbro crap. I’d say they are tarnishing their reputation, but there’s nothing at all left to tarnish. Pathetic shills.
Lastly, about AI: I REALLY feel that all this jazz about general, superhuman machine intelligence is just ridiculous hype.
I tend to agree. In the foreseeable future at least, the biggest threats we need to watch out for are not AIs oppressing or killing humans of their own volition; it’s humans using AI(s) to oppress or kill other humans. And the former is just a distraction by people looking to perpetrate the latter.
I’m dubious about the very idea that probabilities can usefully be assigned in such cases. In the case of igniting runaway fusion in the atmosphere, either this is physically possible, or it isn’t. There’s no sense I can see in which there’s a certain probability that it’s physically possible. Even in the case of runaway AI, we have to assume that this is or might be technologically achievable -- that if a “mad scientist” wanted to achieve this and had sufficient resources, they could or might be able to produce one. Once we make such an assumption, the important question is “How do we ensure it doesn’t happen?”. To which the answer seems to be -- first ensure the right sort of people are in charge of governments and able to control tech corporations, then negotiate an international agreement. We seem to have some way to go to achieve that.
KG,
You are right that it is either possible or it isn’t. But assuming it is possible does not mean that it is inevitable.
I do not know what exactly went into the probability calculations, but here is a rough guess.
All reactions in quantum mechanics are probabilistic. To get nuclear fusion, you need to reach a certain temperature threshold so that nuclei can overcome their mutual Coulomb repulsion. But even below that threshold, nuclei can quantum-tunnel through the Coulomb barrier, and that involves probabilities, with the probability of tunneling rising as the temperature rises.
To add to that, you would need to see whether there could be a chain reaction, where some fusion causes the temperature to rise locally, which increases the probability of other nuclei fusing, and so on.
This is entirely a guess on my part, but the idea of computing a probability seems reasonable.
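To give a feel for the kind of number involved, here is a toy Python sketch of the standard Gamow tunneling factor for two nitrogen nuclei. This is purely my own illustration of the scaling, not anything from the actual wartime analysis: it uses E ≈ kT as a crude stand-in for the thermal energy and ignores the Maxwell-Boltzmann average, electron screening, and nuclear structure.

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
AMU_MEV = 931.494       # atomic mass unit in MeV/c^2

def gamow_tunneling(z1, z2, a1, a2, energy_mev):
    """Crude Coulomb-barrier tunneling probability exp(-sqrt(E_G/E))
    for bare nuclei at centre-of-mass energy E (in MeV), where
    E_G = 2 * mu * c^2 * (pi * alpha * Z1 * Z2)^2 is the Gamow energy."""
    mu = (a1 * a2) / (a1 + a2) * AMU_MEV            # reduced mass, MeV/c^2
    e_gamow = 2.0 * mu * (math.pi * ALPHA * z1 * z2) ** 2
    return math.exp(-math.sqrt(e_gamow / energy_mev))

# Nitrogen-14 on nitrogen-14, using E ~ kT as a crude stand-in
for kt_mev in (0.01, 0.1, 1.0):   # roughly 1e8 K up to 1e10 K
    p = gamow_tunneling(7, 7, 14, 14, kt_mev)
    print(f"kT = {kt_mev:5.2f} MeV -> tunneling factor ~ {p:.3e}")
```

Even at kT = 1 MeV (a temperature of roughly ten billion kelvin) the factor comes out around 10^-56, which gives some intuition for why nitrogen fusion is so hard to achieve thermally.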
Mano @12: Yeah, you’d have to calculate cross sections for high-energy particle collisions (in the air, that would be mostly nitrogen-nitrogen collisions), which are inherently probabilistic. Since high-energy theory back then was patchy at best, assumptions had to be made to come up with ‘reasonable’ cross sections.
One of the simplest arguments against the idea of a runaway chain reaction setting the atmosphere on fire is basically “If a chain reaction like that were possible, why hasn’t it happened already?” We know that there are natural chain reactions happening (there are at least a couple of uranium mines which are pretty much natural reactors).
This is an even better argument against the ‘LHC will create black holes’ idea, as we can point to cosmic ray particles that show up with a whole lot more energy than the LHC can generate, and so if those weren’t creating black holes, the LHC can’t.
Unfortunately, since there has never been a natural fission reaction as intense as the early atomic bombs, it’s harder to definitively rule out the possibility, but it still seems likely that even if a chain reaction could start, it would be self-limiting as a result of the general diffuseness of the atmosphere. The shockwave cools as it spreads outward and gets larger, after all.
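Schematically, ‘self-limiting’ comes down to an energy balance: a runaway needs the local fusion heating rate to beat the rate at which the hot region loses energy. Here’s a toy Python version using the standard scalings; all the inputs are invented purely to exercise the comparison, and a real calculation (with actual rates and a safety factor) would also have to include the expansion cooling I mentioned.

```python
import math

def fusion_heating(n, sigma_v, q):
    # Volumetric heating for identical nuclei: (1/2) * n^2 * <sigma v> * Q.
    # The reactivity <sigma v> collapses exponentially at low temperature
    # because of the Coulomb-barrier tunneling factor.
    return 0.5 * n * n * sigma_v * q

def radiative_cooling(n_ion, z, t):
    # Bremsstrahlung-like loss, scaling as n_e * n_i * Z^2 * sqrt(T),
    # with n_e ~ Z * n_i for a fully ionized gas; prefactor omitted.
    return (z * n_ion) * n_ion * z * z * math.sqrt(t)

# Invented, non-physical inputs purely to exercise the comparison:
heating = fusion_heating(n=1.0, sigma_v=1e-6, q=1.0)
cooling = radiative_cooling(n_ion=1.0, z=7, t=1.0)
print("runaway possible?", heating > cooling)  # False for these inputs
```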
And now we’re away from the realm of science and into engineering.
First: “if it’s possible, why hasn’t it happened already?” is not a useful question, because the answer could very easily be “the conditions weren’t perfect, or close to it”. It’s possible to lose hydraulic pressure in the control system for an airliner’s control surfaces, in which case your stick is dead. Why hasn’t it happened already? Well, it has. That’s why airliners have two entirely redundant hydraulic systems, in case one leaks, then you’ve still got a backup. And both have monitoring systems so if there’s a slow leak, you spot it in good time and can limp to safety. Ah, you might say, but what if you lose ALL pressure in BOTH systems AT THE SAME TIME? Which sounds like a ridiculous question because in what scenario is that even possible? Google “United Airlines Flight 232” for details of when that happened.
The point being: lots of things that CAN happen don’t… until they do. And bear in mind that the Manhattan project was messing around with things that very definitely had never happened before anywhere on earth. Yes, there are natural nuclear reactors, but they’re underground and slow and quiet and all sorts of other things that Trinity was not.
The other thing to consider is the risk profile. This consists of two things you have to estimate:
(1) how likely is this thing to happen?
(2) how bad is the consequence of it happening?
Normally in my professional life, things for (1) are measured on a scale based on frequency, from “it’ll definitely happen multiple times per year” (for something like “bloke hits the wrong button, or fails to hit the button, for some reason”) down to “it’s likely to happen once in a hundred thousand years” (half a dozen unlikely events line up and a bad thing happens).
Things in slot (2) are measured on a scale going from “chap gets startled by a loud noise”, through “chap loses a finger”, up through “couple of chaps get killed” to “everyone in the housing estate 400m downwind dies in their beds from the gas”. We don’t, generally, need to consider consequences much larger than that -- for a chemical plant, 50 fatalities is functionally the same as 10,000. Your plant is closing, everyone’s losing their job and someone’s going to jail either way.
“The atmosphere burning” may very well have a low, low figure in slot (1), since it hasn’t happened at any point in the last 4 billion years or so… but what figure in slot (2) would you put on “immediately killing every single thing on earth and rendering it uninhabitable forever”?
I can see why they thought about it really hard, is what I’m saying -- however unlikely it was to happen, the consequences are unimaginably huge.
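To make the shape of the problem concrete, here’s a toy version in Python -- every band value and unit below is invented for illustration, since real sites calibrate their own matrices:

```python
# Frequency bands: rough order-of-magnitude events per year (invented).
LIKELIHOOD_BAND = {
    "multiple times a year": 1e0,
    "once a decade": 1e-1,
    "once in 100,000 years": 1e-5,
}

# Consequence bands in notional "cost" units (also invented).
CONSEQUENCE_BAND = {
    "startled operator": 1.0,
    "lost finger": 1e2,
    "multiple fatalities": 1e6,
    "everything on earth dies": float("inf"),
}

def expected_loss_per_year(likelihood: str, consequence: str) -> float:
    """Expected loss per year = frequency x consequence."""
    return LIKELIHOOD_BAND[likelihood] * CONSEQUENCE_BAND[consequence]

# An ordinary worst case can be tolerable at a low enough frequency...
print(expected_loss_per_year("once in 100,000 years", "multiple fatalities"))  # ~10
# ...but an unbounded consequence swamps any finite probability:
print(expected_loss_per_year("once in 100,000 years", "everything on earth dies"))  # inf
```

Which is the formal way of putting it: no matter how small slot (1) gets, an effectively unbounded slot (2) keeps the product unbounded, and the usual risk-matrix machinery stops being much help.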
I, with my very basic physics knowledge, know that fission explosions are used to trigger the fusion of deuterium and tritium. But it never occurred to me to wonder if a regular fission explosion (that is, one that is not part of a fusion bomb) might in fact trigger some fusion reactions.
I am pretty sure that this is not the case, because someone would have put it into a science article about them (“fission explosions get an extra kick from additional fusion reactions triggered by the fission”). But I’ll leave it as a question anyway: are there fusion reactions triggered by a typical (or even a non-typical) fission detonation?
Owlmirror,
No, there would not be. In regular nuclear fission, a large nucleus splits into smaller nuclei and releases energy and some neutrons. Those neutrons cannot cause nearby nuclei to fuse together.
For a more detailed explanation, see here.
I should have read the linked article by Philip Ball . . .
Document LA-602 is available in several places online, including here:
https://blog.nuclearsecrecy.com/wp-content/uploads/2018/06/1946-LA-602-Konopinski-Marvin-Teller-Ignition-fo-the-Atmsophere.pdf
Konopinski, E. J.; Marvin, C.; Teller, E.
August 14, 1946.
Ignition of the atmosphere with nuclear bombs
Abstract: (NB -- the text is partly from OCR, which was very poor due to the poor quality of the scan. I tried to correct it, and typed some missing parts myself. Errors may have crept in.)
This also came up: Chung, Dongwoo. (The Impossibility of) Lighting Atmospheric Fire. 2015-02-16
https://web.archive.org/web/20151009014631if_/https://large.stanford.edu/courses/2015/ph241/chung1/