All these raving mad techbro loonies keep ranting about how AI, unless properly nurtured (and paid for), might lead to extinction, and how AI ought to be a high priority for humanity (meaning “give us money”), and it’s confusing, because they use words differently than normal people do. In particular, the word “extinction” means something very different from what a biologist might understand it to mean.
When TESCREALists [transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism] talk about the importance of avoiding human extinction, they don’t mean what you might think. The reason is that there are different ways of defining “human extinction.” For most of us, “human extinction” means that our species, Homo sapiens, disappears entirely and forever, which many of us see as a bad outcome we should try to avoid. But within the TESCREAL worldview, it denotes something rather different. Although there are, as I explain in my forthcoming book, at least six distinct types of extinction that humanity could undergo, only three are important for our purposes:
Terminal extinction: this is what I referenced above. It would occur if our species were to die out forever. Homo sapiens is no more; we disappear just like the dinosaurs and the dodo before us, and this remains the case forever.
Final extinction: this would occur if terminal extinction were to happen — again, our species stops existing — and we don’t have any successors that take our place. The importance of this extra condition will become apparent shortly.
Normative extinction: this would occur if we were to have successors, but these successors were to lack some attribute or capacity that one considers to be very important — something that our successors ought to have, which is why it’s called “normative.”
The only forms of extinction that the TESCREAL ideologies really care about are the second and third, final and normative extinction. They do not, ultimately, care about terminal extinction — about whether our species itself continues to exist or not. To the contrary, the TESCREAL worldview would see certain scenarios in which Homo sapiens disappears entirely and forever as good, because that would indicate that we have progressed to the next stage in our evolution, which may be necessary to fully realize the techno-utopian paradise they envision.
I think maybe “we” and “our” might mean something different to them, too, because the words don’t include me or my family or my friends or even distant acquaintances. Heck, they probably don’t include most of the life on this planet.
Later in his book, MacAskill suggests that our destruction of the natural world might actually be net positive, which points to a broader question of whether biological life in general — not just Homo sapiens in particular — has any place in the “utopian” future envisioned by TESCREALists. Here’s what MacAskill says:
It’s very natural and intuitive to think of humans’ impact on wild animal life as a great moral loss. But if we assess the lives of wild animals as being worse than nothing on average, which I think is plausible (though uncertain), then we arrive at the dizzying conclusion that from the perspective of the wild animals themselves, the enormous growth and expansion of Homo sapiens has been a good thing.
“The lives of wild animals as being worse than nothing on average”
…who assesses that “worse”? People? TESCREALists? I was just watching an adorable little Theridion constructing a cobweb in a signpost — what was “worse” about that? It’ll probably thrive all summer long and leave behind a family of spiderlings who I’ll see building cobwebs next summer.
I don’t think the monarch butterflies and mayflies consider the expansion of Homo sapiens to be a good thing either — they’re dying and declining in numbers. Were passenger pigeons grateful for what we brought to them? I think MacAskill is playing a weird numbers game here. He thinks he can arbitrarily assign a value to an organism’s life, either negative or positive or “average” (relative to what, I have no idea), and if it’s less than zero…pffft, it’s OK to exterminate them.
People who think that way about animals tend to eventually do the same thing to people, you know.
So where does this leave us? The Center for AI Safety released a statement declaring that “mitigating the risk of extinction from AI should be a global priority.” But this conceals a secret: The primary impetus behind such statements comes from the TESCREAL worldview (even though not all signatories are TESCREALists), and within the TESCREAL worldview, the only thing that matters is avoiding final and normative extinction — not terminal extinction, whereby Homo sapiens itself disappears entirely and forever. Ultimately, TESCREALists aren’t too worried about whether Homo sapiens exists or not. Indeed, our disappearance could be a sign that something’s gone very right — so long as we leave behind successors with the right sorts of attributes or capacities.
Again, the extinction they speak of is not the extinction we think of. If their strategies lead to the death of every person (and animal!) on the planet, but we leave behind blinking digital boxes that are running simulations of people and animals, that is a net win.
I’m beginning to worry about these people. If I assign them a value of -1, will they all conveniently disappear in a puff of smoke?