The latest from Émile Torres looks at how longtermists have effectively turned to PR and advertising. They have a truly odious philosophy, so they emphasize whatever element will get them the most money. The core of longtermism is the idea that in the far future there could be many, many trillions of hypothetical “people” (who would mainly be artificial intelligences of some sort), and that therefore we should make any contemporary sacrifice we can to maximize the population of machines in the unimaginably distant future. There are a lot of weebly-wobbly rationalizations to be made, since nobody has any idea what strategies now will lead to conquest of the galaxy for human-made computers in some imaginary fantasy future, but somehow the current popular ones all involve sucking up to disgustingly rich people.
Ironically, it grew out of the goal of ending world poverty.
Longtermism emerged from a movement called “Effective Altruism” (EA), a male-dominated community of “super-hardcore do-gooders” (as they once called themselves tongue-in-cheek) based mostly in Oxford and the San Francisco Bay Area. Their initial focus was on alleviating global poverty, but over time a growing number of the movement’s members have shifted their research and activism toward ensuring that humanity, or our posthuman descendants, survive for millions, billions and even trillions of years into the future.
If you’d asked me, I would have said that building a stable, equitable base was a sound way to project human destiny into an unknowable future, but hey, what do I know? The longtermists gazed into their crystal ball and decided that the best, and probably most lucrative, way to defend the future was to pander to the elites.
Although the longtermists do not, so far as I know, describe what they’re doing this way, we might identify two phases of spreading their ideology: Phase One involved infiltrating governments, encouraging people to pursue high-paying jobs to donate more for the cause and wooing billionaires like Elon Musk — and this has been wildly successful. Musk himself has described longtermism as “a close match for my philosophy.” Sam Bankman-Fried has made billions from cryptocurrencies to fund longtermist efforts. And longtermism is, according to a UN Dispatch article, “increasingly gaining traction around the United Nations and in foreign policy circles.”
After all, haven’t billionaires already proven that they will do their all to spread their wealth? OK, maybe the past is a poor guide, but once they’ve perfected brain uploading and have a colony of serfs on Mars, then they’ll decide to let the rest of us have a few crumbs.
The article is largely about one guy, MacAskill, who is the current Face of the movement. His entire career has been one of lying to make his philosophy palatable to the masses, but especially delicious to wealthy donors. From day one he has been shaping the movement through manufactured public relations.
But buyer beware: The EA community, including its longtermist offshoot, places a huge emphasis on marketing, public relations and “brand-management,” and hence one should be very cautious about how MacAskill and his longtermist colleagues present their views to the public.
As MacAskill notes in an article posted on the EA Forum, it was around 2011 that early members of the community began “to realize the importance of good marketing, and therefore [were] willing to put more time into things like choice of name.” The name they chose was of course “Effective Altruism,” which they picked by vote over alternatives like “Effective Utilitarian Community” and “Big Visions Network.” Without a catchy name, “the brand of effective altruism,” as MacAskill puts it, could struggle to attract customers and funding.
It’s a war of words, not meaning. The meaning is icky, so let’s plaster it over with some cosmetic language.
The point is that since longtermism is based on ideas that many people would no doubt find objectionable, the marketing question arises: how should the word “longtermism” be defined to maximize the ideology’s impact? In a 2019 post on the EA Forum, MacAskill wrote that “longtermism” could be defined “imprecisely” in several ways. On the one hand, it could mean “an ethical view that is particularly concerned with ensuring long-run outcomes go well.” On the other, it could mean “the view that long-run outcomes are the thing we should be most concerned about” (emphasis added).
The first definition is much weaker than the second, so while MacAskill initially proposed adopting the second definition (which he says he’s most “sympathetic” with and believes is “probably right”), he ended up favoring the first. The reason is that, in his words, “the first concept is intuitively attractive to a significant proportion of the wider public (including key decision-makers like policymakers and business leaders),” and “it seems that we’d achieve most of what we want to achieve if the wider public came to believe that ensuring the long-run future goes well is one important priority for the world, and took action on that basis.”
Yikes. I’m suddenly remembering all the atheist community’s struggles over the meaning of “atheist”: does it mean a lack of belief in gods, or an active denial that gods exist? So much hot air over that, and it was all meaningless splitting of hairs. I don’t give a fuck about what definition you use, and apparently that means I’m a terrible PR person, and that’s why New Atheism failed. I accept the blame. It failed because we didn’t attract enough billionaire donors, darn it.
At least we didn’t believe in a lot of evilly absurd bullshit behind closed doors that we had to hide from the public.
The importance of not putting people off the longtermist or EA brand is much-discussed among EAs — for example, on the EA Forum, which is not meant to be a public-facing platform, but rather a space where EAs can talk to each other. As mentioned above, EAs have endorsed a number of controversial ideas, such as working on Wall Street or even for petrochemical companies in order to earn more money and then give it away. Longtermism, too, is built around a controversial vision of the future in which humanity could radically enhance itself, colonize the universe and simulate unfathomable numbers of digital people in vast simulations running on planet-sized computers powered by Dyson swarms that harness most of the energy output of stars.
For most people, this vision is likely to come across as fantastical and bizarre, not to mention off-putting. In a world beset by wars, extreme weather events, mass migrations, collapsing ecosystems, species extinctions and so on, who cares how many digital people might exist a billion years from now? Longtermists have, therefore, been very careful about how much of this deep-future vision the general public sees.
The worst part of longtermist thinking is that the population our current efforts are supposed to be aimed at serving is, in the long term, a swarm of digital people — none of whom exist now, and whom we don’t know how to create. Serving. That’s a word they avoid using, because it implies that right now, right here, we are the lesser people. Digital people is where it’s at.
According to MacAskill and his colleague, Hilary Greaves, there could be some 10^45 digital people — conscious beings like you and I living in high-resolution virtual worlds — in the Milky Way galaxy alone. The more people who could exist in the future, the stronger the case for longtermism becomes, which is why longtermists are so obsessed with calculating how many people there could be within our future light cone.
They’ve already surpassed the Christians, some of whom argue that there are more than 100 million (100,000,000) angels. The needs of the many outweigh the needs of the few, remember, so sacrifice now to serve your more numerous betters.
You will also not be surprised to learn that the current goal is to simply grab lots and lots of money by converting rich people to longtermism — this is also how Christianity succeeded, by getting a grip on the powerful and wealthy. Underdogs don’t win, except by becoming the big dogs.
So the grift here, at least in part, is to use cold-blooded strategizing, marketing ploys and manipulation to build the movement by persuading high-profile figures to sign on, controlling how EAs interact with the media, conforming to social norms so as not to draw unwanted attention, concealing potentially off-putting aspects of their worldview and ultimately “maximizing the fraction of the world’s wealth controlled by longtermists.” This last aim is especially important since money — right now EA has a staggering $46.1 billion in committed funding — is what makes everything else possible. Indeed, EAs and longtermists often conclude their pitches for why their movement is exceedingly important with exhortations for people to donate to their own organizations.
One thing not discussed in this particular article is another skeevy element of this futurist nonsense. You aren’t donating your money to a faceless mob of digital people — it’s going to benefit you directly. There are many people who promote the idea that all you have to do is make it to 2050, and science and technology will enable an entire generation to live forever. You can first build and then join the choir of digital people! Eternal life is yours if you join the right club! Which, by the way, is also part of the Christian advertising campaign. They’ve learned from the best grifters of all time.
raven says
Their marketing isn’t that good.
I’m well informed (or at least I read the news and spend time on the internet), and the first I heard of them was on Pharyngula a few weeks ago.
The longtermists looked like yet another group of crackpot trolls, and I just skipped over them.
I refuse to care about 10^45 imaginary digital people who might exist in some far future halfway across the galaxy.
That isn’t the world I live in.
We have to take care of the now and the short term or there won’t even be a long term.
birgerjohansson says
When Stanislaw Lem wrote the first short story about digital sapient entities, he certainly never envisioned something like long-termists.
If he had, they would have featured as bizarre characters in satirical anthologies like The Cyberiad or The Star Diaries, alongside people like the “anti-rumpists”* or robots chasing virtual dragons.
*extreme body modification fashion in the next millennium
jo1storm says
So, basically, a cult? Prophecies, promises of heaven and hell (Roko’s basilisk)… “You don’t get rich writing science fiction. If you want to get rich, you start a religion.”
PZ Myers says
I have been informed that the Necron Empire features these things: The Canoptek Spyder. That’s all it takes, I’m on their side now.
PZ Myers says
And yes, it’s definitely a cult.
Corey Fisher says
I had a friend who was deeply into EA for a long time (she moved out to the Bay Area to live on a boat so she could Be There, Where It Was Happening). She stuck by the principles about trying to maximize charity for the rest of an unfortunately short life, but well before the end she discarded the EA community and label. This PR stuff is why.
“We’re the people whose goal is to save the world, so it’s best if everyone wants to support us. Since it’s best if everyone wants to support us, it’s best for everyone if nothing comes out against that.” How I heard it was that a big brouhaha erupted in the community over this, once it turned out that many people in the community were abusing it, that some of their techniques simply didn’t work, or that data on the effectiveness of different charities (a big idea was “where can you put your donations to do the most good”) was just manipulated… and well-respected people knew about this and hid it, because if people knew EA was failing then they wouldn’t want to be part of EA, and that would mean fewer people doing The Good Thing, which is being in their community, because by their nominal goals they are automatically The Good People.
I don’t mind the idea of trying to think about doing good. But these people could benefit from some self-reflection and epistemic humility.
Pierce R. Butler says
… EA has a staggering $46.1 billion in committed funding …
And just what will they do in the 21st century with all that money (besides claiming the acronym of Electronic Arts™)?
dWhisper says
The Necrons also killed and/or turned their gods into weapons, because they were just trouble. So… there’s that. And one of them basically plays Pokemon with the entire universe. Way cooler than the bullshit-peddling longtermists.
Marcus Ranum says
How will we tell a digital person from a simulation of a person? I can ask the thing “who are you?” and it can reply that it’s me, but it’s not, because I’m me and I’m not a digital simulation – I’m a meat robot.
The interesting moment, to me, is when the original/source is turned off, i.e. the human is killed. There’s still a human dying, and the meat robot is programmed to survive.
Akira MacKenzie says
I’m still rooting for the T’au.
StevoR says
The Necron-Oh-my-Con?
@9. Marcus Ranum : Turn off the power & see which one is still there?
Or reach out and, reach out and touch somebody: if there’s a body there, they’re physically human (Homo sap) and likely a meat person, not a hologram or simulation. And there’s Samuel Johnson’s “I refute it thus!” kicking-a-stone reply to Berkeley, which works to a certain degree of verisimilitude with currently possible tech?
grahamjones says
“You won’t mind my calling you Comrade, will you? I’ve just become a socialist. It’s a great scheme. You ought to be one. You work for the equal distribution of property, and start by collaring all you can and sitting on it.”
― P.G. Wodehouse, Mike and Psmith
cormacolinde says
The most baffling thing to me is why they care so much about maximizing the number of fantasy people. Why are larger numbers supposed to be better? If all we cared about was numbers, shouldn’t we maximize the number of ants on Earth instead of humans? The planet can certainly support more ants than humans.
Pierce R. Butler says
… a controversial vision of the future in which humanity could radically enhance itself, colonize the universe and simulate unfathomable numbers of digital people in vast simulations running on planet-sized computers …
Ayn Rand plagiarizing Olaf Stapledon, without the drama or vision of either.
cartomancer says
The other thing to bear in mind about the Necrons, apart from the fact that most of them are quite, quite mad after millennia of hardware degradation, is that the few who aren’t are trying desperately to find a remedy for their soulless, robotic condition. Those that still have a sense of self don’t really want to be robotic artificial intelligences. Who, one wonders, actually would?
Except Orikan the Diviner. He is quite happy abusing his secret and supremely powerful time travel abilities just to make his crappy fortune-telling come true.
charlesanthony says
I, for one, welcome our new digital overlords.
cubist says
Being concerned over how things are gonna be in the future is a good thing. I mean, not being concerned about future conditions is how you end up with global warming and rivers on fire, you know? Being concerned about future conditions which may or may not be wholly hypothetical/fictional… yeah, that’s not so good…
PZ Myers says
#15: Yeah, but…robot spiders.
birgerjohansson says
If you have that AI capacity, I am totally stealing it and making 100,000,000 clones of Agent Smith!
Hello, Mr. Anderson…
birgerjohansson says
If PZ likes robot spiders, I know an SF novel by wossname (must look it up, it was a few years ago).
birgerjohansson says
Spiders.
Timothy Zahn has written the Frank Compton/quadrail SF series with spider robots working for aliens, in an interstellar transport network.
ORigel says
@13
Longtermists are utilitarians. Utilitarians want to bring about the most good. Everyone’s moral systems should be influenced by Utilitarianism, in my opinion, but then there is extreme Utilitarianism.
Extreme Utilitarians see sentient beings as containers for “utility points” (the units of goodness– often pleasure). Their goal is to maximize the number of utility points. Utility points need to exist in a sentient being, so to Extreme Utilitarians, the more sentient beings the better. A simulation of 10^43 humanlike beings in orgasmic bliss in the far future would far outweigh the moral significance of real people in the here and now. And the way to serve them is to give tons of money to longtermist grifters promoting Singularity bullshit (the techno-Rapture).
Some Extreme Utilitarians don’t count suffering as negative utility points, so a trillion people being tortured is better than a million happy people. “Just shut up and multiply!”* 0.1 utility points in a trillion containers makes 100 billion utility points, while 100 utility points in a million containers makes only 100 million utility points.
*From the Longtermist Eliezer Yudkowsky’s conception of ethics
shermanj says
This digital eternal life was the exact subject of an X-Files episode from a few years ago that aired last night. As this one will be, that one was corrupt and deceitful, and it made slaves of many of those ‘uploaded.’ Yes, this is all smoke and mirrors (and a grand scam) from the obscenely wealthy. We can’t even keep the sheeple safe on the internet today. How will they make their ‘megalomaniacal metaverse’ safe? Billions of angels dancing on a ‘virtual’ head of a pin? WTF
ORigel says
@17
Longtermists say they care about the needs of a billion billion trillion imaginary people over the needs of eight billion people who actually exist, and their descendants who will definitely exist soon. They don’t care much about climate change, only to the degree that it imperils the potential existence of 10^40 simulated beings.
ORigel says
@23
I really hope that mind uploading is a pipe dream, for that very reason.
snarkrates says
These guys remind me a little of the rookie driver who looks so far down the road that they slam into the parked car at 50 miles an hour. Except it’s worse, because they’re the ones who parked the car there in the first place. They are rich because of capitalism, and capitalism is why in a couple hundred years humans will be surviving as small bands of hunter-gatherers on the margins of what used to be our civilization.
springa73 says
Effective altruism and longtermism seem particularly insidious to me because they include some ideas that I find appealing. Like cubist @17 says, being concerned about the long term is good in many ways. As someone who considers myself a humanist, a long and successful and generally happy future for humankind is certainly something that I would support. Some of the things that these people are extrapolating from these (good) basic principles are, however, just bonkers. Even if a future like they describe, with vast numbers of sentient beings descended from humanity living across many galaxies (virtually or physically), is even possible, it’s just much too speculative and too far into the future to be used as a guideline for current actions and policy. There are many huge challenges that humanity faces in the coming decades and centuries, and dreams about vast numbers of virtual beings living across galaxies aren’t going to be that helpful in solving those problems.
jenorafeuer says
shermanj@23:
Back in 1987, Max Headroom had an episode (‘Deities’) about a similar prospect: a church that uploaded the personalities of its members in preparation for a coming resurrection. It was, of course, all a scam: the upload technology wasn’t actually good enough to capture anything more than a few surface bits, which were enough to use for a publicity campaign to get more people to sign up.
dangerousbeans says
Worrying about people who might exist does seem to be a great way to get out of worrying about people who do exist. It works for the anti-abortionists too.
John Morales says
https://www.schlockmercenary.com/2017-07-27
unclefrogy says
If they actually believe that (I assume some do), what is so attractive about “life” without a real flesh and blood body here and now?
If they do not actually believe it, then they are just another variety of parasite, feeding on the gullible with delusions and making up a religion without gods but with virtual eternal life.
unclefrogy says
If we do not “solve” the problems of living here in the now, there will not be any descendants in the far, far future to worry about.
birgerjohansson says
Depressing; in the recent Swedish election, climate change was absent from the debate.
StevoR says
@17. cubist :
Quoted for truth and very neatly summed up. Ditto #29, dangerousbeans: “Worrying about people who might exist does seem to be a great way to get out of worrying about people who do exist. It works for the anti-abortionists too.”
@12. grahamjones : What has this got to do with either socialism or P.G. Wodehouse here, by Jeeves?
Marissa van Eck says
The thought occurs: the next step after coming up with Roko’s Basilisk is to defend yourself from the Basilisk by becoming the Basilisk.
John Morales says
Marissa, I reckon the Eschaton will preempt the Basilisk.
No pointless torture, because:
“I am the Eschaton; I am not your God.
I am descended from you, and exist in your future.
Thou shalt not violate causality within my historic light cone. Or else.”