Émile Torres explores the relationship between longtermism/effective altruism and scientific racism.
Longtermism, which emerged out of the effective altruism (EA) movement over the past few years, is eugenics on steroids. On the one hand, many of the same racist, xenophobic, classist and ableist attitudes that animated 20th-century eugenics are found all over the longtermist literature and community. On the other hand, there’s good reason to believe that if the longtermist program were actually implemented by powerful actors in high-income countries, the result would be more or less indistinguishable from what the eugenicists of old hoped to bring about. Societies would homogenize, liberty would be seriously undermined, global inequality would worsen and white supremacy — famously described by Charles Mills as the “unnamed political system that has made the modern world what it is today” — would become even more entrenched than it currently is.
I would have predicted the connection long ago. EA trips a whole bunch of red flags in my head.
- The incessant chatter about IQ. We don’t know what IQ is, other than a number generated by an IQ test, so making the concept central to your philosophy is a bit like building your reason for living on phrenology. Sure, you can actually measure the bumps on your skull and use scientific-looking tools like calipers and quantitatively calculate their dimensions, but does it mean anything about how your brain works? No, it does not. At the first mention of IQ, run away.
- The lack of relevant qualifications. Look at the big guns of EA: Bostrom, MacAskill, Yudkowsky, Alexander, Hanson (I’ll even toss in Sam Harris, although he doesn’t seem to be deeply involved in EA). Do any of them have any background in genetics at all? They do not. Yet they go on and on about dysgenics and eugenics and trends in populations that have to be countered, or they defend Charles Murray’s (also not a geneticist) racist interpretations of traits of whole populations. This problem goes all the way back to the founders of the eugenics movement, who either weren’t geneticists at all, like Francis Galton, or immediately used the crudest, most primitive forms of Mendelism to justify bad science, like Davenport.
- Transhumanism as a tool for improving humanity. I have some sympathy for the idea of individuals modifying their own genes and bodies; that’s a fine idea, and I wish we had greater capabilities for it. Where I have problems is when it’s seen as a method of social engineering. Underlying it all is a set of value judgments defining how we should regard diversity in our fellow human beings. If you’re arguing we ought to use gene therapy or drugs to eradicate obesity, or autism, or color-blindness, or whatever, you’ve already decided that a whole lot of existing attributes of the human population are dysgenic or undesirable, yet you don’t know what all the correlates of those traits might be. You’re also viewing those people through a lens that highlights everything about them that you personally consider bad.
The thing is, we’re all born with a range of traits that are basically random, within certain limits. Everything about you, all 20,000 genes, is a roll of the dice. A philosophy that does not insist that every combination deserves equal respect, equal justice, and equal compassion is an anti-human philosophy, because it denies a fundamental property of our biology.
Those are just the general red flags that can be thrown by a whole suite of common ideas. EA throws one that I would never have imagined anyone would take seriously, this bizarre idea that we ought to consider the happiness of hypothetical, imaginary human beings far more important than the happiness of real individuals in the here and now. I can’t even…this is crazy cultist bullshit. I do not understand how anyone can fall for it. Except…yeah, they’re using the universal excuses of the modern Enlightenment.
And no one should be surprised that all of this is wrapped up in the same language of “science,” “evidence,” “reason” and “rationality” that pervades the eugenics literature of the last century. Throughout history, white men in power have used “science,” “evidence,” “reason” and “rationality” as deadly bludgeons to beat down marginalized peoples. Effective altruism, according to the movement’s official website, “is the use of evidence and reason in search of the best ways of doing good.” But we’ve heard this story before: the 20th-century eugenicists were also interested in doing the most good. They wanted to improve the overall health of society, to eliminate disease and promote the best qualities of humanity, all for the greater social good. Indeed, many couched their aims in explicitly utilitarian terms, and utilitarianism is, according to Toby Ord, one of the three main inspirations behind EA. Yet scratch the surface, or take a look around the community with unbiased glasses, and suddenly the same prejudices show up everywhere.
“Science,” “evidence,” “reason” and “rationality” are supposed to be tools to help lead you to the truth, but it’s all too easy to decide you already possess the truth, and then they transform into tools for rationalization. That’s not a good thing. You can try to rationalize any damn fool nonsense, and that’s the antithesis of the scientific approach.
Strewth says
The biggest hole I see in the “We must adopt an ethics where the people now sacrifice for the happiness of future people” idea is that when these future people exist, they will live in a culture that tells them to sacrifice their happiness for people yet farther in the future.
All it produces is a larger number of unhappy people.
feralboy12 says
What if the people of the future are a bunch of jerks? We’d be sacrificing to maximize the happiness of a bunch of jerks.
StevoR says
It’s one thing to consider the future in what you choose to do; it’s a whole other thing again to sacrifice other people in the right now for it.
It’s also kinda… weird? Ironic? Odd? Telling? How little those pushing eugenics for the sake of imagined future people seem to want to do about Global Overheating, the Anthropocene Mass Extinction, biodiversity loss, and habitat destruction, which are likely to affect that distant future a lot more obviously and badly, and which need acting on right now.
raven says
What we do know is that IQ is very plastic and malleable.
Meaning a huge number of variables can change scores on IQ tests for whole populations.
I’ve dealt with this before so will just repost it once again.
IQ is very malleable and changeable and there is a heritable component but it isn’t all that high.
In Realityland, if you want a racial/ethnic group that almost always scored low on IQ tests, look at…the Irish. Historically, the Irish have been the poster group for dumb people.
https://www.theamericanconservative.com/raceiq-irish-iq-chinese-iq/
Well, OK the Irish for many decades scored low on IQ tests.
This rapid convergence between Irish and British IQs should hardly surprise us. According to the GSS, the Wordsum-IQs of (Catholic) Irish-Americans rank among the very highest of any white ethnic group, with a value almost identical to that of their British-American ethnic cousins.
It turns out that lately, Irish IQs have been rising and are now equal to the British. They are equal to WASPs in the USA.
The point that can easily be made over and over again is that IQ is very changeable and malleable.
It is affected by many variables such as socio-economic status, early childhood nutrition, early childhood upbringing, education quality, culture, and so on.
We can say that there is a heritable component to IQ, but it isn’t really all that high.
timgueguen says
Strewth@1 you can be sure that the people at the top of any such system will sacrifice less than everyone else, as usually happens in authoritarian/totalitarian systems.
raven says
Yeah, I don’t get this one either.
A huge number of mostly white males seem to find it very important to say over and over again, “Blacks score low on IQ tests.”
Well, so what.
Historically, a huge number of groups have also scored low on IQ tests.
.1. Half of all white Americans score below average on IQ tests, since that is what a median means.
So not all whites are dumb, just half of them.
.2. The Irish are the poster people for scoring low on IQ tests and being dumb.
Except these days, the Irish now score equal to British and American whites and are now honorary white people.
.3. In the past, groups that scored low on IQ tests and were considered inferior humans include Greeks, Italians, Spanish, Portuguese, Germans, Jews, Slavs, and Mexicans.
If it is a dodgy metric to quote in the first place and you have no intention of doing anything about it, then why spend a huge amount of time mentioning it?
The mass shooter in Buffalo, New York, who targeted Black people said he did it because he was a racist white supremacist.
That is doing something about it, I suppose, but mass murder and genocide are things we avoid these days. We aren’t Russians; we are better than that.
What we actually can do, should do, and in some cases actually do is provide help and interventions for disadvantaged children of any ethnic background, because we know IQ is changeable to some extent. Things like Head Start programs, adequate neonatal nutrition, stable living arrangements, and so on.
Raging Bee says
…this bizarre idea that we ought to consider the happiness of hypothetical, imaginary human beings far more important than the happiness of real individuals in the here and now. I can’t even…this is crazy cultist bullshit.
It’s ESCAPIST bullshit, dreamed up by people who simply can’t deal, and just don’t want to deal, with the grubby complex present-day reality where they’re not the most importantest people at the center of all attention and action. They’re pretending to care about imaginary people in an imagined, unforeseeable far future, because they can’t deal sensibly with real people in the real present and foreseeable future, and have absolutely nothing worthwhile to offer to real people in the real world.
raven says
There is nothing wrong with being concerned with the future and trying to make the future earth a better place. Almost all of us do that. I’d guess that is why PZ Myers started and runs Freethoughtblogs.
It isn’t like the Long Termists have an original or novel idea here.
They then take a simple idea and go right off the rails.
.1. The best way to make sure the long term is good is…to make sure the near term is good!!!
We can’t do much about the Long Term since it is unknowable and we won’t be there.
But we can do a lot about the present and near term future.
The best way to even have a Long Term is to make sure we have a Short Term.
The Far Future comes out of the present and Near Future, and there aren’t any continuity breaks in time.
.2. The Long Termists go on and on about trillions of imaginary people that might exist in the far future. Those people don’t exist and might well never exist. They are imaginary.
The people alive today and the ones who will be alive for decades in the future are real people. They have lives too and they matter a lot.
If we are serious about improving the lives of humans, we need to start with real humans.
There is a huge amount we can do to improve the lives of the current 8 billion people on this planet.
Marcus Ranum says
We know that ChatGPT outperforms humans on SATs and IQ tests – so whatever “intelligence” is, humans are low on the spectrum, 30 points behind AIs, arguing about 1 or 2 point differences.
Marcus Ranum says
And again I need to stick up for Rousseau’s point about natural inequality: if someone is born less smart than another, society’s job is to lift them up and support them, not to rain shit and derision upon them.
Scientific racists are just trying to ratify the status quo: white people are at the top of the power structure because apparently they are more violent and rapey than everyone else, therefore, uh something, they are better at climbing over heaps of bodies. White man’s burden is to be a horrible bunch of assholes.
LykeX says
Any plan of sacrifice that doesn’t include the person suggesting it volunteering to be first is highly suspect.
birgerjohansson says
If we want to provide for people of the future maybe we should mimic that South Park episode, set up a time portal and let people from the future migrate here to work.
.
BTW, if there is a genetic correlate for feeling empathy and pity, we might need to make extra copies of those genes, just as we might make extra copies of the TP53 gene against cancer.
jo1storm says
This one was about abortion but I think it fits here:
When they talk about trillions and billions of “future people,” they are literally talking about people not yet born. And I consider the problems of non-existing not-people to be not as important as the rights and problems and thoughts of existing people.
hemidactylus says
Umm…let’s not throw out the yet to be born babies with the bathwater of long-termism. There is an argument to be made for intergenerational responsibility in both temporal directions. Historic preservation (not of confederate statues or Stone Mountain) and righting the past wrongs of bigotry go in one direction. Environmental regulation, fixing current injustices, and building infrastructure go in the other. Just because some rich assholes have a warped sense of futurity (wanting to swamp the gene pool with their allegedly superior seed in some pronatalist eugenicist scheme) doesn’t mean we don’t have intergenerational duties largely unmet. One could say some antinatalist arguments for not bringing kids into this vale of tears or to meet a certain inevitable death are a kind of moral obligation. Fuck long-termism itself, but don’t totally fuck up the world for future generations. Why reduce our carbon footprint if @13- jo1storm‘s “I consider problems of non-existing not-people to be not as important as the rights and problems and thoughts of existing people” holds?
jo1storm says
Personal carbon footprint I don’t care about, because it is a metric that was invented by oil companies and all sorts of polluter corporations to take the heat off what they are doing to the planet. One corporation’s carbon footprint is more important than a million people’s. Which is part of the point, because that corporation is certainly not thinking of the rights and problems and thoughts of existing people, and a corporation is definitely in the not-people category. People working for corporations are people, but corporations themselves are not people.
It kinda holds, because if current people care about their future and the future of their children and grandchildren, then those thoughts and problems are important. Which is a 50-year horizon, more or less. As simple as that. If they don’t care, then there are plenty of people who do care and think about that. Which is quite different from the “long termist” idea of maximizing the future happiness of non-existent people hundreds and thousands of years in the future.
Raging Bee says
“The unborn” are a convenient group of people to advocate for. They never make demands of you; they are morally uncomplicated, unlike the incarcerated, addicted, or the chronically poor…
…or newborn babies, who immediately start crying and fussing and acting all entitled and shit, and show absolutely no gratitude toward hard-working adults.
Just because some rich assholes have a warped sense of futurity (wanting to swamp the gene pool with their allegedly superior seed in some pronatalist eugenicist scheme) doesn’t mean we don’t have intergenerational duties largely unmet.
No one here is saying any such thing — we are, in fact, acknowledging that this “warped sense of futurity” is, in fact, nothing but a self-serving fantasy nurtured for the sole purpose of ignoring and disregarding real intergenerational duties toward both real people and people we know will be born very soon and who will need real accommodation.
chrislawson says
I know it’s mostly a matter of wording but…
I agree completely that people should be given equal respect and justice regardless of their genetic combinations, but I can’t go as far as respecting the combinations themselves when they include Huntington’s disease or epidermolysis bullosa. I would be in favour of eradicating these conditions if it could be done without the nightmare ethics of eugenics — say, if we one day develop effective gene therapy. Of course, most genetic conditions are not nearly so dramatically awful and this level of confidence only applies to a tiny subset of genetics.
Having said that, I completely agree that we need to reframe teaching of genetics (not meaning you, PZ, but especially at high schools and medical schools) to recognise that Mendel is just the tip of the iceberg, that every organism including humans has a shuffle of genes rather than a preordained ideal mix, that there is no such thing as a “pure” or “perfect” set of genes, that every one of us carries a handful of lethal recessive genes that could match up badly with any potential reproductive partner, that any attempt to eradicate all potentially harmful alleles would also aggressively constrain our species’ genetic variability, and that even the most ostensibly fit and successful humans owe a lot more to their environment than their genes.
We tend to teach Mendelian genetics at school because it is important and its elements are easy to grasp, but it leaves far too many people thinking genetics is simple without realising that Mendelian inheritance is in fact a highly simplified model even on its own terms (by which I mean even classic gene examples used to teach Mendelian inheritance don’t behave as discretely and predictably as they are presented: ABO/Rh blood type inheritance is a common teaching example usually presented as absolutely deterministically inherited once the gametic shuffle of genes has taken place, but it is in fact very far from simple: there are large populations for whom the standard tests are not reliable with serious clinical implications, and even more dramatically, people with acute myeloid leukaemia have been known to change blood type). We also need to push back on the simplistic and often racist ideas that seem to swirl around the ancestry DNA testing industry.
Imagine for a moment that the usual eugenicists with their simplistic Laplacian view of genetics managed to bring in broad population screening for common genetic markers that are considered “less fit”. This would almost certainly include the HLA variants associated with type 1 diabetes…which might have prevented the birth of Alexander Zverev, diagnosed with T1DM at age 3, told he could never be a professional athlete because of it, and yet managed at one point to be the #2 ranked male tennis player in the world. So much for the deterministic predictive value of genes on “fitness” (and yes, I know I’m mixing the sport and reproductive meanings of “fitness” here, but I have no qualms in this instance since this is exactly what the eugenicists and so-called “race scientists” do all the time with their barely-disguised Olympic Aryan norms).
chrislawson says
(I should also add that blood types are not classically Mendelian as Mendel didn’t account for co-dominant alleles such as AB blood type, but it’s still often taught as Mendelian.)
chrislawson says
jo1storm@15–
Thousands of years? One of the longtermists was talking about having a moral obligation to our 10^n descendants in the Virgo Supercluster. That’s 65 million light years away!
(…where n is a number pulled from his arse)
Raging Bee says
chrislawson: Yeah, at least some of those longtermites are “counting” humans “living” in some glorious future virtual reality, not just organic humans in meatspace. Which just goes to show how totally disconnected from any sort of reality they really are.
gjm11 says
I’m a bit baffled by some of PZ’s statements about EA. (Disclaimer: I am not affiliated with any EA organization, don’t hang out at e.g. the EA Forum, but am generally sympathetic to the movement.)
“Incessant chatter about IQ”. What incessant chatter about IQ? I mean, of course there’s plenty at the moment because everyone’s talking about the Bostrom email, but beyond that? If I search the EA Forum for posts that mention IQ, then aside from recent Bostrom discussion the sort of thing I find is https://forum.effectivealtruism.org/posts/SgqBAeoCbQeLxmMoj/targeted-treatment-of-anemia-in-adolescents-in-india-as-a (maybe we should direct resources to treating anaemia in Indian adolescents because anaemia is bad for lots of things, one of which is cognitive performance) and https://forum.effectivealtruism.org/posts/m2tgGev5XpkTkvERL/cause-area-developmental-cognitive-neuroepidemiology-1 (maybe we should direct resources to studying diseases that affect early brain development), neither of which sounds (a) unreasonable, (b) eugenicist, (c) racist, or (d) particularly IQ-obsessed.
(There is some other more “chattery” stuff. For instance, the very first search result I get is a post entitled “Consider raising IQ to do good”. The post itself isn’t terribly unreasonable, but more to the point it’s from 9 years ago. But of the chattery stuff, it looks to me as if at least as much is saying “we should be thinking less about intelligence” as is actually chattering about intelligence. And one of the first-page search results that looks like IQ-chatter actually turns out to be an April Fool.)
Toby Ord’s book “The Precipice” seems like a reasonable place to look for somewhat-representative longtermist EA opinions. It doesn’t mention IQ at all. There are lots of mentions of “intelligence”, almost all of them as part of the phrase “artificial intelligence”, and none of them talking about e.g. differences in human intelligence, or suggesting that humans should be made more intelligent, or anything of the kind.
If you tell me there’s more chatter about IQ among EAs than (say) among the average group united by a common interest — say, Hindus or communists or philologists — then I’ll happily believe you. But “incessant” doesn’t seem like it fits the facts.
“Lack of relevant qualifications”. I’m sure it’s 100% correct that the people PZ mentions don’t have qualifications that would give good reason to pay attention to what they say about eugenics. But so far as I can tell most of them don’t in fact say much about eugenics (e.g., PZ mentions Scott Alexander and Eliezer Yudkowsky; I’m about 80% confident SA is in favour of some versions of eugenics[1] and about 95% confident EY is, but so far as I can tell neither talks about it much[2]). And I don’t understand in what sense e.g. SA and EY are “big guns of EA”. They’re somewhat-prominent people (SA much more than EY these days, I think) who are publicly in favour of EA, but I don’t see how that makes anything they do EA’s fault. Likewise Robin Hanson and Sam Harris.
[1] Of the “provide positive incentives for some people to have more children” kind rather than the “mass murder” kind. Still problematic for all sorts of reasons that I’m sure they are familiar with and probably have responses to, but several steps short of “literally Hitler”.
[2] Torres cites three things from SA and four from EY, most of which have nothing much to do with eugenics or genetics. I don’t think it’s productive to complain that someone occasionally mentions a topic on which they are not a credentialled expert. (Does PZ need credentials in political science to write the occasional blog post saying that the people running North Dakota are fascists because they’re trying to ban books, or credentials in engineering to write one saying that Tesla’s “self-driving” cars are dangerous? Of course not.)
I read the Emil Torres piece PZ links to. It consists mostly of guilt-by-association. Toby Ord is a leading longtermist. — Toby Ord writes that we should hope that humanity will transform itself into something better. — One of the earliest advocates of transhumanism was Julian Huxley. — Julian Huxley was a eugenicist. — “Therefore” longtermism is a form of eugenics. (“Eugenics on steroids”, as Torres puts it.) Or: Sam Harris has defended Charles Murray’s racist views. — The Future of Life Institute once put on a public discussion with both Sam Harris and Nick Bostrom on the panel, and Harris wrote a blurb for Will MacAskill’s book. — Nick Bostrom and Will MacAskill are leading longtermists. — “Therefore” longtermism is deeply infected with Murray-style racism.
This is the same sort of reasoning that right-wingers used when they claimed that Barack Obama was “palling around with terrorists”, or that fundamentalist Christians use when they blame the Nazis’ murderous eugenics programme on Charles Darwin.
StevoR says
@19. chrislawson:
Well, technically, because it’s very, VERY large, the Virgo Supercluster is also here too:
Source: https://en.wikipedia.org/wiki/Virgo_Supercluster
Emphasis added.
The Virgo Cluster is about 53 million ly away and contains (probably?) the nearest giant elliptical galaxies, incl. Messier 87.
But yeah. The idea of galactic colonialism and conquest is problematic even for “just” our Milky Way, let alone the Local Group of galaxies, let alone the more distant parts of the Virgo Supercluster, to say the least!
raven says
@21
The EA/Long Termist movements aren’t even close to as pure as you are claiming.
FFS, have you ever heard of Google? It’s a search engine.
It took me a few seconds to turn up all sorts of links between EA/LT and various racists.
Here is one.
This flat-out Nazi is Kevin DeAnna, who declared that his goal is “total Aryan victory.”
Well, OK gjm11, I would hope that a statement like that isn’t normal in your fantasy world, but I wouldn’t bet on it.
Peter Thiel is a cuckoo lunatic fringe figure.
He is also very rich and one of the key financial backers of EA/Long Termists as well as the GOP and various racist Loonytarian organizations.
You can’t have leaders of a movement like this and then claim they aren’t racist.
And yeah, Bostrom is one of the key figures and an admitted vicious racist.
raven says
Another prominent supporter of Long Termism is…Elon Musk.
Since he bought Twitter, it has been overrun by Russian trolls, racists, antisemites, and outright Nazis.
Musk himself is walking his talk and conducting his own Eugenics program. I think we are up to 9 children by 3 different women now.
The English language lacks the words to express my complete contempt for Elon Musk.
If he is for something or involved in something, that is an enormous red flag.
John Morales says
gjm11 @21, your google-fu is weak.
A few seconds’ worth of searching yields:
Population with high IQ predicts real GDP better than population …
https://forum.effectivealtruism.org › posts › population-
Take part in our giant study of cognitive abilities and get a …
https://forum.effectivealtruism.org › posts › take-part-in...
Consider raising IQ to do good – Effective Altruism Forum
https://forum.effectivealtruism.org › posts › consider-ra...
Does it matter that some EAs think black people are stupider …
https://forum.effectivealtruism.org › posts › does-it-mat...
Predicting Polygenic Selection for IQ – EA Forum
https://forum.effectivealtruism.org › posts › predicting-...
[several hundred other articles not included in this selection from the top of the list]
GerrardOfTitanServer says
I came here to write something, but chrislawson in 17 has it covered.
gijoel says
I’ve become very skeptical of any movement that Peter Singer is involved in after he said it was okay to rape a severely intellectually disabled man, as he probably enjoyed it.
chrislawson says
StevoR@22–
Thank you. I worded that badly. But I think we can both agree that the point still stands. Some longtermists are trying to count humans from civilisations millions of light-years away and lord knows how far in the future. It’s so far away in both space and time as to be completely unknowable, and also ridiculous — nobody holds our Australopithecus ancestors to moral account for the quality of life of humans living on distant continents today, and they were a lot closer in both time and space to our possible intergalactic descendants.
raven says
I just read it too.
It does no such thing.
He knows what he is talking about, knows the people since he was once one of them, and makes his case with extensive quotes. It’s worse than I thought, and I already thought these were internet-class trolls with offices in places like Oxford.
A sample quote from Torres.
I never even heard of EA/LT until a few weeks ago, when Bostrom showed up on the front pages. I already despise it as simple-minded and evil.
MacAskill and Musk are just wrong here.
What is limiting technological progress isn’t the earth’s population size of 8 billion. It’s money, resources, etc. There are billions of smart people born into poverty or from disadvantaged backgrounds who never get to a famous university like Einstein did, in Europe or the USA.
When these trolls talk about Transhumans, how do you get Transhumans. From Amazon.com???
You don’t order them online.
You make them. By Eugenics most likely. Selective breeding. Genetic engineering. Cyborgs. Most likely all three.
Bostrom is an idiot troll as well as a racist.
The correlation between educational attainment and smaller number of children has been around for a while.
In that time, the Flynn effect has been seen.
IQs are going up steadily in real time in the developed world.
He is empirically wrong and too dumb to ask a geneticist what is actually happening.
gjm11 says
@raven #23, I never made any claim about anyone or anything being “pure”. But the Thiel thing is yet more guilt-by-association. Kevin DeAnna is an outright fascist! Peter Thiel had dinner with Kevin DeAnna and enjoyed his company! Peter Thiel has given money to some organizations that EAs have also given money to! Therefore EAs are racist!
I agree that Kevin DeAnna is an outright fascist. I agree that Peter Thiel is at least kinda fascist. I do not agree that any of this tells us anything about the political or social views of the EA community as a whole. So far as I can make out, the actual Thiel-EA links are as follows. 1. For a while he was a major donor to the Machine Intelligence Research Institute, an organization some AI-safety-focused EAs also donate to. My understanding is that he hasn’t been a MIRI donor since ~2015. 2. He was, and maybe still is, a major donor to Leverage Research, which about 10 years ago was associated with EA (and e.g. sponsored some EA Summit meetings). My understanding is that Leverage Research has had no association with EA for some years now. 3. Perhaps on account of #2, he was a keynote speaker at EA Summit meetings in 2013 and 2014. 4. That’s all. It’s quite apparent now that Thiel is at least kinda fascist, but I don’t think it was so apparent in 2013 or 2014 (though there were warning signs before then), which seems to have been when he last had any association with the EA movement unless you count “Thiel donates to Leverage Research, and Leverage Research claims to be EA-aligned”. I’m pretty sure there is zero danger of Thiel being a keynote speaker at any EA conferences in the foreseeable future. #1 and #2 seem to me no more indicative of anything wrong with EA than the fact that Hitler was a vegetarian indicates anything wrong with vegetarianism. I think #3 shows likely bad judgement or bad faith on the part of at least some people involved in organizing the 2013 and 2014 EA Summit meetings, but I see no reason to blame that on EA as a whole.
@John Morales #25, I didn’t say no one in EA discussion has ever talked about IQ. I said I don’t think there’s “incessant chatter about IQ” there. It looks like there are on the order of 10 posts per day. Using the forum’s search facility for “IQ” turns up 212 posts. About half of those hits appear to be things where some string of random characters in a URL happens to contain “iq”, or ones mentioning an author called Iqbal, or “IQR” (interquartile range), etc. The EA Forum has been around for many years, but search results aren’t uniform across that time. I’d guess volume has increased, so let’s simplify and suppose three years of uniform-ish posting. That suggests somewhat over 10k posts to produce those ~100 that actually talk about IQ, so roughly 1%.
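For anyone who wants to check my arithmetic, the back-of-envelope estimate is just this (all inputs are my eyeballed assumptions from the search results, not exact counts):

```python
# Rough estimate of what fraction of EA Forum posts actually mention IQ.
# Every number here is an eyeballed assumption, not an exact count.
total_hits = 212            # forum search results for "IQ"
false_positive_rate = 0.5   # hits like "Iqbal", "IQR", random "iq" in URLs
real_iq_posts = total_hits * (1 - false_positive_rate)   # ~106 genuine hits

posts_per_day = 10                       # order-of-magnitude posting volume
total_posts = posts_per_day * 3 * 365    # ~3 years of uniform-ish posting

fraction = real_iq_posts / total_posts
print(f"{fraction:.1%}")    # about 1%
```

The conclusion is insensitive to the exact false-positive rate: even if every one of the 212 hits were genuine, the fraction would only double to about 2%.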
That’s probably quite a bit more than, say, the fraction of New York Times articles or Pharyngula posts that talk about IQ. But I don’t think it’s “incessant”. For context, there are ~170 posts mentioning “racism”, ~650 mentioning “justice” of which ~180 mention “social justice”, ~100 mentioning “BIPOC”, ~140 mentioning “computer games”, ~60 mentioning “poetry”, ~100 mentioning “etymology”, and ~220 mentioning “elephant”. I don’t think this indicates that EA Forum posters are chattering incessantly about elephants, and I don’t think it indicates that they’re chattering incessantly about IQ.
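The estimate above can be sketched as a quick back-of-envelope calculation. All the inputs (posting rate, years, hit counts, spurious-hit fraction) are the rough figures from the comment, not exact forum statistics:

```python
# Back-of-envelope check of the "incessant chatter" estimate.
# Inputs are the comment's rough figures, not exact forum statistics.
posts_per_day = 10        # approximate posting rate on the EA Forum
years = 3                 # simplifying assumption of uniform-ish posting
search_hits = 212         # raw search hits for "IQ"
genuine_fraction = 0.5    # about half the hits are spurious (URLs, "Iqbal", "IQR", ...)

total_posts = posts_per_day * 365 * years
iq_posts = search_hits * genuine_fraction
share = iq_posts / total_posts
print(f"~{total_posts} posts, ~{iq_posts:.0f} about IQ -> {share:.1%}")
```

With these assumptions the script prints roughly a 1% share, matching the figure in the comment; the conclusion is of course only as good as the guessed posting rate.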
I looked at 10 random posts that mention IQ (sampling as uniformly as I could, skipping ones that just have “IQ” in a random string of letters etc.) to get some sense of what EA Forum posters are actually interested in about IQ. Two were proposing EA interventions aiming to improve people’s brains (intelligence, happiness, etc.; neither was specifically about IQ); these both looked pretty misguided to me, and I agree that their authors may well be overrating intelligence. Two were about IQ as a desirable quality for EAs (one making suggestions for recruiting for EA organizations looking for “smart people”, one investigating the cognitive effects of binge drinking because the author was worried about some of her acquaintances); the first might indicate overrating intelligence, the second seems reasonable. Two involved their author looking for an example of something they would like (in the spirit of “… and a pony”) and picking a big increase in IQ; in both cases this was one of several things; doesn’t seem obviously unreasonable. Three were about entirely different things and mentioned IQ in passing; in none of them did it look to me like IQ was being shoehorned in because of some kind of obsession. And one was Bostrom fallout. It looks to me as if the EA community thinks more about IQ than it needs to, but it doesn’t look super-unhealthy to me and, again, they are only chattering incessantly about IQ if they are chattering twice as incessantly about elephants.
@raven #29: MacAskill may well be wrong if he thinks that fewer but more scientifically gifted people could make as much scientific progress. (It’s hard to tell exactly what it is he thinks from Torres’s summary, and I do not trust Torres to summarize accurately and charitably, but for the sake of argument I’ll assume his summary is accurate.) I bet you’re right that if we took the population we’ve got and managed to get decent resources and education and opportunities to everyone, it would be a huge improvement in productivity as well as in justice. But what does that have to do with accusations that EAs, or longtermists (note that these are not at all the same sets of people), are eugenicists or racists?
The stuff about “posthuman stock” is just words Torres has made up, and the parallel he’s drawing amounts to this: eugenicists said they wanted the human race to be better, and so do transhumanist longtermists (I am not convinced that all longtermists are transhumanists, even if it suits Torres’s rhetorical purposes to suggest that they are); therefore longtermists are the same as eugenicists. I don’t find this a persuasive argument. If I take everyone’s favourite example of eugenicists, namely the Nazis, and ask “so, what was wrong with their programme?”, I think “they wanted the human race to be better” would be far far down the list. Well behind, e.g., “they tried to achieve their goals by mass murder” and “their idea of what constitutes better was appalling”.
(There are certainly moral and technical hazards aplenty around trying to make the human race better. But I don’t see any reason to think that the whole idea is necessarily evil. E.g., imagine the following — probably hopelessly optimistic — scenario: it turns out that there’s a single-point genetic variation that makes a 10x difference to one’s chance of getting Alzheimer’s disease, and despite looking long and hard no one can find any advantage the higher-risk variant confers; and it turns out that there’s a way of doing gene therapy that can reliably and safely replace the higher-risk variant in your reproductive system with the lower-risk variant, thus making your children and any later descendants who get that gene from them much less likely to get Alzheimer’s. I claim that in that scenario, once a whole lot of testing has been done, widespread deployment of that therapy would in fact “make the human race better”. Do you disagree?)
Yup, the Flynn effect does seem to be a reason to be skeptical about the sort of dysgenic process Bostrom describes being an actual threat. What an idiot Bostrom is for not thinking of that! Instead of just citing the Flynn effect as a reason not to worry, let’s take a look at what Bostrom actually writes in the paragraph right after the one Torres quotes.
“However, contrary to what such considerations might lead one to suspect, IQ scores have actually been increasing dramatically over the past century. This is known as the Flynn effect; see e.g. [51,52]. It’s not yet settled whether this corresponds to real gains in important intellectual functions.”
Oh. So it turns out Bostrom does know about the Flynn effect and understand that it’s relevant. So is he worried about “dysgenic” effects anyway?
“In any case, the time-scale for human natural genetic evolution seems much too grand for such developments to have any significant effect before other developments will have made the issue moot [19,39].”
Nope, he thinks any such effect will be too slow to matter given all the other relevant things (such as technological progress) that happen on much faster timescales.
StevoR says
@28. chrislawson :
Definitely.
Astronomical pedantry aside, it’s like a flea on the tip of one dog hair yelling out that it’s going to be the master of all the hair of every dog in every pound on the continent. Or children boasting about who’s gunna be teh bestest eber:
(Toddler to not-so-bright ten-year-old voices)
Child 1 : I’m gunna rule the Earth one day!
Child 2 : Well, I’m gunna rule the whole solar system!
Child 1 : I’m gunna rule the Galaxy!
Child 2 : I’m gunna rule the, the, local galaxy group.. no, the whole Virgo supercluster!!
Child 3 : I’m gunna rule the whole universe and infinity plus one and no takebacks! I won!
How this would actually work and whether it’s even remotely desirable or attainable is, yeah, nah.
I love my SF dreams and bold visionary goals and aspirations more than most but, even to me, this longtermite stuff seems like the economic cult of infinite growth and a modern religion substitute. About as immediate and necessary a priority and as reasonable a near-term dream as King Agamemnon of the Trojan War planning and dreaming of conquering all of Mars then taking over Pluto to boot. Of course, Agamemnon came to a nasty end in a bath with an axe thanks to his rightly enraged wife Klytemnestra after sacrificing their daughter Iphigenia (https://en.wikipedia.org/wiki/Iphigenia) for the sake of the future victory in the Helen-ic war, so, yeah, that’s where sacrificing real people for the sake of distant future gains gets you – mythically speaking. (Poor Kassandra tho’ – she didn’t deserve her demise or enslavement – but that’s a whole other Greek myth on the value of foreknowledge or lack thereof.)
Thinking about this today, it also makes me wonder what Musk, once the world’s richest man, expects to be rewarded with? Maybe those fictional future god-like descendants could have given him a warning NOT to take over Twitter and embarrass himself with reichwing conspiracy cow chutney? (In which case either their ideology is debunked and they didn’t exist, or they didn’t care to warn him and save him from his current predicament, or Musk listened to them about as much as Kings Priam and Agamemnon listened to Kassandra.) Or maybe the hypothetical all-powerful people of the future dislike racists and white supremacists as much as we do? The future is NOT likely to be ruled by white people or skin-colour purists after all.
Oh, and when it comes to having lots of descendants and a huge genetic legacy, there’s one example from nearly a thousand years ago – give or take a century or five – in Temujin / Chingis Khan. Is he someone that we would reward and honour today even if we could somehow do so? Would we now punish peasants and artisans of distant places in our ancient past for not leading directly to our creation or making our world a better place now? The whole idea is just .. yeah .. (Gestures vaguely..)
raven says
gjm11, you’ve descended to hand-waving and a Gish gallop word salad to defend your heroes.
That isn’t going to go over well here. We know how to read.
You need to knock that guilt by association claim off.
It is simply wrong and easily shown. If we take out all the creeps, trolls, fascists, eugenicists, and racists from the EA/LT movement, there wouldn’t be anyone left. I looked hard for anyone who didn’t fit into those categories and didn’t find anyone.
Some of the key main supporters like Peter Thiel and Elon Musk are simply monsters that our society has thrown up.
To take one example of you making stuff up.
This is just a lie.
Bostrom is a flat out Transhumanist deeply involved with the movement.
He wants to genetically engineer humans to be Transhumans, that is better humans.
That is what Bostrom says.
If that isn’t a form of Eugenics, then nothing is.
You may have the last word.
My time is valuable and I’m not wasting any more on some creepy troll from the EA/LT swamp.
You all are pretenders anyway.
No one is going to follow your cuckoo plans to set up a paradise in the Andromeda galaxy a million years from now, populated by Digital Transhumans. We’ve got real plans and real things to worry about in the real world that we live in.
raven says
I had never heard of Bostrom until a few weeks ago, when his racism showed up on the front page.
The more I looked at him, the creepier he is and the dumber he is.
I have no idea how Oxford ever thought making an internet troll into a professor was a good idea.
And yeah, Bostrom is one of the leading Transhumanists.
As I mentioned on the previous thread a week or so ago, Bostrom simply repeats very old ideas that he gets from science fiction, fantasy, comic books, movies, and other popular culture sources.
A high school kid could do this.
Erp says
@33 Raven
“I have no idea how Oxford ever thought making an internet troll into a professor was a good idea.”
He sort of has the title, but he is not regular faculty. Rather, from https://www.philosophy.ox.ac.uk/people/nick-bostrom:
Membership Type: Fixed-Term Tutorial & Research Fellows
John Morales says
Erp, from your link: “Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University”. So, more than just “sort of”.
hemidactylus says
Two transhumanists I am most familiar with are Ray Kurzweil and Martine Rothblatt. Both are rich and somewhat misguided, but neither is evil. Kurzweil invented technologies like the Kurzweil reader to benefit the disabled. Rothblatt developed satellite radio (Sirius XM) and is a transwoman. Both go off the rails a bit with their advocacy of transhumanism. Kurzweil’s motivation is to bring his father back from the dead, which is an all too human desire. Rothblatt founded a sorta religious movement which is eccentric:
https://en.m.wikipedia.org/wiki/Terasem_Movement
As for effective altruism, it seems to have gone way too far, and I prefer a pluralistic deontology like W. D. Ross’s prima facie duties over utilitarianism, consequentialism, or eudaemonism. Still, Rebecca Watson partially captures it as “critically evaluating our charity and finding the best way to do the most good with our resources”, though EA can go off the rails if turned into some monomaniacal venture. Her analysis is at: https://skepchick.org/2022/12/how-reasonable-philosophies-led-to-ftxs-crypto-scam-collapsing/
Stuff like: “So all of those things sound pretty reasonable to me, but recently I realized that unlike me, there are people who take all three of those ideas very, very seriously and it has some pretty terrifying results… “effective altruism” is not just about trying to do the most good with our resources, but committing 100% of our resources to only the ABSOLUTE BEST way to increase total happiness”.
John Morales says
[OT but related]
I rather like the Long Now foundation, USAnian as it may be.
Better and simpler aspirations, long term thinking.
—
And I like their proposed steelman argumentation model:
(https://en.wikipedia.org/wiki/Long_Now_Foundation#Long_Now_Foundation_debate_format)
gjm11 says
@raven #32: Evidence, please, that Kelsey Piper and Derek Parfit (two people picked from Wikipedia’s category page for “people associated with Effective Altruism”) are creeps, trolls, fascists, eugenicists, or racists? (Parfit died a few years ago, so I guess I should say “were” in his case.)
I am not making anything up. (I might be making mistakes; I am far from infallible.) Torres says this:
“Posthuman stock”: it’s in quotation marks, even though no one other than Torres used those words; ‘population of “posthuman stock”‘ is plainly intended to give the reader the impression that “longtermists like MacAskill” think of (post-)humanity in the same way as a farmer thinks of their herd, and that impression is specifically dependent on those words, and so it matters that those are not in fact words anyone other than Torres has used. (I dunno, maybe they have, but if so Torres hasn’t given any evidence.)
You say I’m making things up when I say that the actual content of Torres’s parallel amounts to the fact that both eugenicists and longtermists have an avowed goal of a better human race. And then to demonstrate that I’m making things up, you quote some things showing that transhumanists in general, and Nick Bostrom in particular, have the goal of a better human race. I am not quite sure how this shows that I am making things up.
(Also, literally everything you quote Torres as quoting here is transhumanists saying “we disapprove of X”, X being some thing generally associated with eugenics. Presumably the point is that “the exception proves the rule”, that the fact that they’re disapproving those specific things shows that they approve of other things, but it seems to me you should be quoting those other things.)
And there’s the same chain of guilt-by-association going on here as always. “Nick Bostrom believes X”, “transhumanists believe X”, “longtermists believe X”, and “advocates of ‘effective altruism’ believe X” are all separate propositions, and you’re not distinguishing between them at all. I am happy to stipulate, and something like it may very well be true, that Nick Bostrom is an unreconstructed racist bigot who would like to enslave us all and breed us like cattle; that would tell us nothing about the beliefs or intentions of, say, Will MacAskill or Toby Ord.
As for your last paragraph: you may make whatever assumptions about me you like, but as it happens I have no interest in setting up a digital paradise in the Andromeda galaxy. And, as for your first paragraph, these people are not my heroes; e.g., I completely agree that Bostrom seems like a nasty person. And I don’t think something can be called a Gish gallop when literally everything in it is a direct response to someone else’s claims. It’s the exact reverse of a Gish gallop.
Brony, Social Justice Cenobite says
@John Morales 37
I like that too. Utter refusal to acknowledge the content of a political position is a problem.
Raging Bee says
…This leads MacAskill to argue in “What We Owe the Future” that if scientists with Einstein-level research abilities were cloned and trained from an early age…
WTaF seriously?! Since when were “research abilities” genetically-determined and cloneable? That’s even dumber than saying “intelligence” is genetically determined.
And there’s the same chain of guilt-by-association going on here as always…
Well, yeah, we normally judge people by, among other things, who they CHOOSE to associate with, especially when they do so in organizations created to promote shared ideas or beliefs; and we get justifiably suspicious of people who hang with bigots or loonies and speak positively of them. (And in the case of Peter Thiel, it’s not just “guilt by association,” it’s guilt by overt support and possible financial assistance, since Thiel is a very rich man known for quietly supporting some very evil and anti-democratic causes while the rest of us are constantly distracted by Elon Musk.)
Also, I would like to note that transhumanism is NOT the same thing as eugenics or longtermism — although there clearly is a lot of overlap between them; and they may have become much closer since I first heard of transhumanism. I remember hearing about transhumanists talking about non-racist and non-eugenics things like cybernetic enhancement of physiological and neurological capabilities, and cyborg-style add-ons enabling humans to stay alive and function in less-friendly environments like Venus or the ocean floor.
Raging Bee says
As for what people are now calling “effective altruism,” that’s always been something GOVERNMENTS are created to do: bring all the people and their resources together and decide how to most effectively invest said resources for the maximum benefit of as many of their people as possible — and then enact the chosen solutions in law, policy, enforcement, taxation and spending/allocation.
The central flaw of today’s “effective altruism” is that it’s “practiced” by clueless idiots and plutocrats who have already discounted the whole idea of government doing anything for anyone (except themselves of course); so all they’re left with is bullshit, scams, and ideas cooked up by people who have no clue about the real-world people and problems they’re pretending to care about.
gjm11 says
@Raging Bee #40: Yup, I’m all in favour of judging people (at least somewhat) on who they choose to associate with. But only to the extent that it actually tells us something about them.
In the case of Peter Thiel, I think this mostly goes the wrong way. For a while he chose to associate himself with some EA-related things. That isn’t EAs’ fault. The fact that the people who organized the 2013 and 2014 EA Summits picked Thiel as a keynote speaker is their fault, though as I say Thiel wasn’t so vocally fascist-y then as he is now. But e.g. the fact that for a few years Thiel gave a lot of money to MIRI doesn’t mean that anyone at MIRI is a fascist, it just means that sometimes fascists are interested in possible risks from artificial intelligence.
I am glad we’re on the same page about transhumanism versus eugenics versus longtermism. I think Torres is deliberately, and dishonestly, confusing them with one another. But, to be fair to “the other side” here, Torres is talking about a particular aspect of transhumanism here, the sort that’s involved when you say things like “if the human race were genetically engineered to have greater research abilities”, and that’s definitely more eugenics-y than the sort of transhumanism that just wants us to be able to become cyborgs. (I don’t personally find it morally culpable, though it might be morally dangerous as well as technically naïve, because I think what’s happening when people say things like that is that they have a vague fuzzy notion in their head of a Smarter Stronger Better Immortal Posthuman Race and aren’t particularly thinking about how it might be achieved. It may well be that there’s no route from here to there that isn’t paved with skulls, or that there’s just no route from here to there at all, but they aren’t choosing the skulls, they just haven’t thought it through. Or, for that matter, maybe they have thought it through and found what seems to them to be a route not paved with skulls.)
#41: Yes, governments should definitely be doing those things! But it turns out they aren’t doing them very much, or very well, and while of course we can try to get better governments by voting, campaigning, etc., that doesn’t reliably produce big changes quickly. So if you want (say) to reduce the number of people dying from malaria now, your best hope is to give money to organizations that are working on that. (Or go and work for those organizations, if you have skills they need. Or change government policy, if you happen to be a senior government official. But most of us don’t and aren’t.)
If you know of good reason to believe that (e.g.) GiveWell’s top charities (two focusing on malaria prevention, one on vitamin deficiency, one on vaccination) are bullshit, scams, or cooked up by people who know nothing about the people they’re supposedly trying to help, I would be very interested to know about it; I am a regular donor to one of those charities and if it’s bullshit then I should be sending that money somewhere else instead.
Erp says
@35 John Morales
“from your link: “Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University”. So, more than just “sort of”.”
But only on that page; he is not listed as faculty on the philosophy department’s regular faculty page. https://www.philosophy.ox.ac.uk/faculty-members?filter-421-membership%20type-2428736=2571
Oxford does make a distinction between “titular professors” and “statutory professors”. My guess is he is the former.
John Morales says
Erp, can’t dispute you there.
—
gjm11, good responses.
jo1storm says
Counterpoint: we, the people, can pressure elected officials. We can’t directly pressure corporations. Counterpoint 2: leaded gasoline. Counterpoint 3: those organizations need to and do work with governments to accomplish those goals. Some of them are even partially funded by governments!
gjm11 says
@jo1storm #45: I don’t understand how any of your counterpoints are actually relevant to what I thought was the point at issue. Raging Bee objects to “effective altruism” because it’s a thing governments rather than individuals should be doing; I say that if governments aren’t doing something you think valuable, it makes excellent sense for you as an individual to donate to an organization that is; how is any of that affected by how easy it is to pressure corporations, or by the existence of leaded gasoline, or by the fact that charitable organizations need to work with governments?
Perhaps somehow I gave the impression that I think governments aren’t important or something? I don’t think that. Governments are very important. But sometimes they can’t or won’t or don’t do something that matters, and in those cases the most effective way to get it done may be something other than trying to change the governments’ behaviour.
Incidentally: We-the-people can pressure corporations. We can stop buying their products or using their services, for instance. That doesn’t give any individual very much corporation-influencing power, but nor does any individual have much government-influencing power.
jo1storm says
My points are: It is easier to pressure a government than it is to pressure a corporation. If the government is not doing what it should, we should pressure the government to do so.
It kind of does. The only way to pressure corporations is the “more money equals more votes” way, which gives people with money more power. So one billionaire has the same power and influence as millions of people.
As for the leaded gasoline point: it was invented by corporate chemists, and the only way to stop it was governments and their power. Not any charitable organization, not any corporation, but governments. There’s a good book called “Winners Take All: The Elite Charade of Changing the World”. One of the main points of the book is how rich elites are pushing charity as the “win-win” solution to all the world’s problems. Why? Because the actual and permanent solution to the problem at hand is some variant of “win-lose”, as in everybody else wins, but elites lose. Thus the pushing of the “governments are useless, use charity”, “the solution is not a higher minimum wage but savings apps which help poor people track expenses” and similar narratives. One solution threatens the status quo and power, the other doesn’t. And thanks to more money equals more votes, guess which one gets pushed and funded?
Raging Bee says
gjm11: The charity ops you refer to @42 are not what I think of as part of this recent “effective altruism” roadshow. They’re just rich people and corporations each raising money for causes they consider important, without pretending they’re part of some indispensable grand plan that’s the ONLY way to truly help people. My point still stands that while individuals and corporations can and do do a lot of good, such grand overall strategies as EA pretends to be are best managed by governments and state power, because only states can gather resources with the scale and consistency required, either for systemic change or for meaningfully improving conditions for millions of people at a time.
gjm11 says
@jo1storm #47: Yes, we should pressure governments when they aren’t doing what we want. But that doesn’t mean we shouldn’t also act more directly, or support other people who are acting more directly. I think maybe you’ve got the impression that I’m saying “governments are bad” or “we should try to bypass governments where possible” or something; I’m not saying any of those things. Governments are super-important, a lot of what they do is incredibly valuable, and there are some very important and valuable things that only governments can realistically do.
I’m just disagreeing with the idea that there’s something wrong about trying to do charitable giving effectively because improving the world should be the job of governments. There will always be important causes that governments aren’t supporting enough, and when that happens individuals should both try to encourage governments to support those causes more and (if they can afford to) give those causes some of their own money.
@Raging Bee #48: Those “charity ops” have always been central to “effective altruism”, and any understanding of “effective altruism” that doesn’t even think of those as part of it is (I think) a serious misunderstanding of EA. The main thing EA does in practice (though of course not by any means the only thing) is to direct resources to what are supposedly highly effective health interventions in poor countries. E.g., I took a look at Good Ventures’s database of grants, since that seems like it should be somewhat representative. They break it down by fairly broad cause areas. The biggest, at about 1/3 of the total and more than 2x over the next-biggest, was “Global health and development”. (Next biggest is “Scientific research”, which covers a variety of things, mostly also broadly health-related. Next after that is “Potential risks from advanced AI”, which doesn’t tend to get good press around these parts :-). Next after that is “Farm animal welfare”, then “Biosecurity and pandemic preparedness”, then “Criminal justice reform”.)
And, again, even if it’s true that grand overall strategies are best managed by governments … if, in fact, governments aren’t doing that (or are doing it badly wrong), then “do the best we can by coordination between like-minded individuals and organizations” may be the only option available that actually gets the job done.
jo1storm says
@gjm11 And I am not trying to disparage charities or giving to charity. What I am trying to tell you is that charities won’t be able to fix systemic issues. And will thus continue to exist in perpetuity.
Ok, let’s see what that means in practice. It means that if it costs a million dollars to plant 100,000 trees in the USA and you can plant 2 million trees for a million dollars in India, then you should send the million dollars to India. That’s effective altruism 101. But it also means that if there is a health emergency because of bad water in, say, Flint, Michigan which will need 400 million dollars to fix, then it will never be fixed. Because Flint, Michigan has around 400,000 residents, it means that you are going to spend $1,000 per citizen to fix the water crisis. $1,000 can bring clean water to 10-20 people somewhere else, if there is no corruption. So, by the rules of effective altruism, the government should fix the lead poisoning it caused, and until it does, screw the people of Flint, Michigan. Because no charity should touch them with a ten-foot pole since they are too expensive to help.
Again, we are talking about lead poisoning here. It affects people’s futures and ruins them. Every one of those people in Flint was looking at earning between $800,000 and $1,000,000 over their working lifetime (counting $20k-25k yearly income over 40 years). Let’s halve that for ease of calculation. That makes 400,000 × $500,000 = $200,000,000,000, also known as 200 billion dollars over 40 years. Five billion dollars per year, and lead poisoning lowers those earnings considerably. Do you have any idea how many people in so-called third world countries (better called developing countries) have to work to earn five billion dollars per year?
On the one hand, you are helping a larger number of people. On the other hand, the same thing that makes the altruism effective in the first place practically kneecaps your future altruism and donation efforts by not fixing issues at home. That is all if you value things in dollars. Welcome to capitalism, mate! But if we valued effective altruism in something else, like the amount of labor, ingenuity and future production potential, you could really appreciate the change. If that is the case, welcome to socialism and communism :) Not that it is a bad thing. And that’s one contradiction of EA.
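The Flint arithmetic above can be sketched as follows, using the comment’s own rough assumptions (note that 400,000 × $500,000 works out to $200 billion over 40 years, i.e. about $5 billion per year):

```python
# Sketch of the cost-per-person comparison in the comment above.
# All figures are the comment's rough assumptions, not real cost estimates.
fix_cost = 400_000_000    # assumed cost to fix Flint's water, in dollars
residents = 400_000       # assumed number of Flint residents

per_person = fix_cost / residents
print(f"${per_person:,.0f} per resident")  # vs. $1,000 serving 10-20 people elsewhere

# Lifetime-earnings framing: halve the $800k-$1M range to ~$500k each.
lifetime_earnings = 500_000
total = residents * lifetime_earnings      # total earnings at stake over 40 years
per_year = total / 40
print(f"${total / 1e9:.0f}B total, ${per_year / 1e9:.0f}B per year")
```

The comparison only captures the dollar-denominated framing the comment criticizes; it says nothing about which intervention is actually right.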
gjm11 says
@jo1storm #50: Yup, I agree that charities generally don’t have much power to fix systemic injustice and the like. But nor do most governments! I mean, the government in country X can do a lot about systemic injustices in country X, but suppose you’re in country X and you’re concerned about things going badly for people in country Y. Your government is the one you have some power over, but your government has little more ability to fix systemic issues in country Y than a charity does. (And some of the ways in which it has more ability, such as maybe being able to invade country Y and install a new government by force, are likely very bad ideas.)
Somewhere over 600k people die of malaria every year. I would prefer that number to be smaller. I can try to make it smaller by giving money to the Against Malaria Foundation, which distributes antimosquito bednets; there is some evidence that this does actually work, doesn’t do a lot of counterbalancing harm, and does a lot of good per unit donation because bednets are cheap and quite effective.
What are my options for making that better via governments? Maybe part of the problem is that some countries with a lot of malaria have governments that don’t care if their people get malaria, or something; but I don’t have any ability to change those governments, nor should I have. Maybe part of the problem is that the government of my (fairly rich) country doesn’t spend enough on foreign aid or spends it inefficiently; most of my ability to change that comes from casting one vote every few years, which has to express not only my opinions on foreign aid but also my opinions on taxation, healthcare policy, state benefits, etc., etc., etc., etc., and so far as I can tell no political party with any prospects of power in my country has a policy at all along the lines of “spend much more on foreign aid” or “target foreign aid according to a careful analysis of likely overall benefit”. I could write letters or sign petitions or whatever, but everything I can see suggests that that too has very little chance of making a substantial difference.
So I give some money to the Against Malaria Foundation, and they distribute bednets, and hopefully the people who have looked into these things are right in thinking that this ends up doing quite a lot of good.
That, to me, is what “effective altruism” mostly is, and I don’t see any reason why I should stop doing it merely because it would be better if the governments of malaria-stricken countries put more effort into reducing malaria and if the governments of rich countries gave them money (or whatever else they might need) to help them do it.
I’m not sure whether I understand your argument about planting trees in India versus fixing lead pollution in Michigan. If you measure shortish-term effects then helping people in India is much more efficient (in, say, QALYs per dollar) than helping people in Michigan. If you measure longer-term effects then perhaps improving the brains of people in Michigan so that they can do more valuable work (which helps other people, and gives them more opportunities to do good in turn, etc.) might come out ahead.
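To make that shortish-term comparison concrete, here is a toy sketch; every number in it is invented for illustration, and neither figure is a real cost-effectiveness estimate for any actual intervention:

```python
# Toy cost-effectiveness comparison (all numbers invented for illustration;
# neither figure is a real estimate for bednets or lead abatement).

def qalys_per_dollar(qalys_gained: float, cost_dollars: float) -> float:
    """Crude benefit/cost ratio: quality-adjusted life-years per dollar."""
    return qalys_gained / cost_dollars

# Hypothetical bednet distribution: cheap, large short-term health benefit.
bednets = qalys_per_dollar(qalys_gained=100.0, cost_dollars=10_000.0)

# Hypothetical lead-abatement programme: same benefit, far higher cost.
lead_fix = qalys_per_dollar(qalys_gained=100.0, cost_dollars=1_000_000.0)

# On this short-term measure the cheaper intervention wins by a wide margin,
# but the ratio says nothing about longer-term knock-on effects.
print(bednets > lead_fix)  # True
```

The point of the sketch is only that "QALYs per dollar" is a simple ratio, so a short-term comparison is easy to compute; the hard part is that the ratio ignores downstream effects.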
(Though when EA people say that sort of thing, as occasionally some do, the reaction from EA-skeptical people like Emile Torres or PZ tends to be “look, they’re saying that it’s more valuable to help rich people than to help poor people! I always said EA was bullshit”.)
I’m not seeing any “contradiction of EA” here, though. All I’m seeing is that the world is complicated and making decisions is difficult. Helping more people versus helping people you have more connection with. Helping more people versus helping people who if helped will have more opportunities to do valuable things. Life is full of tradeoffs.
For what it’s worth, my gut says that (1) despite the longer-term effects of an economic system that gives people in Michigan more opportunities to do valuable things, etc., helping people in India is still likely to do much much more net good than helping people in Michigan, and that (2) this absolutely is a case where the answer has to involve the word “government”, because it is entirely right and proper for a nation’s or state’s government to put more effort into helping “its own” people than people elsewhere, for many reasons including that a government that didn’t do that would rapidly find itself out of power. So, let the relatively rich and US-focused US government, and the relatively rich and Michigan-focused Michigan state government, help the people of Flint, even though that doesn’t do most impersonally-tallied-up good per dollar. But let’s also make sure that the relatively poor folks in e.g. sub-Saharan Africa, where they have a lot of malaria and not a lot of money (and, I am led to believe, often not very functional governments) get some help with their horrible health problems somehow. So far as I can tell, the best someone like me can do about any of that is to vote for governments that will help their own citizens and people abroad, and to give to charities that seem to be doing an effective job.
jo1storm says
Or organize and protest. Or organize and win the elections with like-minded people which is hard work. But that was not my point. My point was that very often charity alleviates the symptoms but doesn’t fix the core issues. Personally, I don’t think charities should use the money spent as a measure of efficiency. Yes, by all means, buy people mosquito nets. As long as you also ask why those same people don’t have the money or resources to buy and produce their own mosquito nets. It is all in how you define the problem and the solution.
John Morales says
jo1storm:
No ask, no buy. Got it. That’s exactly how to be altruistic.
jo1storm says
@John Morales
No, you didn’t “get it”. Because I didn’t write that, steelman.
How else are you going to define the problem, fix the causes and solve it permanently than by asking questions about causes?
I just restated that old “Teach the man how to fish” canard because there is wisdom in it.
John Morales says
jo1storm:
Direct quotation; what, you’re saying someone impersonated you?
How many times does the question need to be asked?
“No fish for you until you learn how to fish” type of thing?
—
Point being, these vague generalities are all very nice, but not as nice as a fish, or a mosquito net.
jo1storm says
I have never written “No ask, no buy”, mate. But we both know that you don’t argue in good faith, so there’s that.
John Morales says
jo1storm:
That’s a paraphrase, but the semantics are the same, form being “do X as long as you also do Y”.
Implication is that without doing Y, one should not do X.
But fine, you’re now claiming it’s better to do X and Y rather than just X, and that’s to what you meant to refer all along.
(Of course, had you done that, I would not have responded)
You do know the name of that particular evasion, don’t you?
jo1storm says
@John Morales
That’s a bad paraphrase because you ignore the previous complete sentence which is literally:
And I never used the word “but” except in the sentence “but that is not my point. My point was that very often charity alleviates the symptoms but doesn’t fix the core issues.” So you are reading in an implication when there is none beyond the plain meaning of the words I used. So, in short, you are arguing in bad faith.
Yes. The truth.
John Morales says
jo1storm, you’re not doing yourself any favour by being obstinate.
I quoted it verbatim — cut’n’paste — so I hardly ignored it.
I refer you to #53, which shows that you then immediately appended “as long as” — the conditional.
(“Yes, by all means, buy that fish, as long as you pay for it”)
It’s not even slightly confusing, and it’s becoming apparent how ironic your claim about my purported bad faith really is.
Yes, charity being the X, then you added the Y.
I’ve already addressed all this.
(So, by all means, be charitable, as long as you ask the question, right?)
That particular one is called argumentum ad nauseam. :)
jo1storm says
@John Morales
Eh, screw it. I don’t care about civility any more.
“Yes, by all means, stop the bleeding however you can. As long as you also take him to the hospital as soon as possible.”
Does that sound good to you? Or is that a bit too soft? Maybe this is better?
“Yes, by all means, stop the bleeding however you can. But you need to take him to the hospital as soon as possible. That gunshot wound is bad.”
How the hell do you get from those sentences to
“So, you are saying we shouldn’t stop the bleeding if we are not going to take him to the hospital. Got it!”
without arguing in bad faith is beyond my comprehension. Q.E.D.
jo1storm says
So yeah. First responders are effing important, but saying that they shouldn’t be the final responders and that first responders are not enough by themselves is by no means implying that first responders shouldn’t exist.
John Morales says
jo1storm:
Because one could hardly be more civil than to assert their interlocutor is arguing in bad faith. Now that’s caring!
jo1storm says
@John Morales
It sure is. That’s being civil and polite. Because I could be much less civil than that. Have you ever wondered why multiple people on the public forum that is this site have called you a troll and warned people not to engage with you?
It is because of the shit like this. You’re not stupid, you are not an idiot, you are just a malicious asshole.
John Morales says
jo1storm:
Jealousy. No biggie.
One should be civil, as long as one is not irritated. ;)
(Not everyone can be as nice as you)
—
So, you stand by your initial claim that one should “by all means, buy people mosquito nets. As long as you also ask why those same people don’t have the money or resources to buy and produce their own mosquito nets”?
jo1storm says
@John Morales
Keep believing that if you want.
Yes. Do both. You can do both, right? You are able to do that?
John Morales says
jo1storm, we’ve clearly reached the point of utter futility.
Well, that’s kind of the point — that one should not have to do both to be altruistic.
So. Can one buy people mosquito nets without asking why those same people don’t have the money or resources to buy and produce their own mosquito nets?
You originally wrote it so that the buying was conditional on the asking, though admittedly one could do the asking without doing the buying without contravening your advice. But just asking is not that helpful, is it?
I refer you to my #57: “But fine, you’re now claiming it’s better to do X and Y rather than just X, and that’s to what you meant to refer all along.”
Obviously, that provisional concession has been superseded and you are now claiming one should do both X and Y. Me, I don’t think so.
(I like the way your story evolves)
jo1storm says
John,
Read it again. I never did that. You did, using your insane troll logic, as previously demonstrated in my post #60.
Sure, where did I claim otherwise?
We aren’t talking about just being altruistic here. We are talking about effective altruism. I can see where the confusion might lie, if you weren’t clever and just being malicious for the sake of being malicious.
I always claimed that. You just chose to misunderstand my claims using insane troll logic for the sake of arguing with me.
If you want to do most effective altruism, you should do both. If you want just to do altruism, you should do X. That’s what “by all means” means.
John Morales says
jo1storm:
Here (my emphasis): “by all means, buy people mosquito nets. As long as you also ask why those same people don’t have the money or resources to buy and produce their own mosquito nets”.
That’s where.
Proper effective altruism as you see it, not the featured EA.
Giving, as long as one also asks why the need.
Already told you I got it, right at the start.
Insane troll logic, eh? Other people call it first order logic.
Ahem. “by all means X”, “as long as Y” was what you wrote.
You can’t get away from it; X is conditional on Y as you phrased it.
If you didn’t intend the conditional, you should not have added it.
—
As I’ve already noted, quite futile to persevere, so I desist.
Point has been made, whether or not you acknowledge it.
jo1storm says
@John Morales,
Here we go another circle. Read #60 again, troll. Start to finish. And reply when you finally comprehend what I have written.
Meanwhile, story time about effective altruism and I’m done.
20 years ago when I was a teenager, I spent almost every weekend helping my grandparents. They were farmers, living half an hour away by car in the local village while me and my parents lived in the city. Almost every weekend I would go there and help my grandparents do whatever needed to be done on the farm. Feeding the pigs, cleaning the pig sty, tilling the soil, cleaning the yard… whatever needed to be done that week. But that’s not altruism. I was doing it with my family, for my family, and I certainly gained a lot from doing that. Ham, bacon, eggs, fresh vegetables, milk while grandparents still had cows… You can say that the labor I offered and gave to my grandparents wasn’t free and was duly paid for. So that’s just work, not altruism.
There is an accumulation lake near the village. It is there to be used in case of drought. You are not supposed to swim in the lake (not that it stopped anyone, especially in the summer), you are not supposed to fish in the lake and it can be argued that you are not even supposed to go near the lake, because the only thing leading to the lake is a holey, half-ruined gravel road, while there are asphalt roads everywhere else in the village.
Finally, the altruism part. Every two to three weeks, after I was done working on the farm, I would go to the lake, sometimes alone but most of the time with some other kids from the village, and we would clean the area around the lake of garbage. You’d be shocked how much would be there from weekend to weekend, mostly remains of a fire, beer cans, plastic bottles, broken fishing nylon lines and such. And that’s the altruism part, because nobody paid us to do that, we rarely had a good time cleaning, and we did it because we just wanted the area to look good and be clean. It was never clean.
And we would have kept doing that forever if one day a local villager hadn’t decided to find the time and spend three weeks “camping” near the lake, warning and educating everyone throwing trash near it, until the signs arrived. As he later told us, most of those throwing trash near the lake were completely convinced that there was a paid service to keep the area around the lake clean. You see, every time they arrived on the location, it was clean. When they arrived again, it was clean again; the mess they had left there was cleaned up. They never gave a thought to who was cleaning it.
After that, we only cleaned once every month and a half to two months, and it was never close to as dirty as it used to get in two weeks before the signs went up and the local guy started warning people to behave from time to time. And that’s the difference between altruism (just cleaning up the trash) and effective altruism (putting up the signs and educating the lake visitors).
gjm11 says
Wow, that sure was a productive conversation that just happened.
Anyway. I strongly agree with jo1storm that we should be not only patching up bad outcomes but looking at their causes and seeing whether it’s possible to improve things upstream and not just fix symptoms. I strongly agree with John Morales (and maybe also with jo1storm? it’s not 100% clear) that fixing the symptoms is good even if we can’t, or fail to, fix things further upstream. I don’t know whether jo1storm originally really meant “as long as” literally, but it seems clear that they’re now saying that they meant something more like “it is also super-important to …” which is a perfectly reasonable position.
But asking about the upstream systemic causes doesn’t necessarily mean deciding to try to fix them.
The upstream systemic factors leading to e.g. lots of malaria in sub-Saharan Africa seem like they’re mostly things that are really hard to address. (Sub-Saharan Africa is really poor. Rich countries’ governments are reluctant to send huge amounts of money to sub-Saharan Africa. Sub-Saharan Africa has a climate in which malaria-spreading mosquitoes thrive. Some sub-Saharan African countries have badly dysfunctional governments. Etc.) Going further upstream from “Sub-Saharan Africa is really poor”, explanations for that all seem also hard to address. (Maybe the colonial era did those countries a lot of harm, by removing natural resources or setting up oppressive institutions or something. Maybe disease and parasites make the people there less able to do valuable things. Maybe it’s a vicious positive-feedback loop where poverty means lack of infrastructure, education, etc., which prevents economic productivity, which produces poverty. Etc.)
Maybe some of those things are fixable. (E.g., if the main thing is a poverty feedback loop, then maybe “just” sending many trillions of dollars to … the governments of those countries? Randomly selected people there? Potentially-more-productive businesses? … might break out of it.) But getting the entities with trillions of dollars to send them to sub-Saharan Africa is a tremendously difficult project in itself. I don’t know of any instance in which a government has been persuaded to spend anything like that much on anything other than (1) a war or (2) some sort of major welfare project for its own citizens.
Maybe more of them would be fixable if we could “just” replace all the world’s governments and economic systems with radically better ones. Put an end to capitalism and replace it with something else, that sort of thing. Maybe. But this again is a tremendously difficult project, and attempts to do similar things have historically not turned out very well. (Sometimes revolutions work out pretty well in the end; you can plausibly argue that that happened with France and America, for instance. But sometimes you get the USSR or, worse, Nazi Germany.)
All these deal-with-it-upstream options are discouraging enough that I don’t feel inclined to blame anyone who just gets on with paying for distribution of anti-mosquito bednets.
jo1storm says
@gjm11
Conversation with John Morales is never productive. : )
I agree with you. Maybe we are asking the wrong questions after all. Maybe the better question is: “If we create a mosquito net factory in every sub-Saharan country having trouble with malaria and employ the locals, does that both solve the issue and partially address its causes? Maybe production should be subsidized by donations?” I don’t know, I am not an expert. I doubt many people are.
Btw, do you know what the (colonial) government’s solution to the malaria problem was? Spray everything with DDT. For decades. It sure solved the malaria problem, because deaths from malaria were lowest ever, measured in hundreds instead of hundreds of thousands. It also caused multiple other problems, including the extinction of hundreds of species of birds and insects, poisoned the environment for decades and more specifically caused cancer in people. A lot of cancer, but not anywhere close to hundreds of thousands death from malaria. Does that count as effective altruism? I’d argue it doesn’t.
jo1storm says
*A lot of cancer, but not anywhere close to hundreds of thousands death toll from malaria.
Swallowed a word there. I should pay more attention, my #69 reply is frankly disgusting. Added words, swallowed letters… Disgusting.
Raging Bee says
Yup, I agree that charities generally don’t have much power to fix systemic injustice and the like. But nor do most governments!
Bullshit. Even the governments of smaller countries have more of that sort of power than private or corporate charities.
And if a government is unwilling to address “systemic injustice and the like,” it’s most likely because the same plutocratic interest-groups who run all those private charities are also opposing and stonewalling any government attempt to meaningfully address the injustices that make all that charity necessary. (Case in point: the Catholic Church.)
gjm11 says
@Raging Bee #73: Sorry, there’s an ambiguity in what I wrote. Let me clarify a bit.
For every government, there are a whole lot of systemic injustices they have power to fix, namely the ones in the place that government governs.
But if you pick a particular systemic injustice, typically most governments can’t do much about it, because they’re somewhere else and don’t have any power to speak of over the place where the systemic injustice is happening.
For instance, police forces in the US are apparently full of awful people who keep murdering civilians who pose them no real threat. The US government could do something about that. The governments of individual US states could do something about it. But other governments can’t do much. If the British or Venezuelan or Mongolian government decided they were tremendously upset about US cops, they couldn’t actually do much to make anything change. Maaaaybe if lots of major countries’ governments all got together and decided to cut back on trade with the US until its police stop murdering civilians, that might do something, but that sort of coordination is really hard to arrange and often doesn’t have any effect other than to make the target country a bit poorer, which tends to hit the most vulnerable people there hardest.
It’s true that almost any government has more resources than almost any individual or charitable organization, of course, but remember the context here: when the question is “what if anything shall I do about X?”, and I’m trying to figure out whether I should be trying to do anything about it myself or whether I should consider it a government’s problem, the fact that the government has a lot more resources has to be weighed against the facts that (1) the government also has a lot of other things it needs to do with those resources and (2) I have very limited power to influence what the government does with its resources, compared with my own.
What plutocratic interest group runs the Against Malaria Foundation? Or any of GiveWell’s other three top charity recommendations? (I think the answer is: no plutocratic interest group runs them. If I’m correct about this, then what you say about “all those private charities” is wrong because it doesn’t apply to a bunch of the charities most relevant to this discussion.)
Raging Bee says
What plutocratic interest group runs the Against Malaria Foundation? Or any of GiveWell’s other three top charity recommendations? (I think the answer is: no plutocratic interest group runs them…)
Do the corporations that run such charities overtly support higher taxes on the rich and big business, and/or greater regulatory powers, for states to more effectively address any forms of systemic injustice?
gjm11 says
@Raging Bee #75: So far as I know the people who run (e.g.) the Against Malaria Foundation have not made any public statements about tax policy.
Do I understand correctly that you’re suggesting that any person or organization that doesn’t publicly argue for higher taxes on rich people and big businesses is a “plutocratic interest group”? Because that doesn’t seem to me like a plausible position, but if you aren’t saying that then I think you’re moving the goalposts.
StevoR says
@72. jo1storm : Welcome to my world… I could swear the computer changes things around on me after I’ve clicked submit. Sigh. Can relate.
I think / hope that as long as people get the gist and understand what you mean, that’s the main thing?