Nick Bostrom paced about his chambers, agitated and offended. The puling mob had accused him of racism merely for writing "Blacks are more stupid than whites" in his youth, and then for refusing to admit that thinking entire ethnic groups, nay, even the populations of whole continents, could be genetically inferior was racism. It was science! He was not going to repudiate Science!
He must rebuke these irrational people. They are distracting him from his important work. He must deliver a stunning riposte. He considers the most effective way to crush them. He shall accuse them of being…mewling infants? Lowing beasts? No — they are buzzing insects.
Perfect. Accusing those damned SJWs of being a swarm of bloodthirsty mosquitoes will strike exactly the right chord with his fanbase of libertarian/conservative free speech warriors.
Buzz, buzz, buzz.
nomdeplume says
Wait a minute, Bostrom is directing the future of humanity?
lanir says
I guess if your main point is vacuous nonsense it doesn’t matter if your rebuttal is childish sneering.
bcw bcw says
@1 Bostrom is directing the future of humanity in the same way your kid is driving the car in one of those coin operated rides.
As someone quite aware of the things I'm good at (certain types of puzzle solving) and also aware of all the things I am very bad at, the very idea of super-intelligence seems a misnomer: super-intelligent in what way, doing what? Google Maps is super-intelligent at one thing; ChatGPT is very good at plausible-sounding BS; you can have strengths in different things. The question for all these systems is: what is your purpose or goal in life, or in a more AI sense, what are you optimizing for? Something like ChatGPT is optimizing for something like the Turing test, sounding like a human, but imitating humans really well is certainly not a route to any kind of super-intelligence, if it's even intelligence at all.
Looking at Bostrom's work, I think the buzzing is coming from inside his head. He seems to be trying to pitch Terminator movies as grand philosophy while confusing problems in human social structure (too much power in Elon Musks) with the effects of technology.
wzrd1 says
It's interesting that he equates his detractors with the single animal that has killed the greatest number of people throughout history, and still does today: the not-so-humble mosquito.
May he enjoy the academic equivalent of dengue! Thrice.
An infection by one strain, followed by infection with another, can be quite lethal…
John Morales says
Well, I have had a swarm of hungry mosquitoes buzzing around me, and it most certainly is distracting. Of course, mozzies literally suck your blood, leave stinging bite marks, and can carry disease. So not the most apposite metaphor.
Prone to magical thinking, I see.
birgerjohansson says
OT
If you want some real research (about human DNA and brain evolution), check this out.
"Evolution of uniquely human DNA was a balancing act"
https://phys.org.news/2023-01-evolution-uniquely-human-dna.html
birgerjohansson says
Goddammit
https://phys.org/news/2023-01-evolution-uniquely-human-dna.html
It is four in the morning, I can hardly coordinate my fingers.
Marcus Ranum says
sometimes I have the impression that the world is a conspiracy to distract us from what’s important
Is it a simulation or a conspiracy?
And wouldn’t a proper simulationist conclude that IQ test performance was a programmed part of the simulation? You can hardly be a racist and a simulationist without also being a jackass.
raven says
I read his Wikipedia entry and there was nothing original or exceptional there.
It’s all stuff I’ve read in science fiction stories since I learned to read in the 1950s.
The AIs are going to take over and kill us all.
The Singularity is going to happen any day now some century or another and the desiccated corpse of Ray Kurzweil will be reanimated and say, “I told you so”.
Vernor Vinge did it better.
Microsoft will get the contract for running our simulation and we will all be locked into the Windows operating system forever.
It's not as exciting as The Matrix, but it is more realistic.
Shrug.
I’d never heard of Bostrom before and all I learned since then is that I didn’t miss anything.
He’s a nobody.
A guy who thinks ripping off and repeating pop culture is philosophy.
raven says
Bostrom is behaving like he has tenure and is well connected at Oxford.
He doesn’t have to worry about defending racist statements.
No one important at Oxford seems to care.
And yeah, he is tied up with a whole bunch of crackpots.
The Institute for Reading Comic Books for Ideas shares office space with the Centre for Effective Altruism, which is another bunch of lunatic fringers. One of its founders was the über-creep MacAskill, the weird longtermist.
If MacAskill were serious about making the present and future of humankind better, he would grab his officemate Bostrom and drop him off someplace obscure and impossible to get back to the UK from.
I’m getting the impression that something isn’t at all right at Oxford.
John Morales says
raven, it's privately funded by billionaires: https://en.wikipedia.org/wiki/Open_Philanthropy_(organization)
lakitha tolbert says
Well, how about a little something that destroys the rest of your faith in Science? Or maybe just scientists?
Scientific Racism:
https://medium.com/the-straight-dope/autopsies-politicized-play-an-outsized-role-in-a-corrupted-justice-system-abd7afa7f997
chrislawson says
I love how fighting racism is a distraction from what is important. It really helps convince me of the genuineness of the original apology.
lumipuna says
Flies make the buzzing sound; mosquitoes sound more like tiny violins.
StevoR says
You might be racist if you find anti-racist criticism to be like buzzing insects…
birgerjohansson says
I am looking forward to his next comment about the criticism. He just keeps digging.
Sphinx of Black Quartz says
Of COURSE he's associated with "effective altruism" and "longtermism." Those selectively natalist techbro "philosophies" are just a goofy pop-sci-fi spin on white supremacy.
raven says
Longtermism and MacAskill have a lot of problems.
Bostrom BTW, is not just a racist but also a longtermist.
raven says
There is nothing wrong with trying to make the future better for our children. Most of us at least make the attempt, myself included.
The longtermists just use the natural concern for the future as an excuse to be horrible people because they are horrible people to start with.
It becomes an excuse for racism, out-of-control exploitation of the world and environment by the rich, various lunatic fringe ideas, and ignoring present problems to worry about imaginary people who might exist 1,000 years from now.
Here are a few problematic projects.
Remember: one cis white old man from the First World is worth more than anyone from the other 90% of the world.
And Bostrom is just wrong here.
The lives of real people who live today are far more important than those of imaginary people who might live a million years in the future on Mars or Ceres.
raven says
One more for the road.
The more I read about longtermism, the worse it looks.
They aren’t serious about the long term.
It is just an excuse for right wingnuts to be right wingnuts and lunatic fringers to be loons.
It’s a con and a grift.
The last quote in #19 was from https://www.longtermism-hub.com
Longtermism is a set of cuckoo beliefs covered by flimsy wrapping paper.
The earth has 8 billion people, the climate systems and biosphere are obviously struggling, hundreds of millions aren’t too far away from starvation, and we need to have as many children as possible because the earth might run out of people soon.
raven says
Bostrom is an advocate for eugenics.
An idea that was discredited before I was born.
OK, I now know why I never heard of Bostrom.
1. Bostrom isn't a philosopher.
All he is doing is taking common ideas from Science Fiction, popular culture, and comic books and repeating them.
There is nothing novel or original there whatsoever.
It’s all been done many decades ago and far better than this clown.
2. Bostrom is a con artist and this is just a grift.
Follow the money.
He gets a big paycheck for telling the tech bros they are the Crown of Creation and the future of humanity. It's indoor work with no heavy lifting.
lanir says
I think you've nailed it, raven. I guess this is a reminder that even if you're what most people would consider conventionally smart, it doesn't make you immune to cons. It's like looking down on people who click a link in a spam email and then falling for a standard phishing email that's just tailored more toward your interests, all the while talking about the two like they're completely different things.
That email example is made up, but it captures the attitude more precisely than any story I could relay. I've known plenty of people in the IT field who are capable of exactly that. The worst part is that some of them seemed incapable of learning and getting over it.
StevoR says
@7. birgerjohansson : “It is four in the morning, I can hardly coordinate my fingers.”
I can relate. It isn't 4 a.m. here now, but there have been plenty of times when it was almost that late at night / early in the morning while I couldn't sleep and was commenting here. It can certainly result in some typos and other errors.
petesh says
@21: 1. Techno-eugenics is a thing. It’s dumb as well as cruel but the fact that eugenics was discredited decades ago does not mean that we can now ignore it. New names (some mere hyphenates) get painted on the same old rusty frame.
2. Also, of course Bostrom is a philosopher. You and I disagree with him vehemently but he has all the credentials to claim the title, in the moral cesspit that is Oxford (where I got my undergraduate degree) and many similar institutions, which validate each other.
3. I think he is sincere. He has been pushing varieties of this stuff for decades now. The oldest cite I have at my fingertips is from the New York Times in 2007: https://www.nytimes.com/2007/08/14/science/14tier.html. My published commentary on that (https://www.geneticsandsociety.org/biopolitical-times/he-real-are-you) concludes:
Yup, I stand by that. But he has an audience, immense self-belief and unfortunately a platform.
Raging Bee says
So basically when Nick Bostrom hears any sort of criticism, he simply pretends he only hears a buzzing noise. Yeah, that's how real innelekshals respond to criticism, innit?
Raging Bee says
2. Also, of course Bostrom is a philosopher. You and I disagree with him vehemently but he has all the credentials to claim the title…
And those are…?
John Morales says
Raging Bee, well…
“Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University”
https://www.philosophy.ox.ac.uk/people/nick-bostrom
unclefrogy says
I see he is a good example of someone living in an ivory tower and being out of touch with reality. I see it clearly: he even is a paid resident!
raven says
I’m just going to repeat what I already concluded about Bostrom.
If that is philosophy, then you have drastically lowered the bar on who gets to call themselves a philosopher and also on what philosophy is.
And what about where he gets all his ideas?
That makes all those science fiction writers, comic book writers, and popular-culture creators (movies, articles, YouTube, the internet, etc.) into philosophers as well.
That is OK. Lowering the bar makes a huge number of people into philosophers.
Including those of us on this thread.
I’ve read Asimov, Pohl, Vinge, etc. as well as Wikipedia and some articles from magazines.
I’m certainly now well qualified to criticize Bostrom, the unimaginative hack philosopher.
If you call guys like MacAskill and Bostrom philosophers, you have to add the adjectives hack, second-rate, derivative, and intellectually mindless.
“All he is doing is taking common ideas from Science Fiction, popular culture, and comic books and repeating them.” Really, a high school kid could do that.
raven says
If you look at where Bostrom gets all his ideas, they come from ancient sources, from before either he or I was born. Here is one example: R.U.R. was written in 1920.
The killer robots are going to take over and kill us all.
In the first work in which the word "robot" appeared, the robots ended up killing off the human species.
Asimov was writing his positronic brain robot stories in the 1940s.
As previously mentioned, I’m a philosopher (as of an hour ago) not a historian.
I'm sure that if you look at Bostrom's other ideas, such as eugenics, the Singularity, and the Matrix movie that he calls the Simulation(s), they are old ideas from someone else.
I'm going to have to go to the library and ransack their collection of graphic novels for my next paper on philosophy. I'm not sure where Darkseid and Apokolips (from the Superman comics, for those not up on the latest) fit in with our future, but I'm sure DC Comics has something to say about it.
raven says
The idea of autonomous robots is actually very old and widespread.
It dates back in one form or another to the ancient Greeks, Egyptians, and Hindus.
"When robot assassins hunted down their own makers in an ancient Indian legend"
StevoR says
The bronze giant Talos?
StevoR says
See, in connection with Europa:
https://en.wikipedia.org/wiki/Talos
Not to be confused with Europa, the Galilean moon of Jupiter named after the girl abducted by Zeus in the guise of a white bull… rather than the swan that fathered Pollux, Castor, Helen & Klytemnestra from a clutch of human / avian dino eggs.
Abe Drayton says
Bostrom deez nuts.
basementboi says
What a horrible comments section. You know, there are different ways to read philosophers, and one of the worst ways is reading them through your political lense. This is intellectual laziness, and it reminds me of those anti-vaxxers who do not want to engage with the actual science behind vaccination but rely on conspiracy theories.
Now, regarding Nick Bostrom's remark about blood-sucking insects: it is not racist, but it has an antisemitic connotation that is indeed hard to ignore. So, if you want to read his writing through a political lense, at least do it the right way.
Olivier Audet says
I know several smart people who have gravitated (mostly in the past, afaik, thankfully) around the whole Bostrom / LessWrong / Effective Altruism milieu, and I always wonder why. Bostrom as a philosopher isn't totally uninteresting, but his whole shtick seems predicated on an extremely dubious use of statistics to turn pure thought experiments into ethical imperatives. By the time you reach the end of one of his chains of reasoning, he has piled complete unknown on complete unknown, so high that it's impossible to evaluate whether most of his claims bear the slightest relationship to reality at all.
We are supposed to accept this based on the postulate that even an extremely improbable scenario must be taken seriously if its ethical consequences are serious enough (essentially: I can take a course of action that I know with 100% certainty will lead to a moral wrong if the wrong is negligible, but I should shy away from a course of action that has even a small chance of leading to an absolute ethical catastrophe; the more terrible the ethical consequences, the lower the acceptable odds, and the further I should go to avoid or prepare for the worst-case scenario). That's a decent heuristic in many cases. The problem is: what are the odds of his scenarios being true? For the most part, the answer as far as we know is ????, and he piles them on top of each other, so that in order to calculate the odds of the final outcome being correct, you have to multiply ???? by ????.
He solves that by saying “ah, yes, but the ethical outcome would be so catastrophically bad, that even if ???? turns out to be an infinitesimally small number, you should still act as if the scenario were true and seek to avoid it”. The truth, of course, is that you can come up with an infinite number of horrifying, ethically abysmal potential futures starting from credible next steps, and the reason why they should not be acted on isn’t that they’re insufficiently bad scenarios, it’s simply that they’re effectively fiction.
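To make the arithmetic behind that move concrete, here is a minimal sketch in Python; every number in it is invented purely for illustration (none of them comes from Bostrom or anyone else). It just shows how, once the stakes are stipulated to be astronomical, even a vanishingly small and essentially unknowable probability ends up dominating an expected-value comparison:

```python
# Toy illustration only: every figure here is made up for the sake of the
# example, not an estimate of any real probability or population.

def expected_moral_weight(probability, people_affected):
    """Expected number of people helped or harmed: probability times stakes."""
    return probability * people_affected

# A mundane, well-understood harm: essentially certain, but modest in scale.
mundane = expected_moral_weight(probability=1.0, people_affected=1_000)

# A speculative catastrophe: the true probability is really "????", but pick
# an absurdly small stand-in and stipulate astronomically high stakes
# (untold numbers of hypothetical future or simulated people).
speculative = expected_moral_weight(probability=1e-15, people_affected=10**30)

print(mundane)      # 1000.0
print(speculative)  # 1e+15 -- the invented scenario "outweighs" the real one
```

The toy comparison comes out the way it does purely because of the stipulated stakes: the "????" can be made as small as you like and the speculative scenario still wins, which is exactly the weakness described above.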
The most genuinely interesting way to read Bostrom is to take much of his work as an attempt to craft the most serious philosophical arguments possible about what seem like science-fiction scenarios – in other words, the most interesting way to read him is not to take him too seriously. Unfortunately, he takes himself very seriously and he is very invested in surrounding himself with people who take him very seriously.
KG says
basementboi@35,
Bostrom’s work has blindingly obvious political implications, so it is impossible to read his work intelligently without using a “political lens” (note: “lens” has no final “e”). And antisemitism is a form of racism, so your second paragraph is otiose.
Olivier Audet@36,
One “low probability but catastrophically bad” scenario Bostrom might consider is that his entire population of umptyzillion simulated people end up being horrifically tortured for trillions of years by an evil tyrant. Since the only way we can be certain of avoiding this is by bringing about human extinction (better sterilise the entire planet in case a technological species re-evolves), that’s where Bostrom should be focusing his efforts. Indeed, maybe he’s already reached that conclusion, and all the guff he comes out with is just to distract us from the real plan…
Olivier Audet says
KG@37
Ah, but he's decided that a life is better than no life, and because there's a potential sci-fi post-scarcity future where billions upon billions of humans can live fulfilling lives, we should base all kinds of ethical decisions today on the impact Bostrom and friends think those decisions will have on this possible future, based on a chain of events that is both highly uncertain and almost completely inscrutable. Entirely coincidentally, the ethical decisions we should make to get there are highly attractive to a certain number of plutocrats who are eager to promote Bostrom, his views, and his collaborators. If you're not willing to let the poor die today, there's a completely unknowable chance that billions of people won't get to live in post-scarcity sci-fi paradise, and that's on you!
(At first glance this seems like it could make for a highly convenient argument against abortion, but Bostrom concedes that we don’t currently live in post-scarcity paradise and family planning is probably overall an ethical good. Support for eugenics is, I guess, an unfortunate side effect.)
Olivier Audet says
KG@37
I guess I just repeated a lot of obvious stuff about Bostrom's arguments, though. I suppose my point is that there's a lot less money and prestige in telling rich people to, more or less, in the long term, kill themselves (I would say he could try quoting Keynes at them, but the rich don't like that either).