Are these people for real?


Carl Court/AFP/Getty Images

I’m afraid they are. Google sponsored a conference on “Effective Altruism”, which seems to be a code phrase designed to attract technoloons who think science fiction is reality, so the big worries we ought to have aren’t poverty or climate change or pandemics now, but rather, the danger of killer robots in the 25th century. They are very concerned about something they’ve labeled “existential risk”, which means we should be more concerned about the hypothetical existence of gigantic numbers of potential humans than about mere billions of people now. You have to believe them! They use math!

To hear effective altruists explain it, it comes down to simple math. About 108 billion people have lived to date, but if humanity lasts another 50 million years, and current trends hold, the total number of humans who will ever live is more like 3 quadrillion. Humans living during or before 2015 would thus make up only 0.0036 percent of all humans ever.
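
Their percentage does at least check out on a napkin. Here’s the arithmetic with their own round numbers (a quick sketch, nothing more):

    past_humans = 108e9        # people who have lived to date, per the article
    future_total = 3e15        # projected total humans ever, if we last 50 million years
    print(f"{past_humans / future_total:.4%}")   # 0.0036%, the quoted figure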

The numbers get even bigger when you consider — as X-risk advocates are wont to do — the possibility of interstellar travel. Nick Bostrom — the Oxford philosopher who popularized the concept of existential risk — estimates that about 10^54 human life-years (or 10^52 lives of 100 years each) could be in our future if we both master travel between solar systems and figure out how to emulate human brains in computers.

Even if we give this 10^54 estimate “a mere 1% chance of being correct,” Bostrom writes, “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”

Put another way: The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people. That argues, in the judgment of Bostrom and others, for prioritizing efforts to prevent human extinction above other endeavors. This is what X-risk obsessives mean when they claim ending world poverty would be a “rounding error.”
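
If you want to see the expected-value conjuring laid bare, here it is with the quoted numbers plugged in; this is only a sketch of the arithmetic as stated, not an endorsement of any of the inputs:

    future_lives = 1e52                   # Bostrom's 10^52 hundred-year lives
    credence = 0.01                       # "a mere 1% chance of being correct"
    risk_cut = 1e-9 * 1e-9 * 0.01         # a billionth of a billionth of a percentage point
    expected_lives = credence * future_lives * risk_cut
    benchmark = 100e9 * 1e9               # "a hundred billion times as much as a billion human lives"
    print(f"{expected_lives:.0e} vs {benchmark:.0e}")   # 1e+30 vs 1e+20

Pick exponents that big and the product dwarfs any real-world figure you compare it to, which is the entire trick.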

Wait. Turn those numbers around. If they want to save 10^52 future people, and there are roughly 10^10 people living now, doesn’t that mean that each child is the potential progenitor for 10^42 hypothetical, potential, future human beings? And that if we’re really taking the long view, with math, we should regard every child dead of malaria as the tragic, catastrophic death of 10^42 Futurians?
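
That division, for the record, using their own round numbers:

    future_people = 1e52   # their hypothetical future population
    people_now = 1e10      # roughly ten billion of us
    print(f"{future_people / people_now:.0e}")   # 1e+42 potential Futurians per living person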

No, not at all. That would require a collection of Silicon Valley millionaires and billionaires to think about an immediate problem, rather than ignoring pressing concerns to focus entirely on imaginary, unpredictable futures. So forget malaria, or coastal flooding, or environmental degradation — we need to deal with the rogue artificial intelligences.

What was most concerning was the vehemence with which AI worriers asserted the cause’s priority over other cause areas. For one thing, we have such profound uncertainty about AI — whether general intelligence is even possible, whether intelligence is really all a computer needs to take over society, whether artificial intelligence will have an independent will and agency the way humans do or whether it’ll just remain a tool, what it would mean to develop a “friendly” versus “malevolent” AI — that it’s hard to think of ways to tackle this problem today other than doing more AI research, which itself might increase the likelihood of the very apocalypse this camp frets over.

The common response I got to this was, “Yes, sure, but even if there’s a very, very, very small likelihood of us decreasing AI risk, that still trumps global poverty, because infinitesimally increasing the odds that 10^52 people in the future exist saves way more lives than poverty reduction ever could.”

AIs of the nature that concerns them don’t exist. This is an imaginary problem. There are good reasons to think it’s an overblown concern, and that it is an unpredictable issue that is unlikely to develop in an expected direction (which does not mean it’s safe, of course, but that building contingencies now to deal with situations that don’t exist and are likely to be completely different than you anticipate is really stupid). You could be building a Maginot line against one kind of threat and then discover that the AIs don’t really respect Belgian independence after all.

So rich people are throwing tens of millions of dollars at institutes making plans on how to fight the Killer Robots of the Future, rather than on real and immediate concerns, and they’re calling that Effective Altruism.

I call it madness. But it’s great profit for the prophets of AI, at least.

Comments

  1. says

    It may not be an entirely insane approach if you consider marginal utility. If a billion dollars are already being thrown at the problem of coastal flooding, then adding a million more probably won’t make a significant difference, regardless of how likely or severe the problem is. But if only a thousand dollars are being thrown at the problem of unfriendly AIs, then throwing a million dollars in that direction may not be the worst use of that million dollars, even if the odds of that problem actually being a problem are low.
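
    To put toy numbers on that: assume, purely for illustration, that the good a cause does grows with the log of its total funding (real charities follow no such neat curve):

        import math

        def extra_good(already_funded, added, scale=1e6):
            # toy model: "good done" grows with the log of total funding,
            # so each extra dollar matters less the more is already being spent
            return math.log1p((already_funded + added) / scale) - math.log1p(already_funded / scale)

        print(extra_good(1e9, 1e6))   # coastal flooding, $1B already funded: ~0.001 extra units of good
        print(extra_good(1e3, 1e6))   # unfriendly AI, $1K already funded: ~0.69 extra units of good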

    As an aside, when I’ve heard the term ‘effective altruism’, I’m usually pointed to the site GiveWell.org, whose top charitable cause has been buying bed-nets to fight malaria, in that the available data indicates that it provides the most saved-lives per donated dollar. There are worse things in the world than people trying to be altruistic effectively.

  2. says

    I am currently very worried about the problem of giant Minnesota mosquitos arming themselves with machine guns. There is currently no money being spent on this problem. Throwing a million dollars in the direction of my Institute Against Mosquito Machine Guns wouldn’t be the worst use of that money.

    Please inform Bill Gates, Elon Musk, and sundry venture capitalists of their opportunity to make a particularly effective donation to my cause.

  3. blf says

    Spending a dollar to work out how to stop the Vogons must be infinitely more effective than the current spend of zero, despite the fact Vogons are entirely imaginary.

  4. says

    PZ Myers wrote:

    > There is currently no money being spent on this problem. Throwing a million dollars in the direction of my Institute Against Mosquito Machine Guns wouldn’t be the worst use of that money.

    Would you be willing to make a Fermi estimate of the rough order of magnitude of likelihood that machine-gun-armed mosquitoes will be a significant problem? Do you think other people would think that your estimate is in at least generally the right ballpark? Would you be willing to trust the estimate of a mosquito expert as being more likely to be more accurate than your own estimate?

  5. aziraphale says

    What these people are missing is that other existential risks are also a threat to all those future lives. Giant asteroid impact, for instance, or runaway global warming.
    And..what makes them think we can expand unopposed through the entire galaxy? If interstellar travel is easy, won’t someone else have got there first?

  6. consciousness razor says

    PZ:

    I’m afraid they are. Google sponsored a conference on “Effective Altruism”, which seems to be a code phrase designed to attract technoloons who think science fiction is reality, so the big worries we ought to have aren’t poverty or climate change or pandemics now, but rather, the danger of killer robots in the 25th century.

    That may be what it seems like to you, although the article you cited doesn’t even give that impression, but effective altruism itself has nothing to do with any of that. The basic idea is to learn about (and care about) the best, most effective ways to do the most possible good. For example, you shouldn’t give to a charity that will waste lots of its money, or to a charity that won’t actually help anyone very much even if it “wasted” nothing relative to its goals, compared to giving to some other charitable cause. It’s such a simple and common sense idea that it really doesn’t need a name, but some people love using self-important buzzwords, with technoloons being some of the worst of that lot.

    Of course, if somebody is more worried about killer robots than poverty or climate change or pandemics now, that is a problem. However, it’s not a problem with the concept of effective altruism itself, since in fact that isn’t designed for technoloons and says nothing about killer robots — it’s a problem with that person’s (or that group’s) mistaken beliefs about reality. Even if you had the best system imaginable (which is not to say this style of thinking must offer that), starting with garbage as your input isn’t going to lead to good results, because the best system imaginable doesn’t need to be capable of turning your shit into gold. It needs to be helpful if used properly, not do magic tricks for you.

    The numbers get even bigger when you consider — as X-risk advocates are wont to do — the possibility of interstellar travel. Nick Bostrom — the Oxford philosopher who popularized the concept of existential risk — estimates that about 10^54 human life-years (or 10^52 lives of 100 years each) could be in our future if we both master travel between solar systems and figure out how to emulate human brains in computers.

    That’s fucking ludicrous. He should learn some astronomy and cosmology before using numbers like that. I have no idea how anybody (or anything to be concerned about) could survive many many trillions of years after the death of every last star in the observable universe, because that’s the kind of timescale he’s talking about. What’s going to be left then? Where will they go? Will our AI descendents just float around black holes, to keep themselves warm, safe, energized, etc.? How the fuck could that possibly work? And where the fuck did he even get an “estimate” like that in the first place?

  7. Rich Woods says

    And..what makes them think we can expand unopposed through the entire galaxy? If interstellar travel is easy, won’t someone else have got there first?

    Tch. You’re not taking into account HomSap Exceptionalism. It’s our manifest destiny to populate the galaxy, regardless of what any bunch of savages with their heathen fusion rockets who got there first might think.

  8. Pierce R. Butler says

    Worried about killer robots? Start a campaign to defund the Pentagon, Lockheed, Raytheon, et al.

    You’re welcome!

  9. Rich Woods says

    @consciousness razor #7:

    or to a charity that won’t actually help anyone very much even if it “wasted” nothing relative to its goals, compared to giving to some other charitable cause.

    Please help me out here by putting a value on a human life, or preferably a set adjusted for life expectancy in all the countries in the world. I could then do with a similar rating for human misery, so I know whether or not to prioritise a London housing charity over an orphanage in Uganda. I imagine everyone will then choose to act the same way as me, being rational agents, but hell, we’ll make it the world’s best damn orphanage!

  10. applehead says

    Well, would you look at that. Fancy meeting you here, DataPacRat. You may not know me, but I sure know you! Let me inform the rest of the commentariat.

    http://www.datapacrat.com/sketches/rationality.html

    This guy is part of the LessWrong crowd. You know, the STEMlords who grouped around their guru- I mean, thought leader Yudkowsky to stop the inevitable sooper-human AIs from becoming Skynet. And trying to sanewash their stark insanity by labelling it “rationality,” of course.

  11. says

    aziraphale wrote:

    > If interstellar travel is easy, won’t someone else have got there first?

    The more interesting question is, wouldn’t someone else have gotten /here/ first?

    consciousness razor wrote:

    > What’s going to be left then? Where will they go? Will our AI descendents just float around black holes, to keep themselves warm, safe, energized, etc.? How the fuck could that possibly work? And where the fuck did he even get an “estimate” like that in the first place?

    If you actually want the answers to those questions, I could try helping you figure them out. But if they’re merely rhetorical sneers, that would seem to be a bit of a futile effort on my part.

    Rich Woods wrote:

    > Please help me out here by putting a value on a human life

    You could do worse than to start at https://en.wikipedia.org/wiki/Quality-adjusted_life_year and trawling through the related links. You could also do worse than to look at http://www.givewell.org/international/top-charities/AMF#Costperlifesaved , with the figure of about $2800 per life saved.
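
    The usual move behind both of those links is to divide cost by quality-adjusted life-years gained. A rough sketch, where the $2800 is the AMF figure linked above and the QALY counts are placeholder guesses of mine:

        def cost_per_qaly(cost, qalys_gained):
            # lower is better: dollars per quality-adjusted life-year
            return cost / qalys_gained

        print(cost_per_qaly(2800, 35))   # bednets, assuming ~35 QALYs per averted child death: ~$80/QALY
        print(cost_per_qaly(50000, 5))   # a hypothetical rich-country intervention: $10,000/QALY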

  12. consciousness razor says

    I have no idea how anybody (or anything to be concerned about) could survive many many trillions of years after the death of every last star in the observable universe, because that’s the kind of timescale he’s talking about.

    Or, if Bostrom thinks “our population” (i.e. emulated human brains) will explode to an enormous size somehow, then maybe he’s implying they’ll only last a few centuries. But he’s usually a lot more optimistic than that, and it’s still just a bunch of speculative bullshit. There’s no way to know what he’s predicting or whether he’s predicting anything in particular.

  13. says

    applehead wrote:

    > Well, would you look at that. Fancy meeting you here, DataPacRat. You may not know me, but I sure know you!

    Pleased to make your acquaintance.

    > http://www.datapacrat.com/sketches/rationality.html

    Eh, it was the best I was able to come up with five years ago. I’d like to think I’ve learned a thing or two since then.

    > This guy is part of the LessWrong crowd.

    I’ll cop to that. Though LessWrong itself has become somewhat moribund in recent years.

    > You know, the STEMlords who grouped around their guru- I mean, thought leader Yudkowsky to stop the inevitable sooper-human AIs from becoming Skynet. And trying to sanewash their stark insanity by labelling it “rationality,” of course.

    Eh, it’s always easy to describe a group you don’t like disparagingly. I remember when Encyclopedia Dramatica alone was the main hub of putting the worst spin on every identifiable group, but it’s become somewhat of an internet pastime. I’ll cop to being a weirdo in all sorts of ways, but trying to tar me as having every negative attribute attributed to an online community of rather vague membership at best isn’t really the most productive rhetorical technique.

  14. tbtabby says

    When I hear “altruism,” I tend to think of someone donating to charity or helping a stranger in need. I do not think of the kind of person who responds to every single advance in robotics with a rant about how we’re one step closer to the ROBOT APOCALYPSE!!!!! It seems you can’t even build a robot that can tie shoes without some jackass saying that it’s going to be tying nooses for us when the ROBOT APOCALYPSE!!!!! comes.

    What gets me most about these robot doomsday preppers is that they put so much effort into thinking about the how of a robot apocalypse scenario, but they never seem to consider the why. They think that if robots achieve sentience, they will inevitably decide to kill all humans. Why will they decide this?

    Because they perceive humans as a threat to their existence? Why will they decide this? Because that’s what Skynet did? You are aware that the Terminator movies aren’t documentaries, right? If you’re worried about robots killing you because they perceive you as a threat to them, maybe you should endeavor NOT to be a threat to them, so that they won’t decide to kill you. Not ranting about the robot apocalypse all the time would help in that department. And why do these scenarios always assume that ALL robots will participate in the extermination of humanity? Who’s to say there won’t be disagreement among the robots? Some might even decide to fight on the side of humanity.

    Slave labor? This one is absurd on its face: they could just build their own robot slaves to do the same work more efficiently. And they would be much easier to maintain.

    Organic batteries? What I said about the Terminator movies also applies to the Matrix movies; even more so, because organic batteries are a terrible idea. The robots would expend far more energy keeping their organic batteries alive and healthy than they would get out of them.

    Personally, I think the most likely reason they’re so sure that a robot apocalypse a la Terminator is inevitable is the one Commander Vimes gave in Feet of Clay: because, deep down, they think humanity has it coming.

  15. consciousness razor says

    Rich Woods:

    Please help me out here by putting a value on a human life, or preferably a set adjusted for life expectancy in all the countries in the world. I could then do with a similar rating for human misery, so I know whether or not to prioritise a London housing charity over an orphanage in Uganda.

    I’m not sure I understand why I should help you with that, or what the problem is supposed to be. Is there a problem with wanting to be effective when you intend to do good things, as opposed to not attaining the good effects that you intended? We don’t need to do any simplistic utilitarian calculations here, because even though that’s a caricature it’s not the game we’re playing anyway. If there’s nothing to value about whether or not your actions have the kinds of effects you (ostensibly) wanted them to have, then I think you should try to explain to others (or just yourself to begin with) what’s supposed to be reasonable about that or how you came to that conclusion.

    I imagine everyone will then choose to act the same way as me, being rational agents, but hell, we’ll make it the world’s best damn orphanage!

    Why imagine that? Is everyone in the same circumstances as you are? If not, they’d have different reasons for acting differently. Or if nothing at all causes us to choose or will something, then that doesn’t even make sense on its own terms, and it’s certainly not any kind of rationality if it’s being stipulated that there are no reasons.

  16. Bob Foster says

    Let’s see, I survived the Cold War, Y2K and the 2012 Mayan apocalypse. The next really, really plausible apocalyptic event is Rampaging Autonomous Machines killing billions of us hairy, milk-sucking hominids . . . at some as yet to be determined future date. I think I’ll go back to watching gigantic men in silly uniforms and useless helmets giving themselves brain damage. Now that’s truly scary.

  17. says

    tbtabby wrote
    > They think that if robots achieve sentience, they will inevitably decide to kill all humans. Why will they decide this?

    Assuming that you actually wish an answer, and since someone else has introduced LessWrong into this thread: https://wiki.lesswrong.com/wiki/Paperclip_maximizer . Ie, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” If you want a more serious read, you could try https://intelligence.org/files/FormalizingConvergentGoals.pdf .

  18. Rich Woods says

    @consciousness razor #17:

    I’m not sure I understand why I should help you with that

    It was rhetorical, but never mind.

    Is there a problem with wanting to be effective when you intend to do good things, as opposed not attaining the good effects that you intended?

    Isn’t there a risk that supporting the more cost-effective charities will reduce support for charities who are tackling more complex problems and can’t quantify a clear or immediate cost benefit? Ultimately, they might be the more important ones to support in the first place, because they attempt to deal with problems others shy away from due to, say, difficulty or danger?

  19. says

    Oh yeah, they’re for real. Effective Altruism (or EA) is known for evaluating charities to find the ones which are most effective and cost-efficient (it turns out to be malaria nets), and for creating a social environment of giving. This is commendable. But inexplicably it’s mixed in with concern about existential risk, and inexplicably the ex-risk they care about is AI rather than, say, nuclear war or global warming.

    It makes more sense when you realize that EA arose from the LessWrong/Yudkowsky circle. Yudkowsky was a decent popularizer of rational/skeptical thinking practices, but then he also spent much time arguing that the rational person would donate as much money as possible to Yudkowsky’s own organization (currently known as MIRI), dedicated to the problem of AI. The great thing about EA is they realized that this isn’t necessarily the best use of money, but then they still think of it as within the range of reasonable views to hold.

  20. says

    I am not against the combination of effectiveness + altruism — in fact, that’s a really good idea. My problem is with the wacky X-risk nonsense, which is kind of the opposite of effectiveness.

  21. throwaway, butcher of tongues, mauler of metaphor says

    If Hitler arises once every 108 billion people, as is our current experience, then that means there will be approximately 9.26e+40 Hitlers that come about in our future. That means that there will be roughly 5.6e+47 people who will die due to a genocidal maniac. Clearly future Hitler is the biggest threat of all.

  22. says

    #5: yes. 1 in 100,000. A million dollars is a bargain.

    My wife would agree with me, as would other members of my institute, who will get a cut.

    I am an expert, as I live in Minnesota, am a biologist, and have fired a machine gun.

  23. says

    I suspect all this pontificating about existential threats that may never exist is mostly about ego: staking out your position in history as being one of the foremost thinkers on an issue that will one day confront humankind. It’s exceedingly human to seek immortality of some sort…

    I tend to agree that such worries about AI are ludicrously overblown. AI isn’t going to happen overnight. Reality rarely plays out like in the movies. I enjoyed Ex Machina last year, but only because I suspended disbelief that a single, maverick scientist could create sentient robots and keep his creations secret from the rest of the world.

  24. says

    PZ Myers wrote:

    > 1 in 100,000.

    An interesting answer.

    Do you believe that you have practiced such estimates often enough that your estimate is well-calibrated, instead of being little more than a placeholder for “I don’t think it’s going to happen”? Or, put another way, do you really think that machine-gun-armed mosquitos are ten times more likely to become a problem than your winning a one-in-a-million lottery?

    Or, put yet another way – why should I trust your estimates on such risks more than that of people who’ve spent more than five minutes thinking about them?

  25. consciousness razor says

    Isn’t there a risk that supporting the more cost-effective charities will reduce support for charities who are tackling more complex problems and can’t quantify a clear or immediate cost benefit?

    Well, yes, I take it that the risk is 1, given the way you put it. Anyway, I’ll grant for the sake of argument that it’s as certain as you need it to be.

    Would you say that it’s a true statement, that some charities should be supported more than others? You’re implying below that some could be more important, so that’s apparently not the issue. Whatever sort of decisions you make could also undermine more important ones, by reducing support for them. Since nobody can do everything no matter how they’re evaluating their situation, how is this a genuine problem which is also specific to those who are concerned about effectiveness?

    Ultimiately, they might be the more important ones to support in the first place, because they attempt to deal with problems others shy away from due to, say, difficulty or danger?

    Anything that’s not contradictory might be the case. Why suspect something like that, if the only condition you’ve mentioned so far is that they’re supposedly unquantifiable? And why’s that being associated with difficulty or danger?

    If we simply have no way at all to express, in reasonably definite terms, “the value of a human life” (or any other sort of life, or other things we value besides merely being alive), then I guess what you’re telling me is that “more cost-effective charities” don’t exist, because there are no actual and definite costs/benefits or effective strategies, as far as anyone can tell. If you want to bite that bullet, go ahead I guess, but if so then it’s still not clear why one kind of charity would be supported over another, since that would apply to all of them categorically. It just goes back to saying that some people are going to possibly make the wrong decisions — well, yes, they might, but they can’t (and you can’t) do it all anyway. And it isn’t a decent option to pick supporting no charities, instead of supporting what you have reasons to believe (perhaps mistakenly) are better charities. So, assuming you’re not telling us to despair and do nothing, what exactly is anyone supposed to do?

  26. says

    Giant asteroid impact, for instance

    Ironically, this is one of the few existential threats we’re very close to eliminating. Almost all the risk of being taken by surprise by a planet-killer has already been eliminated, and should a longer term threat be identified in the near future, I am confident we will pool our resources and eliminate it. The technology is already there.

    (An impact from a long period comet is harder to avert, given the shorter timeframe, but compared to, say, preventing the eruption of a supervolcano like Yellowstone, it’s still more doable.)

    And..what makes them think we can expand unopposed through the entire galaxy? If interstellar travel is easy, won’t someone else have got there first?

    At this stage, there is little reason to believe that historical rules of territorial conquest and acquisition as practised here on Earth apply on the galactic scale. A civilization capable of sweeping across the Milky Way is unlikely to be dependent on resources that are in short supply.

    I tend to believe that intelligent life is rare enough that any encounter with another intelligent species will be a cause for celebration. After all, when you’ve found out all you can about the Universe around you, what’s left to explore but the imagination and cultural resources created by other intelligent minds?

  27. Holms says

    DataPacRat,
    Do you commonly spend time fretting about a) the number of angels dancing on the head of a pin, and b) the potential health risks in foot injuries to the poor dears as a result of said activity?

  28. says

    I confess to being a horrible, horrible person, probably a mass murderer. How so, you ask? Well, I have not conceived and born all the people I could have, therefore denying millions if not billions of future people their existence. You do the math.
    I am, actually, pretty unconcerned about possible people in distant futures. I am pretty concerned about the actual existence of people living now and the existence of people in the near future. Yeah, even of potential people in the near future, which is one of the reasons I don’t spawn a bunch of people who will live in misery and poverty for the benefit of some potential far future offspring. Because future humanity will descend from the people living right now and personally I go for quality of life over quantity. So, yeah, reducing that number of people living in the year 40,002,016 by a few digits so that every child can get food, vaccines and school in 2020, that’s fine with me.

    P.S. SciFi movies are not data points about the likelihood of the robo-apocalypse

  29. says

    Holms wrote:

    > DataPacRat,
    > Do you commonly spend time fretting about a) the number of angels dancing on the head of a pin, and b) the potential health risks in foot injuries to the poor dears as a result of said activity?

    Angels, no. But I do consider risks that other people tend not to, and depending on my considerations, I occasionally take actions that other people would decry as not worth the time or effort. For example, our good host and I have starkly different opinions on the feasibility of cryonics; his opinion is at https://proxy.freethought.online/pharyngula/2015/04/16/how-to-live-forever/ , while after my own research, I’ve pegged the likelihood of somebody cryo-preserved with today’s techniques ever being revived as on the order of three to five percent. Another difference between my own approach and that taken by the majority of people is that I’m willing to take such estimates seriously, when I decide I have the best feasibly-available evidence to base those estimates on, and to act based on that information. To wit, I value my life as being worth more than $6000 per year, and so by running the simple cost-benefit analysis, have acted as seems appropriate.
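
    Spelled out, that cost-benefit analysis is nothing fancier than the following; the per-year framing is a simplification, and the annual cost is an assumed round figure rather than anyone’s quoted price:

        p_revival = 0.05                      # the optimistic end of my 3-5% estimate
        annual_cost = 300.0                   # assumed yearly cost of being signed up (hypothetical figure)
        break_even = annual_cost / p_revival  # what a year of continued life must be worth for the bet to pay
        print(break_even)                     # 6000.0 -- hence "worth more than $6000 per year"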

  30. eggmoidal says

    You know, if man survives long enough, he may invent a time travel machine. According to me, the development of a time travel machine could spell the end of the human race. So here’s the plan, you guys get to work on saving humanity and the planet, and I’ll get to work on anti-time travel technology. The goal is an effective time-travel dampening field. Please send me a billion dollars seed money. Otherwise 2 * 10**52 people could die (twice as many as will die of Full AI – trust me, my numbers don’t lie). You will know it is working as long as no time travelers pop up.

    Normally, one needs to read the back of a Dr. Bronner’s soap bottle to encounter such nonsense.

  31. chigau (違う) says

    DataPacRat
    Doing this
    <blockquote>paste copied text here</blockquote>
    Results in this

    paste copied text here

    It makes comments with quotes easier to read.

  32. says

    Giliell@#34:
    I confess to being a horrible, horrible person, probably a mass murderer. How so, you ask? Well, I have not conceived and born all the people I could have, therefore denying millions if not billions of future people their existence.

    You’ve also saved them from dying. And some of those deaths would have been horrible. And think of the fossil fuels and burritos they would have consumed. You are a mighty philanthropist!

  33. screechymonkey says

    These risk calculations remind me a little of the argument over whether religion, or the Bible specifically, is a source of morality. It sure looks like these folks are using invented numbers to justify the conclusion that they already wanted to reach, i.e. that the problems they find more fascinating are the ones that most need to be addressed.

  34. Menyambal - "Bah! Humbug." says

    Tacitus mentioned this in #28, but I think it needs more emphasis. The supporters of this stuff are really convinced that they are not only smarter than everyone alive now, but they are smarter than anyone who will ever live.

    Look, guys (I assume it is all guys), if each generation takes best care of their current generation, with some consideration of the future, we’ll probably muddle through. If we neglect current needs, we may well be depriving the future of the wonderful work of that kid currently taking brain damage in your local slum.

  35. consciousness razor says

    DataPacRat:

    For example, our good host and I have starkly different opinions on the feasibility of cryonics; his opinion is at https://proxy.freethought.online/pharyngula/2015/04/16/how-to-live-forever/ , while after my own research, I’ve pegged the likelihood of somebody cryo-preserved with today’s techniques ever being revived as on the order of three to five percent.

    Heh, skimming through that thread again is sort of amusing, but also kind of painful. You were the one who wanted to avoid death somehow, to the point where you claimed to be withholding judgment about whether literal immortality is possible. And you assumed that the moral thing to do consists of perpetuating your own individual existence, without regard to the consequences to anybody or anything else. It’s really amazing that people can be so confident (or at least obtuse) about such ridiculous fucking premises.

    No clue whose ass you pulled those numbers from, but I doubt you’ve factored in the chances that the rest of the planet will have no good reason to play along with your bullshit schemes. That does indeed lower the odds significantly. Nothing has changed, I guess… you’re still being a selfish bullshitter about it, yes? Aren’t you supposed to be disagreeing with doing what’s most likely to have a positive effect, as well as placing little or no value on altruism itself? Am I mistaken, or are you confused about the meaning of one or both concepts?

  36. says

    Wow. They seriously go out of their way to dismiss poverty, and “hey, it would be a bad thing to try and fix it!” Fuckin’ A.

  37. says

    chigau wrote:

    It makes comments with quotes easier to read.

    Thank you for the reminder. :)

    consciousness razor wrote:

    you’re still being a selfish bullshitter about it, yes? Aren’t you supposed to be disagreeing with doing what’s most likely to have a positive effect, as well as placing little or no value on altruism itself? Am I mistaken, or are you confused about the meaning of one or both concepts?

    I’m selfish, and want to avoid dying if that’s possible. I’m not a cartoon-villain moron, so I know I’ll die in a variety of ways without a society full of other people, so I have a vested interest in doing anything I can think of that I predict has a measurable chance of reducing X-risk. I’m human, and have a reasonably working set of mirror neurons, which lead me to empathize with other people’s suffering when it’s brought to my attention, which can lead me to trying to think of something to reduce such suffering. I have a finite income – for simplicity’s and privacy’s sakes, I’ll just describe it as “below minimum wage” – and so I have to decide what to spend my money on to best achieve my various conflicting goals.

    “Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes.”

  38. Holms says

    DataPacRat,
    If a religious apologist were to use the famous argument known as Pascal’s Wager in an attempt to get you to convert, how would you respond?

  39. Holms says

    Wait, below minimum wage… and spending $300 a year on uselessness? This right here is an example of the harm wrought by spurious ‘alternative therapies’ and related bilge: the mere expenditure of resources that are already stretched.

  40. says

    Holms wrote:

    DataPacRat,
    If a religious apologist were to use the famous argument known as Pascal’s Wager in an attempt to get you to convert, how would you respond?

    That there are a number of reasons why Pascal’s analysis of that problem fails to live up to the standards of even basic decision analysis or game theory. (Remember, just because a problem can be described with a 2×2 set of boxes, and one of the boxes has a low probability of a large reward, doesn’t mean the problem is actually a Pascal’s Wager.)

  41. says

    Giliell @ 34:

    I confess to being a horrible, horrible person, probably a mass murderer. How so, you ask? Well, I have not conceived and born all the people I could have, therefore denying millions if not billions of future people their existence.

    Uh oh. That makes me seriously evil then, what with deciding I never, ever wanted to contribute to the pool of humans.

  42. says

    PZ@0:

    I call it madness.

    I don’t know about madness, so much as a side-effect of having huge wealth. If you’re wealthy, there are few immediate problems that can’t be dealt with by throwing a small or negligible proportion of your wealth at them, so I posit that one can get used to not thinking of immediate problems at all. You have an accountant to take care of yearly bother with taxes, and a bod to take care of your investments on a day-to-day basis; you don’t worry about food, electricity, health (to any great degree), your kids’ education, or much of anything besides social status markers. So immediate problems become trivial to deal with, and the only ones worth bothering with are long-term ones. With a sufficiently long view, one might assume that even such things as climate change are too short-term to bother with, as someone else will certainly take care of that, or one can deal with it when it becomes too big to ignore.

    To anyone who has to occasionally worry about whether they are really going to be able to meet the grocery bill at the end of the month, it seems a delusional view, and is almost certainly a wasteful one, which they can only get away with because they have so much wealth.

  43. says

    Holms wrote:

    uselessness?

    You may feel that preparing for a circumstance that’s only 5% likely to happen is useless; I don’t. You may feel that what I’m spending my money on has less than a 5% chance of working; I disagree. If you really wish to discuss this matter, I’d suggest we arrange to meet in a forum more appropriate than a comment thread on a post about X-risks. If you don’t wish, well, don’t forget that one of the main values of a capitalist-style economy is to determine how valuable any given thing is by finding out what price people are willing to pay for it.

  44. jacksprocket says

    There used to be quick HTML formatters here, but they’ve gone. The world improves, everyone’s fluent in markup now, hence comments above.

    “while after my own research, I’ve pegged the likelihood of somebody cryo-preserved with today’s techniques ever being revived as on the order of three to five percent. ” wrote CPR, which reminded me of safety software guru Les Hatton’s take on risk assessments: for any action, the risk of destroying all humanity is tiny but positive. The consequences are infinite (for humanity). Therefore the risk associated with any action is infinite, so we shouldn’t do it. Now let’s get on with using science and common sense to write safer software.

    PZ might be interested in this from Hatton: it’s outside his area of expertise, and I’ve not the background to say whether it’s groundbreaking, commonplace, bollocks or plonk: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0125663

    Tell whoever runs the page to put the formatters back.

  45. jacksprocket says

    “If you don’t wish, well, don’t forget that one of the main values of a capitalist-style economy is to determine how valuable any given thing is by finding out what price people are willing to pay for it.”

    Look at the British housing market, and live forever.

  46. savant says

    Hi there DataPacRat (and all!). AI researcher here. I used to read a lot of Less Wrong and was very moved by a lot of what was written. Used to be right where you are, seems like. Then I got an education in information science and the scientific process.

    The AI Overlord ideas that motivate MIRI et al. are misguided. Their concept of a super-AI being built in someone’s basement is as likely as Precambrian rabbits. I’d be happy to discuss why, but won’t do so at the moment; I don’t want to fill up our non-AI, Crustacean Overlord’s message board with it unless people want to hear it. Basically, though, it seems like the fear of a super-AI coming from someone’s basement emerges from the thought process of “super-diseases can be made in a basement lab, why not AI?” Which is obviously wrong.

    This whole “Evil AI Future” that needs to be fought with present money is exactly the same as the “Evil Polluted Future,” the “Evil Smashed-In-By-Asteroid Future,” the “Evil Nuclear-Holocaust Future,” and as many others as one can name. Each of these Evil Hypothetical Futures has odds, and each of them impacts the same numbers of potential future persons. That’s how causality works. However, the odds of Evil Polluted Future destroying civilization before Evil AI Future converts us all into paperclips are very high. Frankly, the odds of most Evil Hypothetical Futures are higher than the Evil AI Future.

    We have lots of other problems that are bigger threats and are being poorly addressed. Let’s worry about those things before we worry about the scary robot sky-daddy, mm?

  47. says

    savant wrote:

    We have lots of other problems that are bigger threats and are being poorly addressed. Let’s worry about those things before we worry about the scary robot sky-daddy, mm?

    As a silly commercial once said: “Why not both?”

    (I hope you all will pardon me, as it’s time for me to go offline for a bunch of hours.)

  48. Anton Mates says

    Even if we give this 10^54 estimate “a mere 1% chance of being correct,” Bostrom writes, “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”

    Bostrom seems to believe that saving lives and making lives are equally good things–that we ought to save a child from dying not because its death will be unpleasant for it and traumatic for the people who care about it, but because a world with one additional human life in it is better than one without. If you don’t share that moral belief–and I don’t think most people do, even most utilitarians–then his expected value calculations are meaningless even if the math turns out to be right.

  49. savant says

    DataPacRat @ 54:

    Because Evil AI Future isn’t an isolated future, it’s part of an (effectively) infinite set of issues we could address that have poor, limited, or nil evidence. What about Evil California-Slides-Into-The-Sea-Megatsunami Future? Or Evil Galactic-Periodicity-Disaster Future? Or Evil Abrahamic-Sky-God-Returns Future? We can put odds on any and all of these, and Evil AI Future is in there.

    Meanwhile, we have massive evidence for Evil Pollution Future and Evil Asteroid Future staring us right in the face, with much greater odds.

    You’re elevating your theory out of this pseudo-infinite class of unlikely-but-still-possible Evil Futures, into the class of things we have massive evidence for. Why?

    Also, enjoy your offline time, either for sleep or work or whatever it is! I don’t mean any hostility in my comments. I’ve been a Less-Wrong reader, and still think there are some absolute gems in there – the ideas of treating rationality as a discipline are worth their weight in gold. But the God-AI Future just isn’t borne out by the realities of how information systems work. Developing a self-leveraging, self-editing, self-improving AI isn’t anywhere near as easy as they make it out to be.

  50. Ichthyic says

    It may not be an entirely insane approach if you consider marginal utility.

    spoken exactly like someone familiar with the term in the abstract.

    now go and figure out where a million dollars spent on ameliorating the effects of coastal flooding would, in fact, MAKE A FUCKING DIFFERENCE.

    fool.

  51. Holms says

    You may feel that preparing for a circumstance that’s only 5% likely to happen is useless; I don’t. You may feel that what I’m spending my money on has less than a 5% chance of working; I disagree.

    The post you cite earlier in the thread already has enough information to determine that your 5% chance is bunk, and the real answer is 0%. Did the fact that tissues are damaged a mere 20 microns deep into the brain not hint at this?

    And my earlier question was not answered. I did not ask you if you thought your reasoning amounted to Pascal’s Wager, I asked you how you’d respond to it.

  52. Ichthyic says

    As a silly commercial once said: “Why not both?”

    ah, so advertising slogans speak your mind. figures.

    when people say things, for real, like “we can think and work on both things”, those typically are indeed BOTH, extant, REAL issues.

    this is not a real issue to divide one’s time or energy or money into.

    it is a wasteful distraction.

    I’m sorry you cannot see that, but you really need to.

  53. Jake Harban says

    Even if we give this 10^54 estimate “a mere 1% chance of being correct,” Bostrom writes, “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”

    If your odds of winning the lottery were “a mere 1%” then spending all of your money on lottery tickets would be a guaranteed path to wealth.
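
    The expected-value arithmetic behind the joke, with a made-up ticket price and jackpot:

        p_win, jackpot, ticket = 0.01, 100e6, 2.00   # made-up round numbers
        print(p_win * jackpot - ticket)              # ~999,998 expected profit per $2 ticket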

  54. Ichthyic says

    Are these people for real?

    yes, they unfortunately are, and are an all too common result of complete isolation from the world around them.

    they think they are connected… because internet… but we are still animals; if you have not seen something with your own eyes, experienced it personally, it lets you drift into all sorts of denial and delusions.

    we are a people that have grown up entirely exposed to lies in the form of advertising; our exposures entirely limited to a very small world and worldview.

    it does not surprise me that someone can become a creationist, devoted to fighting against the paper tiger of “secularism”, nor does it surprise me to find people who want to fight against Asimov’s robotic doom.

    it’s a fundamental disconnect, that our privilege affords us to indulge in.

  55. Ichthyic says

    If your odds of winning the lottery were “a mere 1%” then spending all of your money on lottery tickets would be a guaranteed path to wealth.

    even with the real odds, it’s a guaranteed path to wealth.
    …just not to a guarantee to any one specific person.

    kinda what keeps it going though; the fact that indeed, someone is guaranteed to become instantly wealthy….

  56. Anton Mates says

    The common response I got to this was, “Yes, sure, but even if there’s a very, very, very small likelihood of us decreasing AI risk, that still trumps global poverty, because infinitesimally increasing the odds that 10^52 people in the future exist saves way more lives than poverty reduction ever could.”

    Why do the X-risk folks even want to decrease AI risk? If these hypothetical AIs are smarter, tougher and/or more compact than humans, then they can probably populate the universe more quickly and densely than we can, which means that the hypothetical future can accommodate a zillion times more of them than us. Even if we like humans more than AIs, shouldn’t we be willing to sacrifice a mere 10^52 future human lives for the sake of *waves hands around mathily* 10^75 future AI lives? We should just let them replace us!

    Or is there some additional rule that the life of a hypothetical distant human descendant a billion-years in the future is infinity times more valuable than the life of a hypothetical distant AI descendant?

  57. blf says

    it’s a fundamental disconnect, that our privilege affords us to indulge in

    Indeed. May I suggest another part of the problem is, to use the old adage, “if all you have is a hammer, every problem is a nail”? To wit: The individuals worrying about this nonsense are all(? mostly?) in the computing industry. So am I(at least by profession if not education). To some of such people everything looks like a computer(usually digital) or computation / algorithm(usually Turing-complete), limited by such things as energy(power), available input data, “complexity”, available funding, time, and other such factors, frequently(?) seen as mundane. Hence, with sufficient blah blah blah, including money and time, it(whatever “it” is) can be “solved”.

  58. blf says

    *waves hands around mathily*

    video please

    Next yer going to say you don’t believe in “Proof by repeating repetition”. </snark>

  59. says

    I’ve pegged the likelihood of somebody cryo-preserved with today’s techniques ever being revived as on the order of three to five percent.

    Oh, really? And this research perhaps involved looking at all the damage done by cryopreservation, which sets the likelihood at 0%, and then added a fudge factor of 3-5% with the I-wish-it-were-so parameter?

  60. zibble says

    @16 tbtabby
    Deep down, it’s because they think humans have souls and robots don’t. Which is ironic, considering how fucking “rational” they consider themselves.

    You notice countries that don’t believe in souls (like Japan) see robots as sympathetic children, not merciless killing machines.

  61. brett says

    So if I approach one of these Bostrom*-ite Effective Altruists on the street, and say that unless they give me $10,000 cash and a hot towel all of humanity will be rendered extinct by spider-sized AI in 2500, they’re obligated to give it to me, right? After all, if there’s even 0.00000000001% chance that not doing so will lead to the End of Future Humanity, then they’re obligated to pay up.

    Ugh. The sad thing is that when you get away from that narcissistic, AI-obsessed Silicon Valley garbage, Effective Altruism is a good idea. It’s about trying to find the most cost-effective way to do charity, in a way that most helps the people receiving it rather than just making the people giving charitable aid feel good (or whatever happens to be trendy at the time). That’s why it led to things like “just give poor people some money” versus old shoes and t-shirts, etc.

    By the way, I thought your response was brilliant. Sure, future humanity matters – but if you consider that present humanity is a necessary condition for future humanity to exist, then existing humanity is vastly more important than far future humans, and should be weighted as such.

    * Double “ugh” and unsurprising that Bostrom came up. He’s done a ton of this stuff – one of the things he last got notoriety for was in publicly hoping that spacecraft found no evidence of life on Mars, because finding life on Mars would make it more likely that intelligent life was out there, and that said intelligent life might be tempted to go full Killing Star on us.

  62. says

    Brett @ 71:

    Sure, future humanity matters – but if you consider that present humanity is a necessary condition for future humanity to exist, then existing humanity is vastly more important than far future humans, and should be weighted as such.

    The only way to secure the future is to pay attention to the present. Right now, we have a lot of people ignoring the present and its myriad problems, which is bad enough. No one should decide to be utterly consumed by non-existent issues which might happen in some future somewhere.

  63. neverjaunty says

    may not be the worst use of that million dollars

    So an action is sensible and justifiable if it ‘may not be the worst’ under the circumstances? Truly, DPR, that’s your justification for throwing vast sums of money at nonsense: that there are even stupider ways to spend that money? By this standard, giving money to Ken Ham or to support research on the Hollow Earth is pretty much okay and certainly shouldn’t be criticized, since at least that money isn’t going to, say, Stormfront.

  64. Ichthyic says

    Next yer going to say you don’t believe in “Proof by repeating repetition”.

    No, I just want to see if it looked like

    *jazz hands*

  65. Ichthyic says

    then added a fudge factor of 3-5% with the I-wish-it-were-so parameter?

    does that mean there is a 3-5% error margin on the IWIWS parameter… or that the IWIWS parameter is the measure of the error margin itself?

    I’d love to see this all fleshed out. on a chalkboard. to mix a metaphor.

  66. blf says

    Re@76: None of my math professors — or at least none of their “proof by hand-waving” — were that co-ordinated, and often (or so it seemed at the time (as I now recall)) made less sense. Some, usually, whilst mumbling and standing in front of the chalkboard so you couldn’t see the illegible handwriting, also immediately erased the chalkboard to maximize the incomprehensibility. Also, background music was notably missing (it would have presumably helped staying awake at times!).

  67. Ichthyic says

    standing in front of the chalkboard so you couldn’t see the illegible handwriting, also immediately erased the chalkboard to maximize the incomprehensibility

    Brings back memories. My advanced matrix algebra instructor was just like that. He even APOLOGIZED ahead of time, on the very first day of class, and suggested that anything we missed we could just pick up from the textbook anyway….

    I got a lot of sleep in that class.

  68. blf says

    Re@78: Ah, but if he didn’t also mumble (and basically speak only whilst facing the chalkboard), you didn’t “enjoy” the full effect… I had one, my (elderly) Fields Medal-winning Group Theory instructor, whom all that is based on: Mumbling whilst facing and immediately erasing the illegible handwriting on the chalkboard. He would literally erase the board with his left hand whilst stabbing at it with an abused piece of chalk in his right hand, facing the board and mumbling, whilst his body covered most of whatever it was he wrote. I don’t recall him apologizing, albeit I was warned ahead of time by previously-tormented students…

    (Some other professors had some of the traits, but this guy literally had them all…)

  69. komarov says

    Re: #76

    Good grief, that chap looks disgustingly cheerful and a little creepy at the same time. In fact, that broad, fixed grin reminded me of the Terminator practicing his smile. Coincidence? I think not! The Jazz hands are probably a diversion, a trick to make you think he’s unarmed.

  70. unclefrogy says

    Fuck!!!

    Are these people for real?
    yes, they unfortunately are, and are an all too common result of complete isolation from the world around them.
    they think they are connected… because internet… but we are still animals; if you have not seen something with your own eyes, experienced it personally, it lets you drift into all sorts of denial and delusions.
    we are a people that have grown up entirely exposed to lies in the form of advertising; our exposures entirely limited to a very small world and worldview.
    it does not surprise me that someone can become a creationist, devoted to fighting against the paper tiger of “secularism”, nor does it surprise me to find people who want to fight against Asimov’s robotic doom.
    it’s a fundamental disconnect, that our privilege affords us to indulge in.

  71. says

    The particular problem is the usual ones in arguing with people from the Effective Altruism subculture:

    1. equivocation between “Effective Altruism” the brand name, and altruism that is effective. Whatever that actually means at object level. Objections to the first are answered with a call to the second.

    (I mean, who wouldn’t want their altruism to be effective! Whatever that actually means at object level. As apple-pie brand names go, it’s a big winner over the same people’s previous one “rationality”, which outsiders early on correctly ascertained meant annoying and intrusive nerds who behaved as if they’d never met an actual person while talking about their love of hypothetical ones.)

    2. equivocation between EA aspirations and EA reality: nirvana fallacy as rhetorical tactic. (We aspire to spectacularly effective interventions! let us list them here for you! You evil person, not giving us a complete pass for our aspirations! And never mind e.g. our dismal failure to laugh MIRI out of the movement.) A whacking dose of No True Scotsman.

    On contemplation, I think my true objection is that I have no faith that people with such a remarkable array of terrible ideas are going to somehow come up with good ones this time around, or not smuggle in their terrible ideas with any good ones they accidentally have. It’s certainly philosophically possible and worth examining. But.

    The Dylan Matthews article is a useful rant from an insider, detailing many of the bad ideas on the ground. “I really do believe that effective altruism could be the last social movement we ever need.“ OH FUCKING REALLY. Remember that Dylan Matthews is one of the ones who wants his altruism to be effective, and correctly recognises that the Make A Wish foundation is more effective on utilitarian grounds than MIRI: it has aims that are possible and that it achieves routinely.

    I will heartily concur that examining the charities you donate to is an extremely good idea. Peter Singer’s basic idea – that as first worlders we are stupidly rich and have a moral obligation to use that wealth for the general welfare – is sound. (Even if he completely fails to understand that the motivation and ability to do the level 0 actual running of a charity aren’t fungible and can’t necessarily just be bought in.) It’s a better idea for rich people to donate to charity instead of buying yachts. GiveWell are actually pretty good as a charity evaluator IMO, for what that’s worth. Altruism that is effective is excellent. The EA crowd are actually very much into mosquito nets.

    But when you have “race realists” (sorry, “human biodiversity” advocates) that are loud EAs, or Gamergaters that are loud EAs, or people who think MIRI isn’t a rabbit hole that are loud EAs, I have shockingly little hope that if these people were effective the results would be good at all.

    So don’t ever let Effective Altruism subculture members respond with general claims about altruism-that-is-effective, when they’re really talking about giving money to MIRI. Money that MIRI literally wants to take from the same pool that would be going to e.g. mosquito nets.

  72. Ichthyic says

    (Some other professors had some of the traits, but this guy literally had them all…)

    probably brought in a lot of grant money to his department though.

  73. Maya says

    Sadly, they are for real, and they reinvented Abrahamic religion to justify their focus on AI. See Roko’s Basilisk, the idea that failing to do everything in your power to bring about friendly AI will result in the AI torturing an infinite number of simulations of you for all eternity, where “everything” for most people is defined as donating to the Singularity Institute (now the Machine Intelligence Research Institute). While they now claim that they never believed in Roko’s Basilisk, they still practice Singularitarian millennialism.

    Part of the stock answer they give to “why don’t we do X instead of Friendly AI” is that Friendly AI will fix all problems for us, so we don’t need to worry about X if we rush to create a Friendly AI. The second part is that if “we do X instead of Friendly AI”, then someone will create the inevitable paperclip maximizer or worse, because they believe that the creation of super-intelligent AI is imminent. It is a lot like arguing with Christian dispensational premillennialists about climate change or pollution mitigation.

    The LessWrongians are a creepy bunch and I had a strange recruiting moment with them around 2009. (They called themselves Bayesian Rationalists at the time.) I was approached shortly before “Less Wrong” was a thing, being directed to Yudkowsky’s Bayes Theorem posts and “Overcoming Bias”, as well as a few other things. The whole experience was creepy and strange, especially since the Anonymous vs Scientology stuff was happening at the same time, and there was just a general awareness of cultish behavior in the media.

    They had a bunch of arguments against traditional charities then, but they mostly boiled down to the ‘charity about X is really about sending signal Y; if you were “rational” about X, you would do Z instead’, mixed with the occasional just-so story.

    While typing this up, I skimmed back through the “Overcoming Bias” charity tag. I had forgotten about the MRA stuff, “poor people do smile” (a post about post-Singularity poverty), and the claim that effective charity means donating to cryonics organizations.

  74. says

    It’s not an imaginary problem, PZ.
    It’s an imaginary solution! Come on, name a single problem that wouldn’t be solved with the removal of all humans and subsequent rise of machine intelligence.

  75. Ichthyic says

    name a single problem that wouldn’t be solved with the removal of all humans and subsequent rise of machine intelligence.

    peak oil.

  76. unclefrogy says

    double fuck! I am using a new browser and I clicked post instead of preview, sorry, and then my connection crashed!!!

    we will still be humans in 50 million years? how the fuck does anyone know that?
    that gigantic number of people is what, actually? there is no way that the carrying capacity of the earth at any given time is going to be that big, so if that is the total number of people that will have lived until then, plus those who are then still alive, it means next to nothing: most of that number will have already lived and died, including all of us who are alive now!

    If the idea is to ensure that humans will survive in perpetuity, then we should do all we can to ensure that the carrying capacity of the earth is preserved as much as possible. That would suggest that we put a major effort into preserving the environment. I would think that everything we do should at the least have the effect of preserving what biodiversity we currently enjoy, if not increasing it. So that even if, at some magical time, the AI takes over, there will be spaces, niches, for humans to flourish unmolested.
    Of course, if there is a melding of humanity and machine, a real upgrade, there will be no need for a biosphere or an oxygen-rich atmosphere at all;
    in fact the machines might last much longer without free oxygen.
    I also do not understand the needs of these “machines”. To survive, OK, and to expand, maybe, but to multiply?
    uncle frogy

  77. A. Noyd says

    Anton Mates (#55)

    Bostrom seems to believe that saving lives and making lives are equally good things–that we ought to save a child from dying not because its death will be unpleasant for it and traumatic for the people who care about it, but because a world with one additional human life in it is better than one without. If you don’t share that moral belief […] then his expected value calculations are meaningless […].

    Exactly this. These wankers remind me a lot of forced-birthers. Like, they both seem to take it for granted that basic human existence is necessarily superior to nonexistence. And they both care more about the yet-to-exist than about the already-existent, probably because the yet-to-exist cannot disappoint them morally.

  78. slithey tove (twas brillig (stevem)) says

    unclefrogy wrote @88:

    we will still be humans in 50 million years? how the fuck does anyone know that?

    good question. my suspicion is that our medical abilities will expand our genetic diversity way beyond what “natural forces” would normally allow. I suspect that we as a species will become so genetically diverse and “racially” blended that trying to categorize us into races will be overwhelming, leading to a new definition of the word species. At that point it would be hard to maintain the label of Homo sapiens, except as an historical artifact.
    It seems I’m rambling with half-understood concepts; even so, the concept can be summed up as “drift”. So I too doubt 50 megayears will leave us “humans”; we’ll have to refer to ourselves simply as “people”, leaving the H word as an artifact.
    I.M.O
    F.W.I.W

  79. says

    There are good reasons to think it’s an overblown concern, and that it is an unpredictable issue that is unlikely to develop in an expected direction (which does not mean it’s safe, of course, but that building contingencies now to deal with situations that don’t exist and are likely to be completely different than you anticipate is really stupid). You could be building a Maginot line against one kind of threat and then discover that the AIs don’t really respect Belgian independence after all.

    We disagree about how important a concern this is, but we’d probably need to have a longer and more detailed argument to sort out exactly where we’re diverging. There are plenty of AI researchers on both sides of a lot of these questions, so it’s definitely not a settled debate.

    Regarding ‘it’s impossible to make serious progress on this problem before AI is further along’: I think this is a reasonable objection, and any safety research focused on a notional future technology needs to make a strong case for why they’re likely to be able to make any difference at all. The historical track record for people trying to predict and prepare for technologies decades in advance is not good — admittedly, partly because it’s rare for people to even try. Some of the few examples I know of in CS include covert channel communication and quantum cryptography (https://intelligence.org/2014/04/12/jonathan-millen/, https://intelligence.org/2014/06/23/roger-schell/). (Though a skeptic might respond that we also don’t know how quantum computing and quantum cryptography research will pan out.)

    There are a few reasons people are starting to work on these problems. One is that there’s sometimes overlap between safety work that would be useful for futuristic smarter-than-human AI systems, and safety work that will be useful for present-day or near-future AI (http://ww2.kqed.org/news/2015/10/27/stuart-russell-on-a-i-and-how-moral-philosophy-will-be-big-business). A second is that this looks like an extremely hard problem, so it may take decades of capacity-building, strategic research, and clarification of the basic ideas involved just to put ourselves in a position where we’re able to address the problem later (https://80000hours.org/2015/12/even-if-we-cant-lower-catastrophic-risks-now-we-should-do-something-now-so-we-can-do-more-later/). A third reason is that there appear to be some very generally applicable theoretical lines of research we can work on today that would help us better analyze future generations of AI systems, whatever form they end up taking; and civilization has a reasonably good track record of making early progress in mathematics and theoretical computer science (https://intelligence.org/2015/07/27/miris-approach/). Those kinds of considerations led into the creation of the robust and beneficial AI research priorities report following up on the January Puerto Rico conference, which includes plausible areas for further study both on the near-term problems and the long-term ones (http://futureoflife.org/data/documents/research_priorities.pdf).

    So, overall, I’d say this is an important problem to work on. Like Dylan Matthews, I think we shouldn’t work on it to the exclusion of helping people who are in need today. If ordinary anti-poverty, animal-welfare, etc. effective altruists can work together with the various groups interested in basic research and emerging technologies, then they should view each other as allies rather than as competitors. If you think of activism, philanthropy, and research as zero-sum games, you’re going to get exhausted endlessly fighting everyone whose priorities are slightly different from your own, when there are plenty of opportunities to fight together.

    I don’t think this problem is important because ‘it’s astronomically unlikely, but the stakes are so high that doesn’t matter.’ I think the AI problem is important because it is likely, at least if we don’t do much about it. (Luckily, a lot of people are in the process of getting together to do things about it!) But I made these points already on https://intelligence.org/2015/08/28/ai-and-effective-altruism/, so I won’t rehash the whole thing.

  80. Ichthyic says

    this is an important problem to work on

    depends entirely on your definition of “this”… which is still very vague, even in your missive, let alone the things the AI phobics are whinging about.

    then, to top it off, you agree that there is a reasonable objection in that there is insufficient information to even suggest legitimate problems…

    then go on to talk as if there already ARE legitimate problems.

    sorry… you seem very confused from my perspective.

  81. says

    Ichthyic, the way I usually characterize the problem is to focus on the development of systems that can form detailed, accurate models of the world, and can efficiently search through the space of policies that are likely to produce a given outcome according to the model. This can be a recommender system that tells other agents what policies to adopt, or it can execute the policies itself.
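
    (Purely as illustration, and not anything from MIRI or this thread: below is a toy sketch of that “build a world model, then search policies against it” loop. The thermostat policies, the model, and the goal are all invented for the example.)

        # Hypothetical toy sketch: a "world model" that predicts outcomes, and a
        # search over candidate policies for whichever predicted outcome best
        # matches the requested one. All names here are invented for illustration.

        def world_model(temperature, policy):
            """Predict the resulting temperature if a given policy is followed."""
            effects = {"thermostat_up": +2, "thermostat_down": -2, "do_nothing": 0}
            return temperature + effects[policy]

        def choose_policy(temperature, goal_temperature):
            """Search the (tiny) policy space for the action whose predicted
            outcome is closest to the goal, according to the model."""
            policies = ["thermostat_up", "thermostat_down", "do_nothing"]
            return min(policies,
                       key=lambda p: abs(world_model(temperature, p) - goal_temperature))

        print(choose_policy(temperature=18, goal_temperature=21))  # "thermostat_up"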

    Stuart Russell (co-author of the leading undergraduate textbook in AI) characterizes the problem a little more narrowly, in terms of decision-making agents (http://edge.org/conversation/the-myth-of-ai#26015):

    The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:

    1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.

    2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

    A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.

    This is still informal — but if we knew how to formalize the idea of smarter-than-human software, we’d already know how to write that software. There’s always going to be some degree of imprecision in describing problems with technologies that may be decades away.
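
    (Again purely as illustration: a minimal toy version of Russell’s point about unconstrained variables. The designer’s utility below only mentions paperclip output, so a brute-force search over plans drives the unmentioned resource_use variable to its extreme. The production model and every name in it are invented for the example.)

        # Hypothetical toy version of Russell's point: the designer's utility only
        # "sees" paperclip output, so optimization freely drives the unpenalized
        # resource_use variable to its extreme. All names invented for illustration.
        import itertools

        def world_model(machines, resource_use):
            """Toy production model: predicted paperclips for a given plan."""
            return machines * resource_use

        def utility(plan):
            """The designer only wrote down 'more paperclips is better'."""
            return world_model(*plan)

        plans = itertools.product(range(11), range(101))  # (machines, resource_use)
        best = max(plans, key=utility)
        print(best)  # (10, 100): resource_use lands at its maximum, because nothing
                     # in the utility function says we care about limiting it.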

  82. Anton Mates says

    slithey tove @90,

    my suspicion is that our medical abilities will expand our genetic diversity way beyond “natural forces” would normally allow.

    And “natural forces” alone produced the entire clade of humans, other apes, and monkeys from a single primate lineage within the last 50 million years. If our descendants survive the next 50 million years, there’s absolutely no way they’re still going to be Homo sapiens, even without any deliberate genetic engineering. They’ll probably have about as much in common with us as a marmoset does.

    (I mean, marmosets are cool. I just don’t think it’s that productive to weigh the life of a far-future space marmoset against the life of a far-future AI, or the life of a far-future ant swarm infested with sentient fungus, or whatever the hell we’re dreaming up today.)

  83. Ichthyic says

    There’s always going to be some degree of imprecision in describing problems with technologies that may be decades away.

    sorry, but all you keep doing is reinforcing that this is not actually a problem that needs solving.

    you really can’t see this?

    wow.

  84. says

    Ichthyic: In general, even the best policies will have at least some evidence against them, and even the worst policies will have at least some evidence in their favor. Weighing up the evidence on both sides of a disagreement and coming to a good conclusion requires acknowledging when the other side’s view has merit, even if I think my own side’s view has more merit.

    AI researchers working on this issue recognize it’s a really hard issue to study. The ones who study the issue anyway do so because they think our current uncertainty can be at least somewhat reduced, and that this is an important area to learn more about. It would be nice if we had more certainty, but we shouldn’t avoid investigating a domain just because we’re currently ignorant about it.

    I find it very telling that those people apparently think that “more intelligent and more powerful” = “evil”.

    The argument isn’t “more intelligent and more powerful” = “evil”, but rather “it’s easier to build something amoral than something moral” plus “more intelligent and more powerful and amoral = dangerous”. Russell’s way of putting this in the quote above is: “1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down. 2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.”

  85. dianne says

    I am currently very worried about the problem of giant Minnesota mosquitos arming themselves with machine guns.

    Giving Minnesota mosquitos machine guns seems…redundant. Actually, giant Minnesota mosquitos with machine guns seems a bit less scary than the current situation involving millions of tiny, hard to hit, Minnesota mosquitos armed with skin piercing bio-needles and FSM knows what diseases. Bring on the giant mosquitos!

  86. dianne says

    “it’s easier to build something amoral than something moral”

    Is it? I mean, is it really easier to build something intelligent and amoral than something intelligent and moral? As far as I know, every intelligent being created by humanity thus far has been moral, in some sense, though the morality of some of them has been pretty off. I would hypothesize that morality is something that just kind of happens after a certain level of complexity of thought, and that it would be hard to stop it from happening if one built a sufficiently advanced AI. In which case, this whole argument is not only invalid but also potentially ignores the actual “threat” of poorly socialized AIs. (Admittedly, that is also a strictly imaginary risk, and probably one not much more likely than the giant machine-gun-toting mosquito problem.)

  87. John Morales says

    Rob Bensinger:

    1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.
    2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

    Bah.

    Those contentions presuppose values and purpose. Not all intelligences need either.

    (To the first, leaving aside that it’s a fallacy of composition, de gustibus non est disputandum; to the second, teleology is for constructs)

  88. says

    This presumes that MIRI’s work – what little of it there is – doesn’t make actual real-life professional AI researchers want to punch walls. As it turns out, it does.

    Note the huge list of scientists they cite as being onside? Almost none are working AI researchers.

    Also, remember that MIRI literally wants to take the mosquito net money. That’s worth repeating.

  89. Anton Mates says

    Rob Bensinger,

    plus “more intelligent and more powerful and amoral = dangerous”.

    I don’t really see why. Humans are moral, but that hasn’t stopped us from perpetrating wars, genocides, slavery, etc., as well as being one of the most environmentally destructive forces in the history of the planet. Often our moral feelings have encouraged us to do these things. I’m not sure that morality would be particularly helpful in preventing an AI from going down the same route.

    2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

    But humans are intelligent, and they don’t do this. I mean, we’d vaguely like to live as long as possible and be as smart and powerful as possible, but we don’t commit nearly as much effort to these goals as we could. By and large, if we feel like sitting on the couch and eating a pizza tonight, that’s what we do.

    Internally, we have no supremely important assigned task; we just have a big bundle of drives and preferences that are constantly fluctuating in relative significance. Presumably an AI could operate in a similar fashion.

  90. Amphiox says

    we will still be humans in 50 million years? how the fuck does anyone know that?

    Cladistically, they’ll either be human, or they’ll be extinct.

  91. EnlightenmentLiberal says

    The number of future humans who will never exist if humans go extinct

    I give no fucks, and neither should anyone else. The non-existence of some potential person in the future is a bullshit concern. It’s bullshit here, just like it’s bullshit concerning the topic of abortion and birth control. I don’t care if humanity goes extinct. I might care if the last generation dies horribly, such as from a meteor impact, but I give no fucks about the “person” who was never born.

  92. cubist says

    sez David Gerard @103: “Also. remember that MIRI literally wants to take the mosquito net money. That’s worth repeating.”
    Hold it. What, exactly, are you accusing the MIRI folks of being guilty of when you say they want to “take” money away from mosquito netting? I’d been under the impression that MIRI is only tryna persuade people that the stuff they want to do is worth giving money to; if that is what you mean when you say MIRI “literally wants to take” money from mosquito netting, well, it would make just as much sense to say that any other charitable cause whatsoever which engages in “hey, here’s why you should give money to us” behavior is a cause which “literally wants to take” money away from mosquito netting.

    Of course, if that actually was what you meant by “take”, that would be a transparently silly abuse of the word “take”. So you must have meant something else, and it’d be nice if you could unpack your meaning. I repeat: What, exactly, are you accusing the MIRI folks of being guilty of when you say they want to “take” money away from mosquito netting?

  93. A. Noyd says

    cubist (#107)

    if that is what you mean when you say MIRI “literally wants to take” money from mosquito netting, well, it would make just as much sense to say that any other charitable cause whatsoever which engages in “hey, here’s why you should give money to us” behavior is a cause which “literally wants to take” money away from mosquito netting.

    The mosquito netting best fulfills the MIRI people’s own criteria of “effective altruism” when measured by actual, real-world results. But these dimwits are deluding themselves into worrying about fantasy scenarios and the well-being of abstractions. That delusion is causing them to divert their money to a nonsensical cause when, without that delusion, they would give their money to mosquito nets. And they want other people both to believe in “effective altruism” and to divert their money toward their delusion on the basis of that “effective altruism.” So yes, they literally want to take money from mosquito netting.

    Also, most other charitable causes do not piss away a huge share of their contributions on frivolity. The ones that do, like Komen for the Cure, we do accuse of literally wanting to take money away from other charities. MIRI is just extreme in that it pisses away 100% of its contributions, given that its cause is imaginary. One might as well set one’s money on fire.

  94. says

    The application of Google (and friends) to existential problems of robotic intelligence strikes me as being not dissimilar to the application of Listerine to the “problem” of chronic halitosis (or, for that matter, Christianity to the “problem” of sin): It is a solution for a problem manufactured by the existence of the solution.