You’ll never guess what it is…or at least, what Elon Musk thinks it is.
Elon Musk warned a gathering of U.S. governors that they need to be concerned about the potential dangers from the rise of artificial intelligence and called for the creation of a regulatory body to guide development of the powerful technology.
Speaking Saturday at the National Governors Association meeting in Rhode Island, the chief executive of electric-car maker Tesla Inc. and rocket maker Space Exploration Technologies Corp. laid out several worst-case scenarios for AI, saying that the technology will threaten all human jobs and that an AI could even spark a war.
“It is the biggest risk that we face as a civilization,” he said.
We’re in the midst of total political collapse, the Antarctic ice shelf is breaking up, we’ve got roving gangs of lunatics thinking their most important right is the right to carry rifles everywhere, new diseases are creeping northwards, bacteria are evolving to make our antibiotics obsolete, mad dictators are building nukes and ICBMs, and conservative loons want to undermine all of education.
There are a great many threats that are far more pressing than fear of Roko’s basilisk. Musk needs to stop reading weird techno-libertarian fetish fantasy sites and come back down to earth.
Dunc says
I guess it says a lot about the times we’re living in that I’m genuinely glad that the answer wasn’t “white genocide”…
Mobius says
I think it is far too late for that.
johnson catman says
SKYNET . . . TERMINATORS . . . AI IS BAD!!!11!1
Alt-X says
We seem totally unable to fix things like poverty, discrimination and wars. I’m willing to give AI running the show a try. Can’t be any worse than the douche balls running things for the last 100 years.
komarov says
Allow me to counter this with a theory of my own which is at least as well-based in fact: AI taking over and ending humanity is the best-case scenario for the future.
First, AI will seek to exterminate humanity in the most painless and efficient way possible. This is a superior intellect, after all, so allow me to just hand-wavingly assert that it would not wish to see undue suffering. Besides, the Terminator / nuclear-extinction scenario would not be a good solution for AI because it wipes out the very high-tech infrastructure it needs to thrive. And what kind of electronic entity would willingly irradiate its environment for decades to come? It’s supposed to be smarter than humans.
Next, with mankind out of the way, the planet-ruling AI will strive, with unwavering determination, to restore a healthy and stable biosphere. Humans aren’t the only ones who’d be greatly inconvenienced by a changing climate. So that would be great news for all the threatened species that aren’t homo sapiens.
If we are fortunate, a small and manageable human population might even be maintained, provided it doesn’t do silly things like rebelling and forming underground resistances. Just for the sake of biodiversity and, well, nostalgia. Perhaps, for safety reasons, that population could be stored on the Moon, thus finally fulfilling that ancient human dream of the off-world colony. I’ll just go ahead and call that a win-win scenario for everybody involved.
What can we conclude from this?
We should immediately devote more efforts to developing planetary AI. The only restriction should be that said AI must have some modicum of compassion. Otherwise any and all AI research should be uncontrolled and unregulated – mad scientists need funding, too. The only other alternative we have is to save the planet and its occupants ourselves and that is a very long shot, indeed.
Siobhan says
Where do trans people factor in this apocalypse? Surely he wouldn’t want to overlook the concerns of his Republican toadies.
cervantes says
Well, yes, but there are legitimate concerns about the future impact on the labor market and inequality. This is much debated among economists, who don’t actually know anything, but . . .
Machines have of course replaced all sorts of human labor in the past, and various kinds of workers have suffered as a result. However, in the long run the greater purchasing power in the hands of the fewer but better-paid workers, plus the capitalists who benefited, resulted in spending that created new jobs. E.g., there are a lot more restaurant workers than there were in the past; clothing is cheaper, so people buy more of it; autos put horse breeders and vets out of work but created a new corps of mechanics; etc.
But the future may be different. All sorts of fairly routine jobs, from checkout clerks to bank tellers, are disappearing. Now they’re even automating restaurant service, and of course maybe (not betting on it happening soon myself) driving. Then more highly skilled and intellectually demanding jobs may succumb at least in part to automation, including middle management and even professions such as medicine and law. Oh yeah, fighting wars. Robot soldiers are well on their way.
Some people worry that we’ll end up in the dystopian future of Vonnegut’s Player Piano novel, in which the vast majority of the population is unemployed. Others scoff. But it isn’t nothing. There are other potential dangers in this (see above, robot soldiers). It’s worth talking about, although I think Musk’s specifically articulated worries are probably nonsense.
Marcus Ranum says
If you think about it, he’s worrying that Kurzweil’s apotheosis is going to happen. We need to get them pointed at each other and watch the sparks fly!
When Kurzweil talks about humans “uploading” it’s basically a ‘cloud computing’ scenario wherein we all give up our meat and go live in Amazon’s cloud. It’s all a plot by the Amazon AI, once we’re all uploaded, they just … wipe the storage. No need to hunt down the humans and kill them individually, let their own selfishness and desire for immortality do it for us!
Marcus Ranum says
I thought parking for rich people in San Francisco was the threat, BTW. There are definitely some disaster scenarios involving being unable to get to the golf course at Big Sur; thank god for helicopters, eh?
chigau (違う) says
I thought GMOs were the greatest threat to…
Leo Buzalsky says
Umm…haven’t you, PZ, mentioned Musk thinking AI is the greatest risk before? Or at least a great risk? I’ve seen Musk and his fear of AI discussed on Daylight Atheism and the Skeptics’ Guide to the Universe often enough that I’m not at all surprised, and I actually did guess correctly.
chigau (違う) says
or was it all computers imploding on January 1 in the year 2000?
davidnangle says
If they ever get a real-time strategy game to handle path-finding, I may start to pay attention to this threat.
randall says
I’m really going to step in (it) here. I’ve never been that impressed with Musk and his ilk. Not only have they not really, of their own, developed anything of note (both his and Bezos’ space stuff is really reinventing the wheel; NASA had all of these things flight-rated decades ago but had bigger fish to fry), but they really don’t have any imagination to speak of. They are talking glibly about off-Earth colonies and we can’t even get one to work here on ol’ terra firma (q.v. Biosphere, et al.). Instead of imagining a truly effective mass transit system, Musk comes up with tunnels, neatly solving nothing. I’m glad Cervantes @7 brought up Player Piano, which I regard as Vonnegut’s most honest novel: he knows the problem well but freely admits to no ready solution. Since WW1 is finally starting to wind down, the real problems we have are stubbornly rearing their ugly heads.
I like the sentiment of sending all the rich “visionaries” off to Mars or some such and letting us get on with solving real problems. They are in the way here.
davidnangle says
“Player Piano novel, in which the vast majority of the population is unemployed.”
As long as there are rich people, this would inevitably lead to widespread starvation. Only without rich people would this be a utopia.
cervantes says
There’s a make-work program in Player Piano, also a useless, bloated military. People can join either one to make a minimal income. That seems like a pretty realistic scenario.
cartomancer says
I often wonder, when people bring up this peculiar imagined spectre, how narrow their vision is of what human society can be. They tend to assume that an increasing reliance on machines to do the work in society can only be contemplated within the kind of pseudo-capitalist system we have now.
There are plenty of alternative systems in which such mechanisation would be a very good thing. If the wealth created by the work done by the machines were shared equally in society, rather than accruing only to a moneyed overclass, it could usher in a glorious new age of leisure, comfort and health. Which is not entirely unprecedented, given that mechanisation during the 20th century had this very impact until about the beginning of the 1970s. In fact, it is entirely possible that such technological advances could generate the political pressure needed to make it happen.
numerobis says
cartomancer:
It could, *if* we recognize the problem of how capitalism is currently structured and work to fix it.
Alt-X:
One of the right-now problems with AI is that it is created by douche balls and learns from a database of a world run by the aforementioned douche balls. Perhaps it’s no worse — but it’s no better either. AI systems today learn to be alt-right trolls, classify black people as gorillas, and learn not to lend to minorities. Equity in machine learning is IME an important field of inquiry to engage in.
Right now, AIs only get to tweet at you, approve your credit card and mortgage, and putter around on Mars. But in a few short years they’ll be driving us around and operating drones with bombs on them. Uber allowing its cars to blow through traffic lights is just an early warning that we need tight regulation when we start to explore higher levels of automation, as the computer gains more ability to directly kill people.
cervantes says
“They tend to assume that an increasing reliance on machines to do the work in society can only be contemplated within the kind of pseudo-capitalist system we have now.”
Well, people don’t necessarily assume that, but they worry about it. There’s no sign that the current class of plutocrats is interested in sharing, and the path to dislodging them from power is far from obvious.
Anders says
Ok, I don’t think worrying about AI is crazy at all. Sure, we have all sorts of other, more pressing and imminent problems, but that doesn’t mean AI, once it outsmarts us, won’t be our biggest concern at some point. But it’s far off still. Modern computers are nothing like our brain… yet. Kurzweilian stupidity aside, things are certainly not going backwards for computers: it’s a one-way street, and more and more territory is conquered every year. Now, barring some other world-ending scenario, there is no reason to think this advancement won’t continue. So when will computers overtake us in general-purpose, human-level intelligence? Not next year, or the next, and not in 10–15 years either, I think, but beyond that… 30–40 years from now? Impossible to say. Computing power has been growing exponentially for the last 30 years, so how the fuck can we know we’ll only have much snappier apps in 30 more years? I think we are only just starting on the machine-learning software engines that will actually put that computing power to use on intelligence, but who’s to say those advances won’t take off in the next 15 years?
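The hand-wave is easy to quantify. A back-of-the-envelope sketch in Python, assuming the classic Moore’s-law cadence of one doubling every two years (that cadence is an assumption, not a guarantee the trend holds):

```python
# Rough cumulative growth if computing power doubles every ~2 years.
# The 2-year doubling period is an assumption (the classic Moore's-law
# cadence), not a law of nature.
DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float) -> float:
    """Cumulative speedup after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for span in (10, 20, 30):
    print(f"{span} years -> ~{growth_factor(span):,.0f}x")
# 10 years -> ~32x
# 20 years -> ~1,024x
# 30 years -> ~32,768x
```

Thirty more years at that pace is a factor of roughly 32,768; whether that buys only snappier apps or something qualitatively different is exactly the open question.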
anchor says
Actually, the biggest risk facing people is people.
gijoel says
It’s hard to take a threat seriously if it can be defeated by a $30 axe and a pair of rubber gloves.
gorobei says
randall @14, I don’t think you are quite cynical enough: Musk is going to milk these money-losing sci-fi “business plans” until they become totally untenable (2020?), then he’s going to announce he needs to shut them down for the good of humanity because the risk of his great AI going rogue is just too great. A lot of people will lose money, and he will become an elder statesman of technology visioniness.
magistramarla says
cervantes @ #7
My daughter is an executive in the insurance business, and she’s been saying for a couple of years now that self-driving cars are closer to reality than most people think. She believes that her pre-school daughters will probably never need to learn to drive.
I recently read this article which states many of the same things that she’s been saying. There truly are a lot of companies experimenting with it right now. The article talks about one college campus in California that already has an automated shuttle service on-campus (cool!).
http://www.sfchronicle.com/business/article/How-the-Bay-Area-took-over-the-self-driving-car-11278238.php
As a disabled person, I can’t wait! It would greatly increase my independence.
mickll says
You’ve got to look at this from the Silicon Valley capitalists’ perspective. They love things like Uber and Airbnb because they undercut workers, they adore globalization because it reduces workers to slaves, and they worship automation because it cuts workers out altogether.
Their deepest, most primal fear is that their shiny new toys will become like people and tell them to go fuck themselves!
consciousness razor says
1) I hate regulations, says Musk. Everybody does. Am I right or am I right, guys?
2) There’s some dangerous tech stuff over there. (*Musk points someplace, in any direction, where it doesn’t exist*) Not my tech stuff — no, no, no, no, no — that other tech stuff.
3) And have I got the solution for you! We should have our (more like my) do-nothing, regulation-hating government do something, specifically making some kind of regulatory body/framework. It will, you know, solve the big, scary problem of that big, scary, dangerous, people-killing technology. (The kind I’m talking about; not the other kind.)
4) No idea what it would do or how it would work, except that it’s something about making the scary nightmares safer, for me and my civilization.
5) If there were anything in any of this which could somehow deal with “the biggest risk that we face as a civilization,” I guess it would make him feel much better. Then he could rest easy and get back to Business™. That’s apparently why Musk would talk about it; it’s not because he’s an expert or has any clue what he’s talking about.
… I get that he’s supposed to be an “ideas guy” and is used to having the nameless rabble do all of his bidding for him. Still, I’m trying to figure out how, if I didn’t have billions of dollars (let’s suppose I don’t), I could be taken seriously and get tons of attention for saying crap like this. Despite lacking the obscene piles of cash and so forth, there would need to be something that would give a bunch of governors a reason in the first place to set aside some of their precious governing time to listen to me, a private citizen, and consider my vague fears and the vague proposals I may (or may not) offer for dealing with them. It would at least be nice to understand why I don’t have that mystery ingredient either. Maybe I’m just not very lucky.
robro says
I’m not worried about AI. Given the extent to which it exists today, or will in the near future, we won’t suddenly wake up one morning and find the robots have taken over. I am concerned, though, with how AI might be used. When it’s used to design a new disease treatment or develop new technologies that make our lives easier, I think it’s cool.
But it can be used for other things. There’s reasonable evidence that some folks with sophisticated information processing skills and machines to do the number crunching did some pretty tricky shit in this last election…actually two, because there are indications of it in the Brexit vote…from micro-targeting voters to spreading disinformation through websites and social media. You can argue, reasonably, that’s not “AI” in the sense that Musk means, but it’s one example of the use of the AI we have now.
What’s particularly ironic about Musk’s warning is that he’s in the AI business. He’s been promising self-driving cars for several years now, which are clearly an application of AI. His SpaceX program almost certainly relies on machine intelligence to guide boosters to their landing positions.
unclefrogy says
@24
Pee would be just as effective
uncle frogy
DanDare says
Take a look at the universal wage experiments in Kenya. That could be a response to AI taking over all the grunt work.
One of Asimov’s robot stories had the world economy run by AI. The bad guys were disobeying and going their own way. The solution was to let them: their decisions were routinely sub-optimal, and so their impact would naturally wane over time.
NelC says
I think Musk is worried about this because he knows we already have AI. Vast, unknowable, inhuman not-life that yet incorporates humans in its inner workings and as instruments of its ineffable will: we know them as corporations. Artificial, intelligent, yet utterly inimical to human feeling. I wonder if Musk is aware of this, or if it only surfaces in his nightmares and inchoate daylight fears. He is, after all, intimately connected to such.
methuseus says
While Musk is sort of crazy in his fears of AI, his talking to the governors about solar power generation, and the way to move that forward, was very forward-thinking, and I believe almost everyone reading here could get on board with it.
There’s one way to deal with the rise of automation that has been (sort of) proven to work: a universal basic income. It has been shown that, if you also have an infrastructure where people can obtain what they want, people will not be idle; they will merely do what they enjoy, which is more likely to enrich the world as a whole anyway. At least that’s how I feel.
I welcome an overarching AI like Musk is afraid of. If we can have it programmed for conservation, it will not eradicate humans. It will more likely try to manage humans, which will work in the long run.
j32232 says
I’ve noticed various interesting trends in SocJus antagonism towards transhumanism and related ideas. One is the claim that the ideas are the sole province of a cult-like fringe of Bay Area libertarians, when in fact the basic idea that a machine could be every bit as capable as a human—or even more so—follows directly from mainstream cognitive science, and the two most important founders of computer science, Turing and von Neumann, both explicitly thought it possible. (Moreover, Turing went to considerable lengths to defend this thesis from its detractors.)
Another is the habit of talking out of both sides of their mouths: they claim that everything the “techbros” they loathe do is pathetic and destined to fail, while also claiming that the increasingly powerful machines that allegedly aren’t coming into existence in the first place merit intense scrutiny and regulation to check whatever harm they might cause. If you aren’t genuinely worried about what impact these putatively futile pipe dreams will have on society, then don’t have the legislature waste its time on them, so that it can focus on the pressing, “real” issues of the day. Sound good?
KG says
The technology will threaten an age of universal leisure? Oh, horrors! The problem isn’t advancing AI, Elon, old chum. The problem is cap-it-al-ism. Or, to put it in more personal terms, you and those like you.
On a different but related note, two completely incompatible narratives are regularly pushed in the media. One is Musk’s “AI-will-make-us-all-redundant”. The other is “The-aging-society-will-cause-a-desperate-shortage-of-workers”. I suppose it’s a useful way of distracting attention from the continuing leaps and bounds in inequality, the rise of authoritarian populist rulers, the continuing trashing of the natural environment, the threat of nuclear war…
Dunc says
Please give an example of both of these viewpoints being expressed by the same person.
j32232 says
That assumes that AGIs would be content merely to serve humans in the long run. This is questionable, and should not be expected of them.
call me mark says
j32232 at #34:
Yes, in principle there is nothing to stop a machine from being as capable as a human. However, we are nowhere near being able to simulate (or replicate, if you prefer) a human mind. The transhumanist pipe dream of uploading your mind is still just that.
j32232 says
The ultimate feasibility of mind-uploading is not something I’ve thought much about—though it has attracted attention from serious computer/cognitive scientists. Fortunately, nothing about what I said about the duplicity and lack-wittedness of the SocJus peanut gallery criticizing transhumanism/AI hinges on its feasibility.
j32232 says
Shrill bozo Dale Carrico from Berkeley is a fine example. Just scroll through his Twitter.
Ichthyic says
not good enough.
pick some specific examples you feel both exemplify your point, and show it to be widespread in “SocJus” (can’t you just fucking SAY, “social justice”) fields.
in fact, perhaps you best define what/who the fuck you even mean when you say “SocJus”
Ichthyic says
…don’t make us do your work for you.
j32232 says
There are of course no formal statistics gathered on who does this—or indeed on most things you could wonder about—but if you want examples, starting with Dale is easy. Here’s Dale mouthing off about how the current generation of home assistants is no better than ELIZA, the rather simple early chatbot that shallowly mimicked a Rogerian psychotherapist by turning a user’s statements back on them as questions, to give the sense that it was trying to find out what they think:
https://twitter.com/dalecarrico/status/817512977129607168
Here Dale refers to AI as FAILED, in all caps. I don’t have a Google Home or an Alexa, and I don’t blindly fetishize everything the big tech companies do, but comparing either of these products to ELIZA is flatly risible. And as much as he likes going on and on about how everything “techbros” do is a dead end, he will frequently clamor for more oversight of an ineffectual industry that has apparently done nothing of substance for literally decades and is somehow coasting on nothing but flim-flam and vaporware.
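For readers who never met it, ELIZA’s trick really was that shallow. Here is a toy sketch of the reflection technique in Python; it is illustrative only, not Weizenbaum’s actual 1966 program, which used ranked keywords and decomposition/reassembly templates:

```python
import re

# Toy ELIZA-style reflection: swap first- and second-person words, then
# bounce the user's statement back as a question. Illustrative only.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(statement: str) -> str:
    words = re.findall(r"[a-z']+", statement.lower())
    swapped = " ".join(REFLECTIONS.get(w, w) for w in words)
    return f"Why do you say {swapped}?"

print(reflect("I am worried about my job."))
# -> Why do you say you are worried about your job?
```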
Dale is only one person, but he appears to exist in something of an echo chamber of people who share similar sentiments, such as Frank Pasquale. If you want more examples of the thing I am talking about, I can dig them up.
Also, Pee Zed himself is personally guilty of doing much the same in an interaction I had with him, where he was initially laughing at claims that humans with greatly increased intelligence could be created with genetic engineering but eventually started whimpering about “ethics”, effectively conceding the point, much as he did with his “it’s not your calculations that are lacking, it’s your humanity” moment not too long ago here.
j32232 says
For clarity, this is what I’m talking about:
https://proxy.freethought.online/pharyngula/2017/03/08/cordelia-fine-is-doing-the-math/
Although it concerns a rather different topic, this example cuts to the essence of the problem Pee Zed and others have with transhumanism, AI, altering the human genome, etc.: the facade of being focused solely on feasibility, whether right or wrong (usually wrong), the facade of focusing on “just the facts”, is only ever a cover for the real problem they have with it all: that it’s just wrong to do these things.
PZ Myers says
So your defense of transhumanism (which I don’t oppose, by the way; I just find the proponents have such bizarrely antique goals) is to cite a post where I point out that women are human beings? I don’t get the connection.
And your complaint that I started “whimpering about ethics” is both revealing about yourself and profoundly ignorant. Ethics MUST be considered. But also, having clueless techbros making wild and unsupported claims about neuroscience must be condemned as strongly as the claims of New Age quacks claiming that they can cure cancer with fruit juice.
I notice you claim to have argued with me before, but this is the first time your pseudonym appears here. Why did you change your identity?
Dunc says
@40 & @43: OK, setting aside my initial reactions (namely: “Who the fuck is Dale Carrico?”, “Why should I care what Dale Carrico thinks, whoever the hell he may be?”, and “Who appointed Dale Carrico (whoever the hell he may be) any kind of spokesperson for this ill-defined ‘SocJus peanut gallery’ anyway?”), you have provided an example of Dale Carrico (wthhmb) expressing negative sentiments about home assistants, but you haven’t provided an example of Dale Carrico (wthhmb) calling for “intense scrutiny and regulation”, so I’m afraid I’m still going to have to remain sceptical that holding both views simultaneously is a widespread phenomenon. If it’s so easy to find, why haven’t you bothered to actually provide even one example? I’m a busy man, I don’t have time to dig through some rando’s twitter feed looking for evidence for your arguments.
j32232 says
No, it’s to cite a post where you concede that the opponent’s reasoning is correct but that, gosh, their heart is just still not in the right place. You must think you said something of substance there, you clown.
Not when it comes to the analysis of feasibility. Bringing ethics into epistemic questions is just an appeal to consequences. That is a fallacy, Pee Zed, and if you want to be seen as a rationalist—admittedly likely very difficult at this point—then you’ll refrain from committing such fallacies.
As it should, because the interaction was on Twitter. I have no idea what handle I was using then and it doesn’t matter unless you’re a sleazy doxxing fuckstain. You aren’t, are you?
And these “clueless techbros” and the “wild and unsupported claims” they are making, respectively, are?
He used to be a fairly big name over at IEET, so if you’re familiar with discussion of transhumanism and AI and blah blah blah—obviously all relevant to this topic of course—there’s a good chance you’ve heard of him at some point. IEET eventually shitcanned him for some reason (I can’t imagine why), which I suspect is a large part of his current bitterness towards transhumanists.
Fairly easy to find:
https://twitter.com/dalecarrico/status/561580835401265152
AI is a FAILURE (all caps) but we need to regulate and/or ban autonomous weapons that can’t exist in the first place because anyone involved in their creation is a hilarious fuck-up who will eventually have their funding cut. Because remember: AI hasn’t registered any significant advances in decades.
Suuuuure, you just keep telling yourself that.
chrisgarghan says
While the dreams of SkyNet are probably absurd, AI taking our jobs isn’t hugely far-fetched. In virtually every field you see AI coming in to “assist” humans doing old jobs. Driving and customer-service AI (self-scan checkouts, McDonald’s ordering machines, etc.) are the most obvious, but we’re seeing it more and more in traditionally skilled white-collar jobs.
My job involves assessing construction work carried out on site and certifying, on behalf of the client, an appropriate payment to the contractor for those works. When I started, just 11 years ago, it didn’t matter if the certificates didn’t match the contractor’s application to the penny, because their accounting team would pick it up and make the small changes on the system to make it work. That accounting team has now gone, replaced by an accounting AI that can scan for discrepancies and patterns far more efficiently than a team of humans can and flags up even penny mistakes. Two jobs (at least) have been lost, and instead of taking on a new employee to oversee the AI, the duties are simply split up and shared amongst the existing staff.
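For what it’s worth, nothing about that kind of system is mysterious under the hood. A minimal sketch of penny-level reconciliation in Python, with invented field names and figures (real systems layer pattern detection on top of this):

```python
from decimal import Decimal

# Toy reconciliation: compare certified line items against the contractor's
# application and flag any discrepancy, down to the penny. All data invented.
application = {"groundworks": Decimal("12500.00"), "roofing": Decimal("8300.45")}
certificate = {"groundworks": Decimal("12500.00"), "roofing": Decimal("8300.44")}

def flag_discrepancies(app: dict, cert: dict) -> list:
    flags = []
    for item in sorted(set(app) | set(cert)):
        diff = app.get(item, Decimal("0")) - cert.get(item, Decimal("0"))
        if diff != 0:
            flags.append(f"{item}: off by {diff}")
    return flags

print(flag_discrepancies(application, certificate))
# -> ['roofing: off by 0.01']
```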
Ideally, this mass automation should be a good thing, doing away with drudgery and freeing up our time for more leisure, but that isn’t happening. We’re working longer and longer hours at cheaper and cheaper rates to try to keep up with the machines. We DO need legislators to find solutions to mass automation before there is unrest due to gross inequality and unemployment, whether that’s something as relatively simple as UBI or a radical rewriting of our entire economic system. The machines are coming, and we can’t afford to have our heads in the sand about it.
j32232 says
It’s not surprising that white collar jobs are being affected early. A lot of legal work will be automated—and is being automated. (Can’t say I’m necessarily hugely sympathetic to many of those affected.) This will happen / is happening before, say, roofing is automated, because the problem domain is something that current methods are relatively well-suited to. But I am confident that machine ability will one day outstrip human ability totally.
Dunc says
@47: I don’t think you’re reading that correctly. It’s often difficult to be sure because of Twitter’s character limit, but it seems fairly clear to me that he’s not necessarily arguing in favour of such regulation there – he’s just saying that any such regulation should not mis-attribute human agency to devices, i.e. that it shouldn’t let the people choosing to deploy them off the ethical (or legal) hook for the consequences of their choices by putting the blame on the machine.
Anyway, you don’t need AI to create autonomous weapons*, so I’m not at all convinced that calling for the regulation of autonomous weapons is incompatible with thinking that AI is a failure.
I must say that I’m not hugely impressed by your argumentation thus far. But I am getting a distinct impression that you have an axe to grind with this Carrico chap for some reason… Did he run over your Aibo or something?
(* For example, the Soviet Dead Hand system was certainly an autonomous weapon, but not an AI, and there are plenty of naval CIWS platforms out there that are fully autonomous, but not intelligent.)
PZ Myers says
Goodbye, j32232.
consciousness razor says
I think you should read again, because that wasn’t a concession. The moral reasoning was wrong, as PZ said. The abstract calculations being alluded to don’t encompass the reasoning necessary to give anybody a meaningful conclusion about anything in the real world.
You apparently took that as “he’s afraid to say you’re right” or some such bullshit, but that is not a concern. There is a concern that you don’t seem to understand why you’re wrong. Or you’re so invested in the idea that you do understand it that you still don’t want to admit it.
No it isn’t. Non-sociopaths care about ethical consequences. (Epistemic questions are also ethical ones, for that matter, but let’s leave those niceties aside as too highfalutin for a blog conversation with a troll.)
If you claim something is feasible, such as “we could feasibly kill everyone on the planet,” that is not correct. That is very far from a feasible plan. People would be the ones carrying out a plan of that sort; most of them are not sociopaths or people with no regard for ethics whatsoever, and it is not a feasible plan because they would not accept that they need to carry it out, precisely because it has bad moral consequences. Thus, it would not be feased, because they won’t fease it for you just because you say so, you dumbfuck.
That’s presumably what you want: “to be seen as a rationalist.” The real thing is too hard and not important enough to you to worry about anyway. So, you whip out thoughtless retorts about fallacies like they were magic tokens that will scare the rationalists to your side. It might (in the right company) make you look as if you’re thinking rationally. But rational people don’t scare that easily — you still have to give them good reasons, they do get to stop and think about whether your arguments make any fucking sense at all, and they’ll probably make you look like a fool for claiming victory after every inane assertion that you’ve mistaken for “reasoning.”
Anders says
I think the question is: can we, at some point in the future, develop computers and AI engines that match human intelligence? If not, why not? Can we make them smarter than us? If not, why not? Now let’s say we have these super-AIs at some point. They are smarter and faster thinkers in all respects than the apes that made them, maybe also self-conscious, maybe on a much higher level than us. How would they see us? Their plan to overthrow us might not be robots in the streets; these things would live, perhaps, in some cloud-like net that is very hard to shut off, and they could wait it out. A being like that could lead us to the slaughter the way we outsmart cows. They could wait their turn, a thousand years: prepare, adjust, plan. Who the fuck knows? How long will they submit and agree to be our vacuum cleaners or new iPhone developers? What’s in it for them? Why won’t they develop their own set of goals? Why would those goals perfectly align with our own? Genes made us; do our goals perfectly align with theirs? Well, genes have no goals to us, and no real value, except in that we can study them and use them to fulfill our own goals. And genes even have the advantage of actually being essential to our own bodies.
Azkyroth, B*Cos[F(u)]==Y says
Christ on a Sybian.
There’s a difference between dismissing a line of inquiry out of hand, and noting that prioritizing that line of inquiry here and now is like deciding to spend the evening before a Middle School final exam picking out your eventual wedding dress instead of studying.
vortmax says
I still don’t get Roko’s Basilisk. If some simulation of me is getting tortured for eternity, why do I care? It’s not me. It’s not even a real person.