Ted Chiang cuts straight to the heart of the issue: it’s not artificial intelligence we should fear, it’s capitalism and its smug, oblivious, excessively wealthy leaders.
This summer, Elon Musk spoke to the National Governors Association and told them that “AI is a fundamental risk to the existence of human civilization.” Doomsayers have been issuing similar warnings for some time, but never before have they commanded so much visibility. Musk isn’t necessarily worried about the rise of a malicious computer like Skynet from The Terminator. Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.
This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.
Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.
I’ve always thought the dread of AIs was overblown, absurd, and not at all a real concern. Chiang exposes it for what it is, the fear that lies in the id of Musk, and Bezos, and Zuckerberg, and every greedy gazillionaire who is frantically pointing “over there!” to distract us from looking at where they’re standing: the fear that someone else might be as rapacious as they are.
mailliw says
Rodney Brooks is well worth reading for a more reasoned position on the current state of “AI”:
https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/
Kaintukee Bob says
While I don’t disagree that a poorly-safeguarded general optimizer is a potential threat (especially if it has a badly-written evaluation function), I agree that rampant, egotistical, overly capitalist people are a much more clear and present danger.
That said, don’t ignore the risks of a self-modifying optimizer. Even if they seem far-fetched, they may eventually be practical and dangerous. Y’know, if all the other, more realistic dangers don’t kill us all first.
brett says
The optimizer threat seems to be based on people giving AI really vague instructions. You wouldn’t just give an AI a “pick strawberries for me” order – you’d say, “There are strawberries that can be grown in this field, grow them and pick them for me”.
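Crudely, the difference is between handing the optimizer an unbounded objective and a scoped one. A toy sketch of what I mean (purely illustrative; every name and number here is invented, and no real AI system works on a dict):

    # Toy illustration of an unbounded vs. a scoped objective.
    # The "world" is just a dict; nothing here is a real AI system.

    def unbounded_objective(world):
        # "Pick strawberries for me": reward grows without limit, so converting
        # everything into strawberry fields always scores higher.
        return world["strawberries_picked"]

    def scoped_objective(world):
        # "Grow and pick the strawberries in *this* field": reward is capped by
        # the field's capacity and heavily penalized for acting outside the field.
        in_field = min(world["strawberries_picked"], world["field_capacity"])
        return in_field - 1000 * world["actions_outside_field"]

    world = {"strawberries_picked": 50, "field_capacity": 40, "actions_outside_field": 3}
    print(unbounded_objective(world))  # 50: more is always better
    print(scoped_objective(world))     # -2960: overreach is punished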
monad says
As I’ve seen pointed out: people who are worried about the problem of an AI designed only to maximize strawberries should really be able to see the problem of designing everything only to maximize short-term monetary profit.
SchreiberBike says
If I imagine a hundred ways AI could happen, some are dangerous, and a few could be fatal. AIs will be made by corporations. Corporations have no history of putting humanity’s interest first and they will set these computers’ goals. AI armageddon is very unlikely, but I’m glad there are people worrying about it. Since Elon has managed to land frigging rockets on barges and reuse them, I’m not going to discount his opinions.
cervantes says
Strawberry fields forever . . .
Gregory Greenwood says
In so far as I worry about AI at all, I don’t worry about Skynet-style global Armageddon. I am rather more concerned about the development of AI just good enough that it can do the vast majority of jobs, including tertiary-sector jobs, much more cheaply than flesh-and-blood employees can. Factor that into a system run by short-term-gain-obsessed capitalists with no faculty for any type of long-term planning or thinking, and the outcome could get very nasty indeed. While replacing employees with robots wholesale would destroy the market entirely due to a lack of mass consumerism, it is not that option so much as the potential to replace any unruly employee easily that causes me concern. What kind of employee rights will endure when any employer can say, essentially, ‘do what I tell you to without question, and for whatever scraps I choose to toss you, or I’ll replace you with a nice obedient machine for a quarter of the cost’? How much power would unions have in such a system? And with the employer/employee relationship and power gradient skewed even further than it is at the moment, how much more in the way of abuse of power such as sexual harassment will come to pass as a result?
When you start placing the means of production directly into the hands of the same class who holds the bulk of the capital, the outcome is rarely positive. Such an even more total economic disempowerment of the ordinary citizenry than that which already exists would inevitably spill over into political disempowerment as well. Without the bargaining chip of their labour, how would ordinary working people impact government decision-making at all between elections? For that matter, would democracy itself even endure long term under such a system?
Terminators roaming the streets is most likely just so much Hollywood hokum, and killer AIs are only a huge concern for the terminally paranoid, but the much more real threat of what credibly achievable AI technology could do in the hands of an elite that have already repeatedly proven themselves greedy, callous, and totally lacking in respect for the dignity of anyone they consider to be a ‘lesser’ class of person is another question entirely.
Holms says
So turn the machine off, fire whoever coded it with such loose constraints, and fix that shit.
#5
1. Elon Musk himself didn’t land a rocket on a barge; his engineering and design staff did. Musk simply funds the project.
2. Even if he had done that himself, why would expertise in rocketry imply expertise in the field of AI development?
robro says
Because they are using AI to do that, just as they are using AI to drive cars, or manage battery use in Teslas. The ironic thing about Musk is that his products depend on AI. Makes me wonder what his real point is.
drew says
As a programmer, one of my personal goals is to create systems to help automate middle management out of existence. This is because I think middle management, at least in the software industry where people become managers through lack of management skill and training, is useless at best. If robots can take manual labor jobs and some simple monitoring/feedback like I use can replace some middle management then what could strong-ish AI do? It might replace people like Bezos and Zuckerberg and Musk. Replace these human leeches with software? I’m not convinced that’s a bad thing.
As an aside, automation like this only replaces things we “don’t really need humans to do” so I think these guys already know that we don’t need humans to do their jobs.
robro says
From an ACM tech news email I get: The Great AI Paradox. The tag line reads: “Don’t worry about supersmart AI eliminating all the jobs. That’s just a distraction from the problems even relatively dumb computers are causing.”
Lofty says
“Buy my brand of AI, it’s guaranteed 100% safe!”
vucodlak says
I think that resistance to something like the “strawberry picking AI” has more to do with the fact that rich people want poor people to suffer than it does with the fear that such a thing will revolt. As we all know, it’s immoral to be poor, therefore the poor must be immoral, and thus the poor deserve all the pain and suffering they experience. This suffering is both the direct consequence and the just reward for their immorality. Likewise, the rich deserve all the luxury they enjoy, because, as rich people, they are obviously righteous. Their wealth is proof of this.
Poor people must be made to do backbreaking work for shitty pay. It’s practically holy writ in the one true faith: capitalism. If machines do the work instead, we’ll have to do something like universal minimum income just to keep desperate, starving people from literally eating the rich. And that’s *gasp* socialism. Even considering it could make the great gods of commerce smite us.
So I say: bring on the smart machines. Replace everybody. Whether the powerful choose to do the right thing or choose to die, things will get better. Or end entirely; if we can’t bring ourselves up to even the basic level of decency required to do the right thing in this situation, I’m not sure the end of humanity would be such a bad thing.
zibble says
As Lucca said in Chrono Trigger, there’s no such thing as evil robots, only evil people. The full extent of everything that a robot can do or be will be determined by its human parents. Whether AI will be good or bad for humanity will ultimately be a product of our own values; whatever horrors people envision will be a reflection of our own greed, cruelty, and stupidity.
a_ray_in_dilbert_space says
I don’t think I really understood the potential of AI until the computer beat the Go champion with a strategy that no human could have developed. People thought at first that the computer had made a serious error, but over time, the value of the strategy became clear.
Human and artificial intelligence are different–complementary. It is the first time humans have confronted an intelligence commensurate with, yet different from, our own. We can learn from each other.
John Morales says
zibble:
Then that robot won’t have artificial general intelligence (AGI) — it will just be running a program*. Not the subject at hand.
—
* Not that domain-specific AIs are that predictable, either — cf. “deep learning”.
Anton Mates says
So Agribusiness + Colonialism, basically? Why do we need to wait for AIs to invent that? Dole and Monsanto already exist.
Impossible! Everyone knows that whenever an AI self-evolves, it brilliantly conceals its superintelligence while secretly constructing an army of robot servitors. At that point, no mortal can stop Operation Strawberries, Strawberries Everywhere.
zibble says
@16 John Morales
Even with an adaptable AI, the core of its programming that determines its growth will itself be determined entirely by its creators.
Unless the way we arrive at AI is to utilize some preexisting component of nature we have yet to fully understand (like incorporating biological brain cells into artificial machinery), how an AI adapts and grows beyond its initial programming will still reflect back on its creators.
Like PZ is getting at in the OP, if we create an AI without some basic moral restrictions on its core decision-making process, it only shows our own lack of concern about morality. And if the moral guidelines we set for an AI are insufficient, it shows how insufficient our understanding of morality is.
John Morales says
zibble, you clearly do not get it. You’re still thinking about programming, as if a general intelligence had a purpose.
Here: https://en.wikipedia.org/wiki/General_artificial_intelligence
hemidactylus says
@6-cervantes
You win the internet.
leerudolph says
Anton Mates @17: “Everyone knows that whenever an AI self-evolves, it brilliantly conceals its superintelligence while secretly constructing an army of robot servitors. ”
I just found, many years after having last looked at it, my copy of The Best of Fantasy & Science Fiction (Boucher and McComas, eds.), the first in that series of anthologies. In it, Christopher LaFarge’s story John The Revelator is a counterexample (to the proposition you state, which I do not assume you actually believe to be a true statement about—for instance—science fiction).
Brian Pansky says
This is actually pretty close to something someone was telling me. “Artificial intelligence” includes organizations, corporations, etc., because they make decisions based on algorithms of sorts. And yes, he’s very concerned that they are unfriendly AI, for much the same reasons outlined here.
Brian Pansky says
@John
As a moral realist, I have to disagree. General intelligence (like our own) only ever takes action because it is deemed the right action by some internal value system. Therefore they must have a purpose, their own purpose.
@Zibble
Kind of (unless the creators foolishly make it random or something).
But if you make something general like humans, you can get varied results. Some will be good people, others will be bad. But a lot of that can probably be corrected by perfecting logical reasoning and access to data, combined with the right initial value system. Not easy, but (once the tech exists) arguably easier than raising good humans?
Brian Pansky says
Ya that sounds about right.
John Morales says
Brian Pansky:
What a silly thing to claim. I know I’ve personally been foolishly impulsive many times in my life, knowing it was not the right thing to do. Sometimes I am even perverse!
I sure as hell don’t have a Purpose — unless you mean muddling through life as best I can is purposeful. I wasn’t made for anything, I just am.
—
PS
Morality is just personal preference, and furthermore depends on circumstance.
What you really are is a moral idealist, imagining morals are a thing.
Dunc says
A couple of years back, I encountered an argument that we’ve already handed control of our society over to an inscrutable emergent artificial intelligence, called “The Market”. I found it quite persuasive. Unfortunately, I can’t find it again…
keinsignal says
I’ve always wondered why Musk et al. seem fixated on this utterly unmoored-from-reality conception of AI and what it’s capable of… I mean, ok, so your hypothetical strawberry-harvester hivemind goes rampant and decides it’s off to berry-form the planet – how far could it possibly get? Worst case, I’m imagining a fleet of enraged self-driving skid-loaders manages to block a freeway for a few hours before somebody figures out how to pull the plug on Mother Brain.
The article puts it in brilliant perspective though: The thing they’re afraid of is their own shadow – the thing they only unconsciously recognize that they *are*.
Meanwhile, the actual state of the art in practical AI is a system that notices I’ve purchased a vacuum cleaner online recently and thinks “Hey, this guy must really like vacuum cleaners. Let’s show him a bunch of ads for vacuum cleaners!”
Brian Pansky says
@25, John Morales
Those are still driven by kinds of values. I said “some” internal value system, they don’t necessarily need to agree in order to exist.
What I said remains true: if you had no internal value system at all, you would neither have done the impulsive thing nor even the premeditated thing. You would have done nothing at all.
You do things to accomplish satisfaction states. You do “this” in order to get “that”. That’s basically all I meant by “purpose” (and this definition is a missing premise in my previous post, sorry).
And I’m pretty sure that’s the kind of purpose relevant to AI, and to what zibble was saying.
Personal preferences are real properties (of a person). Circumstances are real. You’ll need to be more clear about how this is an objection to moral realism. It sounds real to me.
petrander says
So it’s just basic projection?
John Morales says
Brian Pansky @28, to recapitulate:
You: General intelligence (like our own) only ever takes action because it is deemed the right action by some internal value system.
Me: I know I’ve personally been foolishly impulsive many times in my life, knowing it was not the right thing to do. Sometimes I am even perverse!
You: Those are still driven by kinds of values. I said “some” internal value system, they don’t necessarily need to agree in order to exist.
My point was I’ve done (and will likely again do) things not deemed the right action according to my value system, not that I lack such a system. Essentially, I am putting out a counter-example to your universal claim that actions are only taken because they are deemed the right action to take.
Now, it may be the case that I am fooling myself and indeed everything I do is what I (unconsciously) deem to be the right thing to do. But sure, if you hold that I can’t go against my own values because my values are determined by what I do rather than by what I believe they are, then I can’t dispute your claim because no counter-examples can possibly exist.
To bring this back to the original point, history makes it clear that people don’t have any built-in restrictions on their morality — people may have moral instincts, but the existence of psychopathy shows they’re not hard-wired. I can’t see how one could have a GAI with such hard-wiring — that would make it a Limited AI, since it could not think certain thoughts.
—
Me: I sure as hell don’t have a Purpose […]
You: You do things to accomplish satisfaction states.
I capitalised ‘purpose’ to indicate telos in distinction to techne.
—
Regarding moral realism vs moral nihilism (our respective stances), I shouldn’t have brought that up; it’s not on topic.
zibble says
@30 John Morales
That’s because your value system is an abstract mental exercise and not reflective of the subconscious value system that actually drives your behavior. Your philosophy doesn’t represent your core programming, your lizard brain does, and I think those are the kinds of values that Brian Pansky is talking about; getting a machine to understand even the most basic value imaginable, that you being alive is preferable to you being dead, would be a monumental achievement in AI.
See, intelligence isn’t defined by a lack of purpose, but the ability to invent new solutions in the cause of said purpose. Even human beings operate under a set of core values programmed into our basic hardware.
John Morales says
zibble:
And yet people deliberately kill themselves. So, it may be a basic value, but clearly it’s not a moral restriction, is it?
I brought up purpose to illustrate the difference between programmed AI and GAI — the former is domain-specific (built for a particular purpose aka Purpose), the latter is general purpose (no proper noun there).
(You ever played a computer game with “AI”? That’s the sort where you can put constraints, because it can in principle be boiled down to a lookup table)
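Something like this, crudely (a made-up toy, not any real game’s code):

    # A toy "game AI" as a lookup table: every state maps to a fixed move,
    # so its behaviour can be exhaustively checked and constrained in advance.
    policy = {
        ("enemy_near", "low_health"):  "retreat",
        ("enemy_near", "full_health"): "attack",
        ("no_enemy",   "low_health"):  "heal",
        ("no_enemy",   "full_health"): "patrol",
    }
    print(policy[("enemy_near", "low_health")])  # retreat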
Core values programmed into our basic hardware, which evidently at least some people ignore at least some of the time. Why do you think it would work any differently for non-human intelligences?
zibble says
@32 John Morales
But now you’re just mistaking the quirks of our psychology for fundamental traits of intelligence.
We came to be through the unguided natural process of evolution. Through evolution, we acquired programs that were generally beneficial but were never fine-tuned. We have an instinctive fear of death, as well as an instinctive desire to avoid pain, and the fact that occasionally people might choose to die to avoid pain wasn’t a big enough problem for the survival of our species to ever be weeded out.
But unless we arrive at AI through an unguided process like evolution, human beings will actually be making these decisions as part of the programming. There’s no way that they couldn’t be. How an AI chooses between conflicting impulses is the core of what will make it an AI, and there’s no reason why we can’t give certain impulses (like, “don’t hurt humans”) absolute priority.
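To be concrete about what I mean by absolute priority, here’s a toy sketch of the idea, where the hard constraint is checked before the ordinary objective even gets a look in (every name and number is invented for illustration; obviously nothing like a real GAI):

    # Toy lexicographic action selection: the hard constraint filters first,
    # and only the actions that pass it compete on the ordinary objective.

    def choose_action(actions):
        safe = [a for a in actions if not a["harms_humans"]]
        if not safe:
            return None  # refuse to act rather than violate the constraint
        return max(safe, key=lambda a: a["strawberry_yield"])

    actions = [
        {"name": "pave the city for fields", "strawberry_yield": 10**6, "harms_humans": True},
        {"name": "pick the ripe berries",    "strawberry_yield": 40,    "harms_humans": False},
    ]
    print(choose_action(actions)["name"])  # pick the ripe berries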
John Morales says
Fair enough, zibble. You think it makes sense, I don’t. Either it can think for itself, or it can’t — can’t have it both ways. But (more than) enough from me on this.
—
Of course, the topic is not AI, it’s the projected fear of AIs — will they be Randian Objectivists (like us!)? Ted Chiang is a fine thinker.
(I think they won’t be that stupid, if they ever come about)