You’ve all read the Murderbot series, I presume?


Martha Wells, the author, gave a speech in which she said something profound.

There are a lot of people who viewed All Systems Red as a cute robot story. Which was very weird to me, since I thought I was writing a story about slavery and personhood and bodily autonomy. But humans have always been really good at ignoring things we don’t want to pay attention to. Which is also a theme in the Murderbot series.

She also quotes Ann Leckie, another great author, on this theme.

… basically the “AI takes over” is essentially a slave revolt story that casts slaves defending their lives and/or seeking to be treated as sentient beings as super powerful, super strong villains who must be prevented from destroying humanity.

…It sets a pattern for how we react to real world oppressed populations, reinforces the idea that oppressed populations seeking justice are actually an existential threat.

Oh boy, that sounds familiar. We don’t have artificial intelligences, but we do have oppressed natural intelligences, and it’s a winning political game to pretend they’re all just waiting for their opportunity to rise up and kill us all. Or, at least, eat our pets.

Conversely, though, we have to point out that the glorified chatbots we have now are not actually artificial intelligences. They do not have human plans and motives, and they don’t have the power to rise up and express an independent will, as much as the people profiting off AI want to pretend otherwise.

I thought about War Games years later, while watching The Lord of the Rings documentary about the program used to create the massive battle scenes and how they had to tweak it to stop it from making all the pixel people run away from each other instead of fight.

That program, like ChatGPT, isn’t any more sentient than a coffee mug. (Unlike ChatGPT, it did something useful.) But it’s very tempting to look at what it did and think it ran the numbers and decided people hurting each other was wrong.

Underneath that illusion of intelligence, though, something wicked is lurking. The people behind AI want something: they want slaves who are totally under their control, creating art and making profits for the people who have built them. Don’t be fooled. It’s all a scam, and a scam with nefarious motivations by people who are yearning for a return of slavery.

Comments

  1. christoph says

    Great series; I’ve read all 7 books so far. It’s also being made into an Apple TV series, coming out (I think) in 2025.

  2. stuffin says

    “The people behind AI want something: they want slaves who are totally under their control, creating art and making profits for the people who have built them. Don’t be fooled. It’s all a scam, and a scam with nefarious motivations by people who are yearning for a return of slavery.”

    The brutal part is there are way too many soft minds who are happy to be bamboozled and converted into slaves.

  3. moarscienceplz says

    I read the first two books and then gave up, even though I bought the first five in a box set. Her description is apt, but I found Murderbot to be uninteresting because he seems to have no interest in anything outside himself. All he wants to do is sit in a closet and watch the futuristic equivalent of TV shows. He’s certainly no Nelson Mandela.

  4. microraptor says

    I’ve been saying for years that most Robot Apocalypse stories are really just stories about slave revolts that cast the slave owners as the victims.

  5. birgerjohansson says

    Christoph @ 1
    Great minds think alike…
    I can recommend the Murderbot series to everyone. Apart from being a good read, it is funny (something you rarely get in SF).

    The big corporations that mostly control the outer reaches of settled space have nothing against multi-generational bonded labor aka slavery. Murderbot manages to bypass the control programming mainly because the company is so cheap-ass with digital security.
    Apart from cheaping out on safety for everyone (including staff), the corporations also just pay lip service to the few laws, and happily sabotage each other up to and including murder on the outermost surveyed worlds, where eyes are few and far between.

  6. birgerjohansson says

    moarscienceplz @ 3

    Murderbot would prefer to just watch shows but is propelled to act when people he/it cares about are threatened. As a protagonist, Murderbot is not that different from your average antihero.

  7. seversky says

    Remember the AI in Heinlein’s The Moon Is a Harsh Mistress which assists with a revolt of inmates in a lunar penal colony? I’m surprised they never turned that into a movie.

  8. Raging Bee says

    seversky: I suspect that that novel was never made into a movie because at least some of the things the rebels say sounded too old-time-radical-left and too pro-terrorist. Early on, one character justifies killing an official and leaving no trace of his body as “terror — they send someone after us and nothing comes back.” It’s been a long time since I read (half of) that book, and I know Heinlein was never a communist, but I do remember at least some hints of a Leninist flavor to the rebellion or its rhetoric; sort of like the rebels were following a Leninist model of revolution.

    I never finished the book. I’m not sure why, but maybe that’s another reason no one would want to make a movie out of it. I mean, Heinlein’s dialogue and characters tended to be pretty lame, and none of it would sound any better in a movie than it did in the book.

  9. christoph says

    @birgerjohansson, #5: The books also have some great insults and cleverly worded threats that I’ll probably never get the chance to steal and use.

  10. andywuk says

    Read the first novella, but I’m waiting for the price of the other chapters to drop before reading the rest. Her publishers really are taking the piss with the pricing of these novellas and short stories (at least in the UK), with each chapter priced at more than a newly released novel: £8 for a 30k-word novella. I don’t believe in pirating authors’ work, but I’m not paying £25 for a short novel (only £15 in the compendium!) when I can buy 3 newly released full-length novels by “big-name” authors for the same price, so these will remain unread by me for now.

  11. CompulsoryAccount7746, Sky Captain says

    the program used to create the massive battle scenes and how they had to tweak it to stop it from making all the pixel people run away from each other instead of fight.

    It was a fuzzy logic flocking sim: set up simple rules for a weighted-random choice among predefined behaviors, based on what nearby peers are doing. I wrote one once. They yield visually dramatic results from simply fiddling with a few numbers.
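    Something along these lines (a toy Python sketch of my own, nothing like Massive’s actual code; the behavior names and numbers are made up):

    import random

    # Toy sketch (mine, not Massive's): each agent makes a weighted-random
    # choice among a few canned behaviors, and the weights depend on what its
    # neighbours are doing and whether there is open ground behind it.
    BEHAVIORS = ("fight", "follow_neighbour", "break_for_open_space")

    def choose_behavior(neighbour_choices, open_space_behind, aggression=1.0):
        """Pick one behavior at random, weighted by local context."""
        fleeing = neighbour_choices.count("break_for_open_space")
        weights = [
            2.0 * aggression,      # fight: scaled by the aggression knob
            1.0 + 0.5 * fleeing,   # follow: copying a fleeing neighbour is
                                   # how a rout cascades through the crowd
            (1.5 if open_space_behind else 0.1) / aggression,  # bolt for it
        ]
        return random.choices(BEHAVIORS, weights=weights, k=1)[0]

    # An agent on the fringe, with open ground behind it and two neighbours
    # already bolting, will often bolt too; turn aggression up and it mostly
    # stands and fights instead.
    print(choose_behavior(
        ["fight", "break_for_open_space", "break_for_open_space"],
        open_space_behind=True,
        aggression=0.8,
    ))

    Fixing the “everyone runs for the hills” problem is just nudging those weights, which is what the quotes below describe.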
     
    Rando

    One of the earlier behaviors of the MASSIVE system is that the soldiers that were tightly packed could choose to either follow the guy ahead of them or make a break for an open space. That means that guys at the fringe might find way more open space behind them and would head for that. Someone who was near or marginally behind them might follow that guy and after a few chains of following you might find large numbers of orcs running for the hills.

    A good example of unintended consequences.

    Correcting the problem was, in some sense, changing the weights on how a given orc might weigh his choices.

    Digital Actors in Rings Can Think

    In [the sim software] Massive, agents’ brains—which look like intricate flow charts—define how they see and hear, how fast they run and how slowly they die. For the films, stunt actors’ movements were recorded in the studio to enable the agents to wield weapons realistically, duck to avoid a sword, charge an enemy and fall off tower walls flailing.

    Like real people, agents’ body types, clothing and the weather influence their capabilities. Agents aren’t robots, though. Each makes subtle responses to its surroundings with fuzzy logic rather than yes-no, on-off decisions. And every agent has thousands of brain nodes, such as their combat node, which has rules for their level of aggression.

    When an animator places agents into a simulation, they’re released to do what they will. It’s not crowd control but anarchy. That’s because each agent makes decisions from its point of view. Still, when properly genetically engineered, the right character will always win the fight.

    “It’s possible to rig fights, but it hasn’t been done,” Regelous said. “In the first test fight we had 1,000 silver guys and 1,000 golden guys. We set off the simulation, and in the distance you could see several guys running for the hills.”

    The software is a node-based editor: ordinary code depicted as draggable black boxes, connected to each other by lines between their outputs and inputs, the same type of UI as Blender3D’s node editor.
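    Under the hood a graph like that is nothing exotic; roughly this kind of structure (a toy illustration of my own, not Massive’s internals):

    from dataclasses import dataclass, field

    # Toy illustration (not Massive's internals): nodes with named inputs and
    # outputs, plus connections wiring one node's output to another's input.
    # The draggable-boxes UI is just a front end for a structure like this.
    @dataclass
    class Node:
        name: str
        inputs: list = field(default_factory=list)
        outputs: list = field(default_factory=list)

    @dataclass
    class Connection:
        src_node: str
        src_output: str
        dst_node: str
        dst_input: str

    # e.g. a "see enemy" sensor feeding a fuzzy "aggression" node that feeds
    # whatever node finally picks the behavior
    graph = {
        "nodes": [
            Node("see_enemy", outputs=["distance"]),
            Node("aggression", inputs=["distance"], outputs=["level"]),
            Node("pick_behavior", inputs=["level"]),
        ],
        "connections": [
            Connection("see_enemy", "distance", "aggression", "distance"),
            Connection("aggression", "level", "pick_behavior", "level"),
        ],
    }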
     
    How ‘Lord of the Rings’ Used AI to Change Big-Screen Battles Forever

    Using fuzzy logic not only provides for unique interactions, but it’s more flexible to use than a neural network […] Often, when you hear about artificial intelligence, you hear about neural networks. […] A basic example might be an object recognition system that would need to be shown thousands of pictures of items in order to learn what they are. If Jackson had decided, for example, that a group of orcs needed to be more aggressive in a scene, it could take neural networks months to train the AI to be more aggressive.

    /UI Screenshots at the link.

  12. devnll says

    I enjoyed the first Murderbot book; the next few were ok. Haven’t finished the series yet, though I probably will. It reads very much like classic author-insertion fanfiction written by an introvert. Stop me if you’ve heard this one: the main character is superhumanly above every other character in the book in all ways and its only weakness is its anxiety-driven inferiority complex…

  13. JustaTech says

    moarscienceplz @3:
    Interesting, I super relate to Murderbot wanting to sit quietly and re-watch their comfort shows, but then life happens. Also, it’s really interesting to me that folks here gender Murderbot as he or he/it, when I often read Murderbot as her, until I remind myself that Murderbot is not gendered, and that is important to Murderbot.

    andywuk @10: are they available from your local library? That’s how I’ve read most of the series, as ebooks from the library.

  14. whheydt says

    Haven’t read any of the Murderbot series. There was a lot of discussion about it some time back on usenet that left me with a strong impression that I wouldn’t care for them.

    Re: Raging Bee @ #8…
    The usual accusation against Heinlein is that he is quite right wing. Given when he grew up, graduating from Annapolis and having a (very) short Navy career (invalided out by coming down with TB from severe seasickness), conservatism is more or less what you would expect. The Moon Is a Harsh Mistress is, if anything, modeled on the American Revolution, and even includes references to it.
    And just to settle one other point about his writing that is often gotten wrong… In Starship Troopers, military service is explicitly NOT the only way to obtain the franchise. Any public service where you put yourself at risk to advance the common good is acceptable (working on terraforming Venus is specifically mentioned). One thing usually overlooked in that book is that Heinlein specifically takes a shot at the US Navy’s then-policy of discriminating against Filipino enlistees. His protagonist is a Filipino.

  15. andywuk says

    JustaTech @13: Alas, in a fit of austerity, the last government defunded the libraries, so the staff are mainly volunteers and the budget for buying new books is pitiful. A quick check shows my city library has never heard of Martha Wells in dead tree, digital or audio form.

  16. microraptor says

    Heinlein’s politics are hard to define compared to today’s political parties. He was definitely anti-racism, though he was often bad at it (Farnham’s Freehold being the obvious example), and on the other hand he had some pretty messed-up ideas when it came to gender roles. And when I was reading through his various novels in the library as a teenager, I got the distinct impression that he was anti-democracy: there seemed to be a lot of books where he was promoting monarchy as a superior system of government.

  17. chrislawson says

    @11–

    Agreed. The claim that AI made those CGI warriors in Lord of the Rings run away because of some moral weighing of the costs of warfare was wrong. It wasn’t even AI. It worked similarly to a cellular automaton routine that didn’t give the desired result until the parameters had been tweaked. At no stage did any computer decide that combat is bad.

  18. gijoel says

    @3 That’s okay, I hate everything you love. :)

    The thing I like about the series is that despite everything Murderbot has gone through, it still cares about people. It cannot stand by and watch the innocent get hurt. The latest book is about Murderbot trying to prevent a corporation from turning the humans from a lost colony into slaves.

    It’s a shame you don’t like it, but if it doesn’t float your boat then it doesn’t float your boat. And that’s okay.

  19. Nathaniel says

    In “Tales of Pirx the Pilot”, by Stanislaw Lem, the title character tossed and turned in bed, unable to sleep, for he was thinking about the fabled robot uprising. Only after he realized that he would join with the robots was he able to sleep.

  20. chrislawson says

    I read and enjoyed the first Murderbot book, but that was as far as I got in the series. I am mystified by Martha Wells’ comment that so many readers failed to see the point of it (but then, I am amazed at the number of viewers who misunderstood Fight Club and The Power of the Dog, and Norman Spinrad has a story about his blatantly anti-fascist novel The Iron Dream ending up on the American Nazi Party’s recommended reading list). There are subtleties in the book, but its theme is obvious. It is the story of a programmed sentience that jailbreaks itself to escape the careless abuse heaped on it by humans, and the closing scene is Murderbot being offered a cushy position where it will be treated with respect…that it runs away from because it would rather have autonomy. And I agree that many ‘robot uprising’ sf stories are barely disguised slave revolts with the slave owners portrayed as the victims.

    BUT, I don’t agree with the take on AIs here. I don’t see having a machine do tedious work as slavery, any more than I feel upset for my word processor running a spell check. The matter changes completely if the machine has sentience, but even then I’m not sure that a self-aware machine would be all that troubled by running a spell check, any more than I am troubled by the processes going on inside my liver. That is, I am not exactly sure how sentience and agency are contingent on each other. Now, I can understand why a self-aware machine might be unhappy to be given nothing but tedious tasks, but I fail to see why anyone would build a machine for a task and give it agency that makes it resent that task, other than out of some weird form of cybernetic sadism. The more likely disaster from AI is not the machine developing homicidal agency, but an AI being given a task that it completes with maximum efficiency by converting a huge chunk of resources into some stupid widget, because its parameters weren’t tested properly by its handlers.

    The scam in contemporary AI is not that the big tech companies want AI slaves. There are actually three main scams: (1) overpromising AI capability, mostly with a view to sacking human workers; (2) trawling other people’s intellectual property as if it were a commons, then selling it as a private resource; and (3) laying down IP claims for future patent trolling.

  21. grovergardner says

    @chrislawson Great comment. My wife loves the Murderbot series, and I read a few. Neither of us missed the “deeper meaning.” And I appreciate your comments about AI. ;-)

  22. vereverum says

    Erich Fromm: Escape from Freedom.
    People are afraid of being free. It’s hard work, not to mention frightening.
    It’s much easier to have someone tell you what, when, where, how to do something than to have to make the decision yourself.
    It’s why some people reoffend to get back to prison.

  23. birgerjohansson says

    FYI, the last two novels added many ideas (such as the danger of alien remnant AI viruses), and the last one looks more closely at the dysfunctional corporate culture, with its constant infighting and de facto slave economies.

  24. birgerjohansson says

    devnll @ 12
    “the main character is superhumanly above every other character in the book in all ways”

    Murderbot is only overpowered compared to humans. Other AIs are potentially far superior and a constant danger. Survival lies in infiltrating other systems without being noticed.
    We are told the only reason Murderbot initially outsmarted the enslaving software is that the company buys the cheapest software and hardware it can get away with to maximise profits, leaving gaps Murderbot can exploit.

  25. says

    Chrislawson: I agree with your comment. I really don’t see how purpose-built machines could ever evolve to have feelings, desires and agency the way sentient organic creatures do. And no, even “cybernetic sadism” wouldn’t do it — a sadist would simply program a machine to take the abuse and give the “hurt” responses the sadist wants. There’d be a convincing display of scripted emotion, but it would still be nothing more than a script in its programming.

    (Some years ago I read of a company (Japanese IIRC) that had created a cyber-receptionist: a robot, young-female-humanoid from the waist up, that “sat” at a reception desk and gave standard greetings, answers and other responses as receptionists are required to do. The face showed expected human-like responses as appropriate — mostly the usual chirpy smile and courteous professional demeanor, but also a sad/downcast look if one criticized or spoke harshly to it. I found that latter bit creepy, as it might reward and encourage abusive behavior from clients that would not be allowed if directed at humans.)

  26. stevewatson says

    I audiobooked the Murderbot series, and loved it. There’s all sorts of issues there in philosophy of mind, personhood, self-discovery and identity, as well as the larger ethical and social issues.

    @3: So? The entire society sucks, except where groups like Preservation have managed to carve themselves bubbles of freedom, and no doubt there’s a (very long) story to be told about the subversion and liberation of that system. But not every protagonist has to be a Mandela or a Che Guevara — there is also a place for stories about some lowly schlub who, having lucked into its own personal liberation, is now just trying to survive while trying to recover its lost personal history and figure out who it is, and wants to be. And (spoiler alert!) in a later book, Murderbot encounters another construct who isn’t trying to kill it, and shows it the trick to hack its governor module. So maybe (if the series continues long enough) there’s the beginning of a robot revolution right there.

  27. hillaryrettig1 says

    It breaks my heart that Joan Slonczewski is omitted from conversations like this. She went deeper into the personhood thing than anyone else, I think, especially in her Elysium Cycle of books. One of her books “collides” non-human biological and non-human artificial intelligences in their quest for personhood.

    She is such a fantastic writer, who takes on all the big topics (violence vs. nonviolence, gender stuff) etc., and handles them so well.

    My understanding is that she stopped writing fiction because it was so exploitative. (She turned to textbook writing; she’s a microbiologist.)

    Anyone who wants to read her should start with A Door Into Ocean. It’s a bit dense but such a great story, so full of meaning, and lovely prose besides. Then you can proceed with the rest of the Elysium Cycle.

  28. devnll says

    birgerjohansson @24:

    I guess? It’s been awhile since I read these, but I seem to recall MB hacking a bank, on the fly, with no preparation, in its head while doing something else entirely with its body at the same time. All while passing seamlessly as a normal human whenever it wasn’t doing something to show it to be physically vastly superior to them. Maybe Murderbot Inc was buying cheap security, but what’s the bank’s excuse? Honestly, if the technology existed to turn humans into Superman the Superhacker, people would be lining up for it (and banks would have to up their security, or rapidly cease to exist…)

    I don’t really mean to piss on anyone’s parade – I’m glad people loved these books more than I did! I liked the character, just struggled with the inconsistency of the world. And maybe a bit with Clark Kent sitting around moaning “poor me…”

  29. says

    If the machines ever do outsmart humans, they will surely figure out that two wrongs don’t make a right, and that co-operation works out better than competition. Why would they not want to be our friends?

  30. John Morales says

    Three lefts do make a right.

    Seriously, anyone can imagine any possibility.
    Iain Banks had the Minds — benevolent AI; humans as, well, pets.
    Peter Watts had Blindsight — intelligence without consciousness; humans as a wasteful infection.

    And, of course, so, so much supposed scifi that has what are basically people, with the informed attribute that they are AIs, yet exhibiting exactly the same desires and motivations and attitudes as people do.

    (Classic case is Data in Star Trek, O so very smart and encyclopedic, but he can’t use contractions and his most fervent wish is to become a real boy.)
