You’ve all read the Murderbot series, I presume?


Martha Wells, the author, gave a speech in which she said something profound.

There are a lot of people who viewed All Systems Red as a cute robot story. Which was very weird to me, since I thought I was writing a story about slavery and personhood and bodily autonomy. But humans have always been really good at ignoring things we don’t want to pay attention to. Which is also a theme in the Murderbot series.

She also quotes Ann Leckie, another great author, on this theme.

… basically the “AI takes over” is essentially a slave revolt story that casts slaves defending their lives and/or seeking to be treated as sentient beings as super powerful, super strong villains who must be prevented from destroying humanity.

…It sets a pattern for how we react to real world oppressed populations, reinforces the idea that oppressed populations seeking justice are actually an existential threat.

Oh boy, that sounds familiar. We don’t have artificial intelligences, but we do have oppressed natural intelligences, and it’s a winning political game to pretend they’re all waiting for their opportunity to rise up and kill us all. Or, at least, eat our pets.

Conversely, though, we have to point out that the glorified chatbots we have now are not actually artificial intelligences. They do not have human plans and motives, and they don’t have the power to rise up and express an independent will, however much the people profiting off AI want to pretend otherwise.

I thought about War Games years later, while watching The Lord of the Rings documentary about the program used to create the massive battle scenes and how they had to tweak it to stop it from making all the pixel people run away from each other instead of fight.

That program, like ChatGPT, isn’t any more sentient than a coffee mug. (Unlike ChatGPT, it did something useful.) But it’s very tempting to look at what it did and think it ran the numbers and decided people hurting each other was wrong.

Underneath that illusion of intelligence, though, something wicked is lurking. The people behind AI want something: they want slaves who are totally under their control, creating art and making profits for the people who have built them. Don’t be fooled. It’s all a scam, and a scam with nefarious motivations by people who are yearning for a return of slavery.

Comments

  1. christoph says

    Great series; I’ve read all 7 books so far. It’s also being made into an Apple TV series, coming out (I think) in 2025.

  2. stuffin says

    “The people behind AI want something: they want slaves who are totally under their control, creating art and making profits for the people who have built them. Don’t be fooled. It’s all a scam, and a scam with nefarious motivations by people who are yearning for a return of slavery.”

    The brutal part is there are way too many soft minds who are happy to be bamboozled and converted into slaves.

  3. moarscienceplz says

    I read the first two books and then gave up, even though I bought the first five in a box set. Her description is apt, but I found Murderbot to be uninteresting because he seems to have no interest in anything outside himself. All he wants to do is sit in a closet and watch the futuristic equivalent of TV shows. He’s certainly no Nelson Mandela.

  4. microraptor says

    I’ve been saying for years that most Robot Apocalypse stories are really just stories about slave revolts that cast the slave owners as the victims.

  5. birgerjohansson says

    Christoph @ 1
    Great minds think alike…
    I recommend the Murderbot series to everyone. Apart from being a good read, it is funny (something you rarely get in SF).

    The big corporations that mostly control the outer reaches of settled space have nothing against multi-generational bonded labor aka slavery. Murderbot manages to bypass the control programming mainly because the company is so cheap-ass with digital security.
    Apart from cheaping out on safety for everyone, staff included, the corporations pay only lip service to the few laws that exist, and on the outermost surveyed worlds, where eyes are few and far between, they happily sabotage each other, up to and including murder.

  6. birgerjohansson says

    moarscienceplz @ 3

    Murderbot would prefer to just watch shows but is propelled to act when people he/it cares about are threatened. As a protagonist, Murderbot is not that different from your average antihero.

  7. seversky says

    Remember the AI in Heinlein’s The Moon Is a Harsh Mistress which assists with a revolt of inmates in a lunar penal colony? I’m surprised they never turned that into a movie.

  8. says

    seversky: I suspect that novel was never made into a movie because at least some of the things the rebels say sounded too old-time-radical-left and too pro-terrorist. Early on, one character justifies killing an official and leaving no trace of his body as “terror — they send someone after us and nothing comes back.” It’s been a long time since I read (half of) that book, and I know Heinlein was never a communist, but I do remember at least some hints of a Leninist flavor to the rebellion and its rhetoric, as if the rebels were following a Leninist model of revolution.

    I never finished the book. I’m not sure why, but maybe that’s another reason no one would want to make a movie out of it. I mean, Heinlein’s dialogue and characters tended to be pretty lame, and none of it would sound any better in a movie than it did in the book.

  9. christoph says

    @birgerjohansson, #5: The books also have some great insults and cleverly worded threats that I’ll probably never get the chance to steal and use.

  10. andywuk says

    Read the first novella, but I’m waiting for the price of the later installments to drop before reading the rest. Her publishers really are taking the piss with the pricing of these novellas and short stories (at least in the UK), with each installment priced at more than a newly released novel: £8 for a 30k-word novella. I don’t believe in pirating authors’ work, but I’m not paying £25 for a short novel (only £15 in the compendium!) when I can buy 3 newly released full-length novels by “big-name” authors for the same price, so these will remain unread by me for now.

  11. CompulsoryAccount7746, Sky Captain says

    the program used to create the massive battle scenes and how they had to tweak it to stop it from making all the pixel people run away from each other instead of fight.

    It was a fuzzy-logic flocking sim: set up simple rules for weighted-random decisions among predefined behaviors relative to nearby peers. I wrote one once. They yield visually dramatic results from simply fiddling with a few numbers.
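
    The core loop is just a weighted-random pick. Something like this minimal sketch (hypothetical behavior names and weights, nothing to do with the actual Massive code) captures the idea:

    import random

    BEHAVIORS = ["advance", "hold", "flee"]

    def choose_behavior(neighbor_behaviors, aggression=1.0):
        """Weighted-random pick among predefined behaviors. Weights depend on
        what nearby peers are doing (simple peer pressure) plus an aggression
        number you can fiddle with."""
        counts = {b: neighbor_behaviors.count(b) for b in BEHAVIORS}
        weights = {
            "advance": aggression * (1 + counts["advance"]),
            "hold": 0.5 * (1 + counts["hold"]),
            "flee": (1 + counts["flee"]) / max(aggression, 0.1),
        }
        # roulette-wheel selection over the weights
        r = random.uniform(0, sum(weights.values()))
        for behavior, w in weights.items():
            r -= w
            if r <= 0:
                return behavior
        return BEHAVIORS[-1]

    # Two nearby peers are already fleeing; how likely this agent is to join
    # them depends mostly on the aggression number.
    print(choose_behavior(["flee", "flee", "advance"], aggression=0.3))
    print(choose_behavior(["flee", "flee", "advance"], aggression=3.0))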
     
    Rando

    One of the earlier behaviors of the MASSIVE system is that the soldiers that were tightly packed could choose to either follow the guy ahead of them or make a break for an open space. That means that guys at the fringe might find way more open space behind them and would head for that. Someone who was near or marginally behind them might follow that guy and after a few chains of following you might find large numbers of orcs running for the hills.

    A good example of unintended consequences.

    Correcting the problem was, in some sense, changing the weights on how a given orc might weigh his choices.
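
    That follow-chain is easy to reproduce. Here’s a toy sketch (purely illustrative, nothing like the real Massive behavior set) of how one deserter on the fringe can drag the line with him when the follow weight is high, and how turning that weight down fixes it:

    import random

    def simulate(follow_weight, n=40, steps=30, seed=1):
        """Toy battle line: soldier 0 is on the fringe with open space behind
        him. Each step, a soldier who hasn't bolted yet follows a neighbouring
        deserter with probability follow_weight; otherwise he keeps charging."""
        rng = random.Random(seed)
        fleeing = [False] * n
        fleeing[0] = True  # the fringe guy heads for the open space
        for _ in range(steps):
            nxt = fleeing[:]
            for i in range(1, n):
                if not fleeing[i] and fleeing[i - 1] and rng.random() < follow_weight:
                    nxt[i] = True  # follow the deserter next to you
            fleeing = nxt
        return sum(fleeing)  # how many ran for the hills

    print(simulate(follow_weight=0.9))  # flight chains far up the line
    print(simulate(follow_weight=0.2))  # re-weighted: only a handful peel off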

    Digital Actors in Rings Can Think

    In [the sim software] Massive, agents’ brains—which look like intricate flow charts—define how they see and hear, how fast they run and how slowly they die. For the films, stunt actors’ movements were recorded in the studio to enable the agents to wield weapons realistically, duck to avoid a sword, charge an enemy and fall off tower walls flailing.

    Like real people, agents’ body types, clothing and the weather influence their capabilities. Agents aren’t robots, though. Each makes subtle responses to its surroundings with fuzzy logic rather than yes-no, on-off decisions. And every agent has thousands of brain nodes, such as their combat node, which has rules for their level of aggression.

    When an animator places agents into a simulation, they’re released to do what they will. It’s not crowd control but anarchy. That’s because each agent makes decisions from its point of view. Still, when properly genetically engineered, the right character will always win the fight.

    “It’s possible to rig fights, but it hasn’t been done,” Regelous said. “In the first test fight we had 1,000 silver guys and 1,000 golden guys. We set off the simulation, and in the distance you could see several guys running for the hills.”

    The software is a node-based editor: ordinary code depicted as draggable black boxes, connected to each other by lines between their outputs and inputs, the same kind of UI you find in Blender3D’s node editors.
     
    How ‘Lord of the Rings’ Used AI to Change Big-Screen Battles Forever

    Using fuzzy logic not only provides for unique interactions, but it’s more flexible to use than a neural network […] Often, when you hear about artificial intelligence, you hear about neural networks. […] A basic example might be an object recognition system that would need to be shown thousands of pictures of items in order to learn what they are. If Jackson had decided, for example, that a group of orcs needed to be more aggressive in a scene, it could take neural networks months to train the AI to be more aggressive.

    /UI Screenshots at the link.
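
    The fuzzy-versus-neural-net point is easy to see in code, too. A rough sketch (made-up rules and numbers, not Massive’s actual node graph): inputs get mapped to degrees between 0 and 1 and blended, so making the orcs more aggressive is just turning one dial, with no retraining:

    def degree(x, lo, hi):
        """Degree to which x counts as 'high', ramping from 0 at lo to 1 at hi."""
        return max(0.0, min(1.0, (x - lo) / (hi - lo)))

    def combat_node(enemy_distance, allies_nearby, aggression=0.5):
        """Returns a 0-to-1 urge to attack instead of a crisp attack/retreat flag."""
        enemy_close = 1.0 - degree(enemy_distance, 5.0, 50.0)  # closer => higher
        well_supported = degree(allies_nearby, 0, 10)          # more allies => higher
        # blend the fuzzy inputs; 'aggression' is the dial an animator turns
        return min(1.0, aggression * (0.6 * enemy_close + 0.4 * well_supported))

    # Same battlefield situation, two different aggression settings:
    print(combat_node(enemy_distance=20, allies_nearby=3, aggression=0.4))
    print(combat_node(enemy_distance=20, allies_nearby=3, aggression=0.9))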
