And everyone gets a robot pony!


Oy, singularitarians. Chris Hallquist has a post up about the brain uploading problem — every time I see this kind of discussion, I cringe at the simple-minded naivete that’s always on display. Here’s all we have to do to upload a brain, for instance:

The version of the uploading idea: take a preserved dead brain, slice it into very thin slices, scan the slices, and build a computer simulation of the entire brain.

If this process manages to give you a sufficiently accurate simulation

It won’t. It can’t.

I read the paper he recommended: it’s by a couple of philosophers. All we have to do is slice a brain up thin and “scan” it with sufficient resolution, and then we can just build a model of the brain.

I’ve worked with tiny little zebrafish brains, things a few hundred microns long on one axis, and I’ve done lots of EM work on them. You can’t fix them into a state resembling life very accurately: even with chemical perfusion of small tissue specimens with strong aldehydes, which takes hundreds of milliseconds, you get degenerative changes. There’s a technique where you slam the specimen into a block cooled to liquid helium temperatures — even there you get variation in preservation, it still takes 0.1ms to cryofix the tissue, and what they’re interested in preserving is cell states in a single cell layer, not whole multi-layered tissues. With the most elaborate and careful procedures, they report excellent fixation within 5 microns of the surface, and disruption of the tissue by ice crystal formation within 20 microns. So even with the best techniques available now, we could possibly preserve the thinnest, outermost, single cell layer of your brain…but all the fine axons and dendrites that penetrate deeper? Forget those.

We don’t have a method to lock down the state of a 1.5kg brain. What you’re going to be recording is the dying brain, with cells spewing and collapsing and triggering apoptotic activity everywhere.

And that’s another thing: what the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue? All the ones I know of involve chemically modifying the cells and proteins and fluid environment. Does anyone have a scanning technique that records a complete chemical breakdown of every complex component present?

I think they’re grossly underestimating the magnitude of the problem. We can’t even record the complete state of a single cell; we can’t model a nematode with a grand total of 959 cells. We can’t even start on this problem, and here are philosophers and computer scientists blithely turning an immense and physically intractable problem into an assumption.

And then going on to make more ludicrous statements…

Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000). That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly

You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed…how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.

I’m not anti-AI; I think we are going to make great advances in the future, and we’re going to learn all kinds of interesting things. But reverse-engineering something that is the product of almost 4 billion years of evolution, that has been tweaked and finessed in complex and incomprehensible ways, and that is dependent on activity at a sub-cellular level, by hacking it apart and taking pictures of it? Total bollocks.

If singularitarians were 19th century engineers, they’d be the ones talking about our glorious future of transportation by proposing to hack up horses and replace their muscles with hydraulics. Yes, that’s the future: steam-powered robot horses. And if we shovel more coal into their bellies, they’ll go faster!

Comments

  1. says

    Sigh. That is what happens to smart kids who don’t spend their youth with microscopes, scalpels, rock hammers, soldering irons, and oscilloscopes. It all seems so easy from 10,000 feet, and so much more complex once you’ve gotten your hands a bit dirty.

  2. Cuttlefish says

    Throw in something like Schachter & Singer’s “misattribution of arousal” studies, and we find that a simple mechanistic view of “scan the brain, get the consciousness” leaves out the contributions of ongoing interaction with an active environment. Even if you could magically (and it would be magic) overcome the obstacles PZ lists and “perfectly” scan a brain, you now have the equivalent of one frame of a very long movie – many of the contributions to our consciousness and identity are the products of our environment, not stored within us. Even a “perfect” brain scan would be inadequate.

    I know this is not the view of some of the more prominent voices, but it does have the advantage of being right. Trust me, I’m a cuttlefish.

  3. naturalcynic says

    Will the wizz kidz claim that they can bypass the problem by only having to reclaim the really important stuff?? After all, most of what goes on in the brain is housekeeping and has nothing to do with the essence of thoughts and memories.
    And don’t worry about the mind and the soul. Those should be easy to find.

  4. 33lp says

    This also sounds like an explanation for why we can’t have Star Trek transporters.

  5. Andrew G. says

    Now, I have no particular sympathy for the brain-uploading crowd (I think they’re massively underestimating the problem), but this criticism betrays an absolute ignorance of computer simulation:

    You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed…how?

    It’s actually HARDER to run the simulation at a speed constrained to match real time than it is to just let it freewheel at the speed determined by the computational requirements and the available CPU power. Since the interactions between all the subunits are themselves part of the simulation, this presents absolutely no problems.
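
    To make that concrete, here’s a minimal sketch in Python, with step() as a placeholder standing in for the whole model (not anything from the paper). The real-time version has to do everything the freewheeling version does, plus wait on the wall clock:

    ```python
    import time

    def step(state):
        # Placeholder for one tick of the simulated system; a real brain
        # model would update every cell and synapse here.
        return state + 1

    def run_freewheeling(state, ticks):
        # Simulated time advances as fast as the hardware allows.
        for _ in range(ticks):
            state = step(state)
        return state

    def run_realtime(state, ticks, dt):
        # Pinning simulated time to wall-clock time is *extra* work:
        # after each tick we must sleep off the rest of the interval.
        for _ in range(ticks):
            t0 = time.monotonic()
            state = step(state)
            time.sleep(max(0.0, dt - (time.monotonic() - t0)))
        return state
    ```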

    If you’re going to criticize the brain-uploaders for ignorance of biology, then it’s probably a good idea to avoid displaying an equivalent ignorance of programming.

  6. Cuttlefish says

    Wasn’t it only earlier this year, or last, when we were bombarded with the uber-cool news that our food preferences are influenced by our gut flora? We’ve known for some time of the effects on our behavior that the enteric nervous system has. The assumption that scanning a brain is enough is a throwback to Cartesian dualism–except that it is a de facto functional dualism rather than a substance dualism.

    People are conscious, aware, and thinking… as whole people. Functional dualists treat a very complex, active body as if it were nothing more than a meat puppet controlled by a magic brain. There is a reason the “brain in a jar” problem is a thought problem rather than an actual demonstration.

  7. ChasCPeterson says

    And of course the time-limiting step in meatware computation is not the velocity of action-potential propagation, but rather the ‘synaptic delay’ while neurotransmitters diffuse from one cell membrane to the next.

    But yeah. Let’s scan and reconstruct a working Aplysia brain first, as proof of concept.
    (oops, still too insanely complex)

  8. theophontes (坏蛋) says

    @ OP

    What you’re going to be recording is the dying brain, with cells spewing and collapsing and triggering apoptotic activity everywhere.

    Useless humans are, as electronic undead.

    At last, we have an insightful reason as to why Tardigrades will rule the cybernetic future. I, for one, suggest you all welcome your invertebrate oberlawds.

  9. says

    AI=Artificial Intelligence.

    Andrew G: No, you don’t understand. Part of this magical “scan” has to include vast amounts of data on the physics of the entity…pieces which will interact in complex ways with each other and the environment. Unless you’re also planning to build a vastly sped up model of the whole universe, you’re going to have a simulation of a brain running very fast in a sensory deprivation tank.

    Or do you really think you can understand how the brain works in complete isolation from physiology, endocrinology, and sensation?

  10. kevinalexander says

    Even if you could upload a human brain, why would you want to considering the infinite ways they fuck up?

    I mean, wouldn’t you just get a device that fucks up even faster?

  11. Nerd of Redhead, Dances OM Trolls says

    Let’s see: if the brain (~1.5 kg) were imaged at the molecular level, the water (~78% of the brain by mass) alone would be ~3.9e25 molecules to account for, not counting the other components of the brain. Not a trivial task.
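
    Back-of-the-envelope, assuming ~1.5 kg and ~78% water by mass:

    ```python
    AVOGADRO = 6.022e23        # molecules per mole
    WATER_MOLAR_MASS = 18.0    # grams per mole

    brain_mass_g = 1500.0      # ~1.5 kg human brain
    water_fraction = 0.78      # ~78% water by mass

    molecules = brain_mass_g * water_fraction / WATER_MOLAR_MASS * AVOGADRO
    print(f"{molecules:.1e}")  # ~3.9e25 water molecules alone
    ```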

  12. Matt Penfold says

    Andrew G: No, you don’t understand. Part of this magical “scan” has to include vast amounts of data on the physics of the entity…pieces which will interact in complex ways with each other and the environment. Unless you’re also planning to build a vastly sped up model of the whole universe, you’re going to have a simulation of a brain running very fast in a sensory deprivation tank.

    Or kind of like thinking you can speed up a computer program by running it on a faster processor when the bottleneck is not processor speed but how fast data can be fed to the program.

  13. says

    I think that if you’re ever going to get to the “upload immortality phase” (which IMHO is a bland consolation prize rather than actually beating death)* you’d have to first meet the tech halfway and start engineering human brains to be more like computers.

    *The idea of uploading to me is not personally desirable, though it has benefits to leave as a legacy for others. It’s a copy, not an upload.

  14. says

    Since we’re talking magic future science, would a better idea for “uploading” be a machine ‘infestation’? Basically, you use nanomachines to gradually convert the human brain from analog to digital during the person’s lifetime, replacing cells with nanite versions of cells.

  15. Andrew G. says

    @ Cuttlefish: given the assumption of enough computational power to simulate a brain, adding a simulated body is no big deal. Likewise a simulated environment.

  16. says

    given the assumption of enough computational power to simulate a brain, adding a simulated body is no big deal. Likewise a simulated environment.

    Citation definitely needed.

  17. says

    That’s just silly… everybody knows that ponies aren’t real, robot or otherwise!

    I’m not sure how people think we’re supposed to copy human brains by just doing stuff we can already do, only more of it.

  18. Ruth says

    Like Ing said, it’s not an ‘upload’, it’s a copy. Leaving aside the practical problems, how does being able to copy yourself lead to ‘immortality’ any more than cloning yourself would? The copy isn’t *you*. *You* are still *you*, and *you* will still die.

  19. jerthebarbarian says

    kevinalexander @14

    Even if you could upload a human brain, why would you want to considering the infinite ways they fuck up?

    Because you’re obsessed with immortality and think that a simulation of your brain that will be able to run forever (so long as someone is there to tend to the system) counts as living forever.

    And yeah – this would be a form of Artificial Intelligence. A form where a digital simulation of your brain runs on a computer. It’s not immortality except in the sense that cloning a perfect duplicate of yourself would be immortality.

    Let’s work on actually getting a good computational model of the brain working on the simulator, shall we? This is actually serious and important work that helps us figure out how the brain functions. And that’s more important than figuring out how to make a digital clone of a person, no matter how afraid of death they may be.

  20. says

    Like Ing said, it’s not an ‘upload’, it’s a copy. Leaving aside the practical problems, how does being able to copy yourself lead to ‘immortality’ any more than cloning yourself would? The copy isn’t *you*. *You* are still *you*, and *you* will still die.

    Only possible use I can see is to store the wisdom of the elders, à la Superman’s Jor-El Floaty Head thing.

  21. remyporter says

    There seems to be this belief that we don’t actually need to know how the brain works: if we just make a copy within a sufficiently accurate physics model, the details will work themselves out without any intervention on our part.

    We can do some pretty impressive brain simulations with rat and cat brains running in gigantic super computers, but that’s a vast difference from uploading a rat or cat brain, and an even vaster difference from simulating a human brain in a computer.

    What many singularitarians tend to forget is that we already have computers capable of perfectly simulating human brains — they’re called human brains. Unfortunately, due to architectural constraints, we can’t separate the state from the hardware. Our brains are computationally similar to universal Turing machines, but evolution didn’t pay any attention to Turing or von Neumann. Even so, the easiest way to build a simulated brain that executes certain desired states is to have children and train them well.

    Also, the assumption that we can get away with just the brain is pretty insane too. The human body isn’t modularized quite like that. Much of what happens in your brain is controlled by your endocrine system. Even if we grant that you can simulate brain activity, that’s not the same as simulating a person.

  22. says

    It just seems that uploading is a bit of a short-sighted scifi idea (yet ironically its proponents hilariously underestimate the timetable for it). Why upload when you could work on conversion? Work on actually reinforcing the original into a more stable, less fragile state rather than copying it.

  23. anteprepro says

    I always thought that the idea of transferring a “mind” from an organic brain into digital format via TECHNOLOGY! sounded like a ridiculous, impractical idea. Glad to see my doubts supported by someone who knows more about brains than I do.

  24. NateHevens says

    Last week’s episode of Through the Wormhole with God Morgan Freeman was about whether or not we could cheat death, and this very idea was shown on the episode.

    Now, in general, I’m a pretty big fan of Through the Wormhole, but they sometimes get into the existence of God, a sixth sense, the supernatural, and stuff like this, and it’s… not so good (even Morgan himself has admitted that he doesn’t really believe in God, and that he’s a naturalist… though he didn’t use those exact words).

    The episode on cheating death talked about transplanting consciousness into robot bodies, this brain scanning you talked about, and so on. Not as good as an earlier episode about consciousness (“Who Are We Really?” or something like that), which talked about changing memories, recording dreams, and so on. The series’s next episode appears to be about evil.

  25. says

    It’s actually HARDER to run the simulation at a speed constrained to match real time than it is to just let it freewheel at the speed determined by the computational requirements and the available CPU power.

    Unless there are built-in timing assumptions – then you have race conditions all over the place. I’d expect that humans have all kinds of cool delays built into us, such as for muscular activity to commence after it’s been given the order to… I was involved in porting an SMP UNIX kernel from slower hardware to a faster version once, back in the early 80s, and there are a lot of gotchas. If the system assumes that a hard drive will be ready in N ticks, but now suddenly it’s 100N or N/100, you can miss an interrupt and not have the network controller correctly wake up, etc.
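
    A toy version of that class of bug (not the actual kernel code, obviously): a driver that “waits” with a delay loop calibrated on slow hardware, racing a device whose delay is fixed by physics:

    ```python
    import threading
    import time

    ready = False

    def device():
        # A peripheral that becomes ready after a fixed *physical* delay;
        # the delay doesn't shrink just because the CPU got faster.
        global ready
        time.sleep(0.01)  # 10 ms
        ready = True

    def driver(spin_iterations):
        # "Wait" by spinning a loop whose count was calibrated on old,
        # slow hardware. On a CPU 100x faster, the loop finishes long
        # before the device does -- a missed interrupt, a hung driver.
        for _ in range(spin_iterations):
            pass
        return ready

    threading.Thread(target=device, daemon=True).start()
    print(driver(100_000))  # on fast hardware: probably False
    ```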

    Another interesting possibility is that our “consciousness” is evolved on top of what amounts to a great big status-monitoring executive that tracks the state of the various components and subsystems of our bodies, analyzes our position/attitude in space, etc. If we simply “upload” the brain into a software construct “consciousness” might crash and burn because those subsystems are sending the wrong or absent signals. We imagine simulating them – so perhaps the CPU fan controller would send a signal claiming to be a “heartbeat” so the consciousness was happy. Otherwise, the consciousness might just be screaming “OMG! MY HEART STOPPED!” and be too busy with that to think beautiful thoughts about Kurzweil-heaven. But in order to adequately simulate the inputs from those sensors we’d need to feed them what amounted to a false reality (otherwise, being a brain in a box would be disconcerting) – this is not a minor problem! What if the software brain immediately suffered excruciating pain from “phantom body” or a variety of “phantom limbs”?

  26. says

    @PZ @12:

    Questions from a naive astronomer:

    Are you saying that in addition to the magic scanning technique, you’d need either to scan the entire body and construct a detailed virtual environment as well, or to understand how the brain works so well that you might as well just build a separate AI from scratch?

    Re. the insanity of scanning the entire brain molecule by molecule: even if this were possible, how long would it take? No fixation method can be perfect, so the scan wouldn’t even be recording one state of the brain – whichever part of the brain they record last will presumably be the most degraded, but how does a nervous system fixed in (say) liquid helium decay?

    I’m reminded of what little I know of cryonics, where the person to be frozen is pumped full of enough formamide and ethylene glycol (among other cryoprotectants) to kill them if they weren’t already dead. The assumption is made that somebody will be able to revive them from all of that without destroying all of the information in their brain in the process. Is that not quite so insane as scanning the brain molecule-by-molecule, but still impossible?

  27. says

    What if the software brain immediately suffered excruciating pain from “phantom body” or a variety of “phantom limbs”?

    Interestingly (for some of you), the new Doctor Who actually used this idea for the new Cybermen. They’re brain-in-jar-type cyborgs, and without part of the brain disabled (the emotions), the feeling of being in such a state is so overwhelming that most commit suicide, à la head asplode.

  28. says

    BTW, without a reality to be in, being a software brain in a box isn’t a whole lot of fun at all. So, while they’re at it, they’d need to figure out a good reality simulator and all the underlying device-mappings between that simulation and the software brain’s evolved-in ways of dealing with reality. So, let’s say you have a space of 3d textured geometries for the software brain to “live” in: you’d need to either change the brain’s notion of what “seeing” is (inventing thereby “sight 2.0”) or you’d need to convert the 3-space to something that could be consumed by “sight 1.0” and feed the trans-coded results into the software brain’s inputs. Otherwise, it’s “enjoy your eternal blackness, ‘cuz it’s very black and very eternal.”

  29. Andrew G. says

    Andrew G: No, you don’t understand. Part of this magical “scan” has to include vast amounts of data on the physics of the entity…pieces which will interact in complex ways with each other and the environment. Unless you’re also planning to build a vastly sped up model of the whole universe, you’re going to have a simulation of a brain running very fast in a sensory deprivation tank.

    Or do you really think you can understand how the brain works in complete isolation from physiology, endocrinology, and sensation?

    You don’t need a vastly sped up model of the whole universe, duh. (Unless you’re one of the quantum-consciousness-woo believers of course.)

    Yes, your simulation has to take into account physiology (and sensation if you don’t want your simulated brain to go nuts in short order) but these problems are not fundamentally more complex than simulating the brain itself (because there are fewer connections) and thus don’t present any additional barriers. Plus, as you go outwards the problems become substantially easier – for example, unless you actually choose to provide your simulated human with a simulated physics lab, you don’t need anything more complex than classical mechanics to provide them with an environment good enough to keep them sane.

    It’s a valid criticism of brain-uploading that doing just the brain isn’t enough (you also need a body); but the idea that once you have a working simulation it is somehow constrained to run only in real time is preposterous.

  30. says

    So, let’s say you have a space of 3d textured geometries for the software brain to “live” in: you’d need to either change the brain’s notion of what “seeing” is (inventing thereby “sight 2.0”)

    Ugh…and I bet the developers of that virtual world would still make the mistake of making everything brown and exploding with bloom

  31. says

    but these problems are not fundamentally more complex than simulating the brain itself (because there are fewer connections) and thus don’t present any additional barriers.

    You realize I just LOLed at this right?

  32. Trebuchet says

    Once you’ve uploaded the personality, you can give it a holographic body. Just be sure to put a chrome “H” on the forehead so people can tell.

  33. says

    It’s a valid criticism of brain-uploading that doing just the brain isn’t enough (you also need a body); but the idea that once you have a working simulation it is somehow constrained to run only in real time is preposterous.

    All you’ve convinced me of is that these transhumanists shouldn’t be let anywhere near a modeled mind even if we get it to work, because y’all will wind up being increasingly oblivious and cruel to it as you stumble around the stimulus question. It’d be like watching clueless dolts motorize a hamster’s exercise wheel to ‘help’ the poor little bugger exercise.

  34. Antiochus Epiphanes says

    A little late, geniuses. Kim Jong Il perfected that technology in the early aughts and now lives a divided life alternately within Tiger Woods PGA TOUR® Online and the latest iteration of World of Warcraft. His latest transmission to the meat-people is thus: “Eat my divots, skull-noo8s!”

  35. says

    There’s nothing particularly “philosophical” about the ideas of the singularitarians. In fact, one of the most vehement and long-time critics of the AI dream of the mind recreated in silicon is Hubert Dreyfus, whose inspiration is the philosophy of Martin Heidegger, who, like Cuttlefish, emphasized that consciousness is always and necessarily embodied.

    You can certainly investigate the material properties of the nervous system and that information is highly relevant to figuring us out. Relevant but never sufficient. Mental activity doesn’t occur in the brain in a simple sense, not because thought is made out of some mysterious mind stuff but because absent a world it really is just chemistry and electrical impulses. Even if you could get around the practical obstacles to duplicating or completely understanding the nervous system as a physical object, you wouldn’t understand human mental life for the same reason that you can’t do literary criticism by doing an autopsy on an author’s typewriter.

  36. NateHevens says

    For the record, I would desperately love something that can record my thoughts and dreams and shit. I have an overactive mind, sometimes so bad I can’t shut it off and get to sleep. I would love something that I could just plug into my head and load all that crap I’m thinking onto a hard drive. It has the potential of making my life a lot easier…

  37. Antiochus Epiphanes says

    For the record, I would desperately love something that can record my thoughts and dreams and shit.

    It would be cool if we had the technology to transmit our thoughts to others.

  38. Andrew G. says

    It would be cool if we had the technology to transmit our thoughts to others.

    That’s almost certainly a vastly harder problem than “brain uploading” would be.

  39. unbound says

    Alright, this is false advertising. Where were the instructions to get my robot pony?

  40. pieris says

    The cerebral cortex has something like 10 billion neurons, each with 30,000 connections to other neurons. It’s not just the number of neurons, which is enormous, but the amount of connectivity that underlies human consciousness. That, of course, is just the structural complexity. The way a neuron functions must also be taken into account. It’s true that a neuron is digital in a sense, resembling a transistor in a computer with two states, either generating an action potential (1) or resting (0). However, the state of a transistor is controlled by a single input voltage, while a cerebral neuron sums up thousands of inhibitory and excitatory synaptic inputs over time. Only when the sum exceeds a certain level of depolarization will the neuron transmit an action potential. Now imagine 10 billion neurons functioning in concert and responding not just to the internal activity we call consciousness but also to external stimuli through sensory input, and you can better understand PZ’s assertion that the singularitarians have grossly underestimated the magnitude of the problem.
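
    That sum-to-threshold behavior is essentially the textbook leaky integrate-and-fire model. Here is a toy sketch with illustrative parameters, not real cortical values:

    ```python
    import random

    def lif_neuron(inputs, threshold=1.0, leak=0.95):
        # Leaky integrate-and-fire: sum excitatory (+) and inhibitory (-)
        # synaptic input over time; spike only when the accumulated
        # depolarization crosses threshold, then reset.
        v, spikes = 0.0, []
        for synaptic_sum in inputs:
            v = v * leak + synaptic_sum
            if v >= threshold:
                spikes.append(1)
                v = 0.0
            else:
                spikes.append(0)
        return spikes

    # Thousands of tiny inputs per timestep, net excitatory on average.
    drive = [sum(random.uniform(-0.001, 0.0012) for _ in range(3000))
             for _ in range(100)]
    print(lif_neuron(drive))
    ```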

  41. ChasCPeterson says

    let’s perfect brains-in-vats technology before trying the slice-and-replicate thing, eh?

  42. paulfchristiano says

    In fairness, Anders (author of the academic text under discussion) has a PhD in neuroscience, not philosophy. No doubt he knows much less biology than you, but he has at least thought about this particular problem a lot. Note also that the timescales being discussed here are very long.

    I agree that discussion about technology often simplifies things to an offensive extent, but this post isn’t an exception. For example, you can probably see why an intelligent observer might think we don’t need to preserve much epigenetic or chemical information, or more generally why preserving a brain might be easier than preserving a detailed picture of a single cell. So why argue against wacky straw men, rather than talking about the actual disagreement, if you want to talk about anything at all? (Which seems implicitly to be about the level of detail necessary to capture the important parts of a brain.)

    If you have a simulation, you change the speed by running it faster. This always preserves functionality, and always makes it go faster. The point of discussing characteristic timescales of the phenomena being simulated is to understand how expensive a simulation is likely to be. Again, there is probably some real disagreement here about how hard the system is to simulate, but none of the things you have written are very compelling on that point, and I expect if you thought about it more you’d end up on basically the same page.

  43. says

    You know, when your argument for brain upload was refuted over a decade ago by a Star Trek technical manual explaining why their magic replicators can’t produce living things, you really need to rethink your thesis.

  44. Cuttlefish says

    given the assumption of enough computational power to simulate a brain, adding a simulated body is no big deal. Likewise a simulated environment.

    Two thoughts. First, the unsimulated environment includes people with functioning brains themselves. Given how much difficulty physicists continue to have with the three-body problem, making those three bodies actual thinking humans is not going to make it easier.

    Secondly… while you are simulating a body, can you maybe start with a functional pancreas for my son? The ease of simulating a body, it seems to me, depends on how close an approximation to the real thing you want to make.

    There was a reason the behaviorists studied learning in an operant chamber, or chemists use clean test tubes: the real world is complex, messy, and dirty. You say “no big deal”; I sense more hand-waving than I can achieve with all tentacles going.

  45. Nerd of Redhead, Dances OM Trolls says

    If you have a simulation, you change the speed by running it faster.

    That implies you know the brain states to the molecular level. You don’t. And that is where the problem lies for biologists and chemists, which you so easily ignore. I calculated ~3.9e25 molecules of water in the brain. You need to know the exact positions of the atoms therein, and their orientation, along with the molecular orientation. Same is true for the other atoms and molecules present in the brain. And you need to be able to scan it in a reasonable length of time. Forget the simulation; look at what is necessary for the simulation to be accurate.

  46. says

    Cuttlefish:

    Wasn’t it only earlier this year, or last, when we were bombarded with the uber-cool news that our food preferences are influenced by our gut flora? … People are conscious, aware, and thinking… as whole people.

    That’s true, but can be overstated. We recognize someone as the same person even if they lose a large part of their gut to disease or surgery. Even if their taste for food changes. And to your previous point, we recognize them as the same person even if injury causes a coma and interrupts the “movie” of consciousness. Including retrograde amnesia, which is quite common.

    I agree with PZ about the difficulty of uploading or simulating a brain. But if we could, I think it would lead to precisely the bundle of sci-fi issues around person replication because it’s those habits and trail of memories and associations that we recognize as person.

  47. colonelzen says

    Re Cuttlefish @ 3

    I’m a big believer that in the next ~20 years we will have AIs of human level (and soon after, better) derived in part from *functional* schemata. But for all the reasons PZ lists and a few others, I am more than skeptical that physical scanning of brains on a large scale will be useful.

    But …

    (According to functionalist models of mind and consciousness – mostly my contemplations from Dennett with a little bit of Edelman and others thrown in, otherwise there being no general consensus in philosophy or science other than the large consensus of materialism among scientists …)

    The past experience including environmental and social interaction is almost certainly needed to *generate* consciousness at our level. It is not needed – in the short term at least – to maintain consciousness. The past history of the environmental interaction which (contributes to and is necessary to) generate consciousness is indeed stored – physically – in the brain. The brain is constantly growing, changing, and rewiring based upon new experiences.

    Given a (likely never possible, and quite possibly theoretically impossible given the size and number of some synaptic gaps and the prevalence of quantum effects at that scale) perfect scan, complete knowledge of how to use it, and the requisite computational power, yes, a simulation could be run and such a “brain” interrogated about the subject: its likes and preferences, desires and intentions from when it was still biotic (and this assumes knowledge of an interface which can appropriately mimic the real biological sense interactions to do such interrogation). But I suspect that without the environmental interactions, including most particularly the ongoing muscular/kinesthetic ones, which I suspect are our most profound foundation of identity, said brain would deteriorate psychologically (even with perfect emulation of the cellular behavior!) very rapidly.

    Possibly all of the sense interactions, including kinesthesia, the environmental, even the social, could in theory at some point (if we mapped and understood all nerve connections through the spine into the brain and out to the physical body) be likewise simulated, keeping said simulated brain psychologically “sane” indefinitely, but why?

    At that point you have not a person who was once alive but a simulation “living” in a complex video game. Sure, given the chance and with no other option I’d take it… but I understand the reality that me now is not me of ten minutes ago; there is just memory and physical continuity connecting us. This simulation would have that same memory; but once it’s “off,” the ethics should be no different than for an *entirely* artificial intelligence – one that would undoubtedly be more intelligent and have richer memories, since it isn’t burdened with emulating blobs of protoplasm.

    — TWZ

  48. says

    but these problems are not fundamentally more complex than simulating the brain itself (because there are fewer connections) and thus don’t present any additional barriers.

    Because simulating reality accurately enough to fool a brain – when brains are themselves a subset of that reality – is a smaller problem than simulating the subset?

    I admit I do find the idea of experimenting with the physics model in sim-land would be fun. Could you do DOOM-style rocket jumps? Or find errors in the geometry clipping that let you sneak behind the wireframes and rummage around in core memory? Back in the days when I was coding MUDs I used to argue that “magic is just getting access to the underlying rules of your simulation’s object model” so, yeah, you could experience what it’s like to be “god” in a very small universe.

    Whenever I hear someone talk about transplanting humans into a radically different environment I think “uh-oh” – because there’s all this little stuff that might get overlooked. Would our simulated bodies not feel right if we didn’t have simulated intestinal bacteria, too? Would we be able to be fully human without having “knees” to fall down and skin on concrete?

    The religious books of godbotherers have these ideas of “paradise” where the streets are paved with gold and everyone drives their 47 virgins around in their Lexus, but aren’t those images as naive and simplistic as the “uploader’s paradise”? Wow, we could upload to a virtual reality – and what if we found out that our consciousness (which is evolved as an emergent property of brains that exist in reality) interprets that “paradise” as pointless pixels? I wouldn’t want to be god in a simulated reality if I knew it was simulated. In fact, the most upsetting thing I’d imagine about being a brain in a simulator would be the constant nagging awareness that I was dead.

  49. says

    Put another way: all of the things I recognize as “the good parts” of being alive involve that I’m made of meat.

    Unless the uploaders can solve that, then they’re offering a ghostly existence that’s as interesting as being one of the extra characters in Metal Gear Solid or a simulated rendered orc in LOTR.

  50. carlie says

    Wasn’t it only earlier this year, or last, when we were bombarded with the uber-cool news that our food preferences are influenced by our gut flora?

    And it was recently in the news that mood can be influenced by gut flora, too. Science Friday story

  51. prae says

    I still don’t understand why biologists insist that you have to do a perfect simulation, down to the smallest molecule, and then state the obvious fact that it’s not going to happen.

    I think the first step in mind uploading would be to figure out the behaviour of the relevant cells and create a model of that behavior which is “good enough”, and use that for the simulation. If you make a NES emulator, you don’t simulate the electric fields in its transistors, you simulate the CPU’s function.
    Well, if you really can’t find a simplified model and have to do it on a per-molecule basis, then it is pretty much impossible. But starting with that assumption without even trying?
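
    To illustrate the NES point: a functional emulator models the instruction set, not the electrons. A toy fragment using two real 6502 opcodes (the NES CPU is a 6502 variant; flags and carry are ignored for brevity):

    ```python
    def step(cpu, memory):
        # Execute one instruction at the functional level.
        opcode = memory[cpu["pc"]]
        if opcode == 0xA9:    # LDA #imm: load accumulator
            cpu["a"] = memory[cpu["pc"] + 1]
            cpu["pc"] += 2
        elif opcode == 0x69:  # ADC #imm: add immediate (carry ignored)
            cpu["a"] = (cpu["a"] + memory[cpu["pc"] + 1]) & 0xFF
            cpu["pc"] += 2
        else:
            raise NotImplementedError(hex(opcode))

    cpu = {"a": 0, "pc": 0}
    program = [0xA9, 0x02, 0x69, 0x03]  # LDA #2; ADC #3
    step(cpu, program)
    step(cpu, program)
    print(cpu["a"])  # 5 -- no transistor physics required
    ```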

    The other thing I do not understand is why some people assume that the simulation has to run isolated, without input or output. Seriously, what would be the point of this?
    I think most people would want to continue to live in the world they used to, so of course they would need a new body, which must be able to gather sensory information and feed it to the simulated brain, and obey the motor commands coming from it. You might also need to send some fake status signals like “yes, the heart is still beating”, but I assume such things will be discovered by trial and error.

    One of the few points I agree on is that a copy of my dead brain is not me. If I can’t get uploaded while my brain is still running, there is no point in attempting it.

  52. Cuttlefish says

    colonelzen@#58

    The past history of the environmental interaction which (contributes to and is necessary to) generate consciousness is indeed stored – physically – in the brain.

    You will find tremendous agreement with that statement from much of the scientific community, but not from all, and not from me. Within the assumptions of a mechanistic philosophical stance you are safe, but from a contextualist stance the notion is gibberish–consciousness is only defined in terms of interactions that are necessarily extended in time. Brain states may be a necessary component, but they are not sufficient to “be consciousness”.

    The question is, which philosophical model is appropriate for this task? IMNSHO, the mechanists have got this one very wrong.

  53. bromion says

    I once told an entire audience listening to a John Smart futurism lecture that he was basically full of shit (during the q&a — I let the guy finish!). That was satisfying.

    You have to understand that these futurists are playing science and engineering. They read popularized accounts of research and jump to the most convenient conclusion. It’s annoying, but it’s sad that so many people fall for their bunk. Ah well, no worse than Chopra, eh?

  54. Cuttlefish says

    prae@#62

    I still don’t understand why biologists insist that you have to do a perfect simulation, down to the smallest molecule, and then state the obvious fact that it’s not going to happen.

    It seems to me a bit of a reductionist fetish–part of what I interpret as that functional brain-body dualism. I agree that these elements may not be necessary to focus on, but I would go a step further and suggest that the entire question is being defined at the wrong level–the question of consciousness is one of psychology, of whole and interacting organisms, which is dependent on but not defined by the brain states of the organisms in question (let alone by the individual molecules).

  55. Azkyroth, Former Growing Toaster Oven says

    But yeah. Let’s scan and reconstruct a working Aplysia brain first, as proof of concept.
    (oops, still too insanely complex)

    …creationist brains?

  56. says

    If you have a simulation, you change the speed by running it faster. This always preserves functionality, and always makes it go faster.

    Do you know why engines have “red lines”? Because not everything can just go infinitely fast. Your simulation is going to run into race conditions that’ll cause it to crash unless you remove them.

    Here’s a thought-experiment:

    Suppose you build a nearly perfect simulation of a V-8 engine. It simulates everything – heat exchange, valve timing, the bearings and crank, etc. I said “nearly” to allow some wiggle-room because a perfect simulation of an engine would be “a real engine.” Now, you speed it up. If it’s a nearly perfect simulation of a V-8 engine, it’ll blow up if you run it for a while at 10,000rpm. Because a nearly perfect simulation will also simulate that the lubrication system won’t be able to keep the bearings lubricated at that speed and – just like a real engine – it’ll blow up eventually. If you go in and tweak the simulation so that no longer happens, then it’s no longer a simulation of an engine at all – it’s now a model of an ideal V-8 engine operating in a non-real environment in which there are perfect bearings that don’t need lubrication, or where there is lubricant that never suffers viscosity breakdown and which magically teleports to where it’s needed. In other words you’d need to have a nearly perfect simulation of a V-8 engine, that was “fine tuned” if you will to operate at higher speeds than it could in physical reality.

    Consider a near-perfect simulation of a body in data-space. Now consider that the body relies on physical processes that might not also scale accurately if you speed up the clock.

  57. says

    I still don’t understand why biologists insist that you have to do a perfect simulation, down to the smallest molecule, and then state the obvious fact that it’s not going to happen

    Because that’s what is requested. Immortality via simulation! Anything less is just going to be a very nice effigy that is programmed to trick onlookers into thinking it’s you.

    Frankly, even a success would just be a nice effigy, but this is the difference between a copy and a knock-off.

  58. Robert M. says

    Of course a coal-fired pneumatic pony is naive. This is totally different, really it is. Because of Moore’s Law and nano-stuff and graphs that show technological gains increasing forever, you can’t assume that progress will stop because of physical limits or anything.

    Laugh all you want, but my pony will have its brain uploaded so we can be together in digital heaven. Why are you all staring at me like that?

  59. says

    I still don’t understand why biologists insist that you have to do a perfect simulation, down to the smallest molecule, and then state the obvious fact that it’s not going to happen.

    Because otherwise where do you stop? Are you comfortable with the 8-bit 256-color pixelated Atari-800 version of you? It’s certainly not you.

  60. carlshulman says

    “I read the paper he recommended: it’s by a couple of philosophers.”

    The main author of that piece, Anders Sandberg, is trained as a neuroscientist, and it summarizes the results of a workshop of mainly neuroscientists and others in related technical fields.

    “You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits?… We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.”

    This seems to assume, contrary to the authors, running a brain model at increased speeds while connected to real-time inputs. For a brain model connected to inputs from a virtual environment, the model and the environment can be sped up by the same factor: running the exact same programs (brain model and environment) on a faster (serial speed) computer gets the same results faster. While real-time interaction with the outside would not be practicable at such speedup, the accelerated models could still exchange text, audio, and video files (and view them at high speed-up) with slower minds.

    And the brain looks embarrassingly parallel (signal speeds and spiking rates are orders of magnitude slower than electronic computer components) so that using more hardware should give speedups, as more cores let each core model fewer compartments faster.
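
    In scheduling terms, a sketch that assumes, hypothetically, that compartments interact only through state exchanged between timesteps; a real model would also need communication between neighboring compartments:

    ```python
    from multiprocessing import Pool

    def update_chunk(chunk):
        # Stand-in for one timestep of the compartments this core owns.
        return [v * 0.99 + 0.01 for v in chunk]

    if __name__ == "__main__":
        n_workers = 8
        compartments = [0.0] * 1_000_000
        size = len(compartments) // n_workers
        chunks = [compartments[i * size:(i + 1) * size]
                  for i in range(n_workers)]
        with Pool(n_workers) as pool:
            # More cores -> fewer compartments per core -> each simulated
            # timestep completes in less wall-clock time.
            results = pool.map(update_chunk, chunks)
        compartments = [v for chunk in results for v in chunk]
    ```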

    The paper discusses a range of possible levels of detail about a brain that might be required to effectively model it, including very fine-grained chemical state.

    “we can’t model a nematode with a grand total of 959 cells”

    The paper also emphasizes the weakness of current computational models and the need for massive progress in that area. They are not claiming that there will be robust brain emulations in 10 years, or that we have the ability to accurately model small brains and just need to scale that up. If we could create reliable emulations of C. elegans’ nervous system that could control robotic worm-bodies indistinguishably from the real thing, we would have solved the biggest problems already.

    “If singularitarians were 19th century engineers, they’d be the ones talking about our glorious future of transportation by proposing to hack up horses and replace their muscles with hydraulics. Yes, that’s the future: steam-powered robot horses. And if we shovel more coal into their bellies, they’ll go faster!”

    Which singularitarians? The authors of the paper don’t claim that brain emulation will precede other forms of powerful general AI. Ray Kurzweil (who is most strongly associated with the term “singularitarian” in the public mind) only claims that insights from studying the brain will help to improve AI software, not that the brain will be slavishly copied. Perhaps the economist Robin Hanson?

  61. bromion says

    PZ is right in pointing out the ridiculous flaws in the brain-scanning scheme. On the entire topic of “consciousness uploading” however — we just really don’t know enough to even have a reasonable grounded conversation on the matter. Do we need to model the entire physical brain or will a different model suffice? Nobody knows. There may be some underlying science of consciousness that we haven’t yet developed that simplifies the situation, or makes the task even harder.

    One thing is clear: whole brain simulation is heinously complex. But even there, we don’t know the functional scale at which modeling would need to occur. Neurons? Signals? Neurotransmitters? Molecules? Subatomic particles? Rest assured, this is being researched by big boy (and girl!) scientists and engineers. Let’s wait to see what they find before we all buy saddles for our robot ponies.

  62. says

    I still don’t understand why biologists insist that you have to do a perfect simulation, down to the smallest molecule, and then state the obvious fact that it’s not going to happen.

    Errm, because that’s what the singularitarians we’re critiquing are proposing? This whole slice-and-scan proposal is all about recreating the physical components of the brain in a virtual space, without bothering to understand how those components work. We’re telling you that approach requires an awfully fine-grained simulation.

    An alternative would be to, for instance, break down the brain into components, figure out what the inputs and outputs to, say, the nucleus accumbens are, and then model how that tissue processes it all (that approach is being taken with models of portions of the hippocampus). That approach doesn’t require a detailed knowledge of what every molecule in the tissue is doing.

    But the method described here is a brute force dismantling and reconstruction of every cell in the brain. That requires details of every molecule.

  63. ixchel, the jaguar goddess of midwifery and war ॐ says

    It may sound less likely than biological life extension, but just in case, we will have to hunt and kill the people who try to develop this technology, to ensure that we are not enslaved by demigods until the heat death of the universe.

  64. lucifer says

    If singularitarians were 19th century engineers, they’d be the ones talking about our glorious future of transportation by proposing to hack up horses and replace their muscles with hydraulics. Yes, that’s the future: steam-powered robot horses. And if we shovel more coal into their bellies, they’ll go faster!

    So if PZ were a 19th century observer of these 19th century engineers, he would conclude that automobiles are technically impossible?

  65. Scientismist says

    I agree with Andrew G — once you “simulate” both a brain, with its consciousness, plus a body, plus an environment, you don’t need to keep the same clock speed. Of course, you have the problem of limits and edge conditions. You either have to simulate the entire universe, or decide where the “border” is, at which point you have some timing, hysteresis, and impedance problems to solve where the simulation interacts with the “real” world; but that’s probably not insuperable.

    My problem with the whole idea is, how much do you have to do to make it something beyond pointless? As you introduce ever more abstract simulations and substitutions (no heartbeat? simulate a good classic “lub-dub”; no intestinal flora? simulate that at some more abstract level than the lives of individual microbes, etc.), the entire thing becomes more “artificial intelligence” than the capturing of a previous biological human mind. Why not use all that computer power to “evolve” an intelligent entity that can understand its status as such from the get-go?

    And, of course, “quantum consciousness” will be part of any thorough simulation. You can leave quantum considerations out (and may have to), but they are definitely real (and not necessarily woo, though QM does lend itself to woo). Can you imagine simulating evolution without modeling the mutations caused by cosmic rays, or keto-enol shifts and quantum electron sharing in DNA bases? Such quantum-level chemistry is undoubtedly part of neurochemistry at the nitty-gritty level. Yeah, you can get something that looks like the real thing — but why not be satisfied with the “Meet Mr. Lincoln” at Disneyland, if you’re not going to go all the way? Or a Gustav Klimt abstract painting? (“The Kiss” — see the Google Doodle for today, 14 July 2012.)

    I am reminded of the Star Trek Next Generation episode, where Data plays Holmes on the holodeck, and Moriarty figures out that he is himself a simulation. They end up keeping Moriarty “alive” as a simulation in a computer-cube that Picard keeps on his desk. In the movie “Generations,” when Picard’s Enterprise crashes, he saves his flute, from another NG episode, and Data rescues his cat, Spot; but, to my disappointment, there is no mention of Moriarty’s cube. I think such a cube is what all this is trending toward, with the same question as that posed by the fictional story: Is there a point to this, other than the emotional attachments of real evolved biological human beings? And the failure of the writers to include the cube in the crash aftermath scenes suggests that perhaps they didn’t think it really had a point.

  66. says

    Do we need to model the entire physical brain or will a different model suffice? Nobody knows. There may be some underlying science of consciousness that we haven’t yet developed that simplifies the situation, or makes the task even harder.

    A lot of fairly strong arguments have been posted in this thread, which belie the view that “nobody knows” – the problems that have been identified, and that have to be overcome, are not a matter of “nobody knows” by any means.

    Certainly, we might not have to model the entire physical brain. But then we’d be talking about something different from “uploading” a person. Suppose there is some underlying science of consciousness that eventually allows us to create working artificial intelligences (whatever that is) that are indistinguishable from a human. Now, you take one of those AIs and call it “Marcus Ranum” – it’s not me. It’s an AI called “Marcus Ranum.” Then you take that AI and load a gigantic knowledge-base into it consisting of templates (perhaps extracted from my brain) for how Marcus Ranum would behave. It’s still not me. It’s just an AI that thinks it’s me. It might even be an AI that could fool you into thinking it’s me. But other than both of you being deluded into thinking it’s me, what makes it me?

  67. ixchel, the jaguar goddess of midwifery and war ॐ says

    Why would I want an upload?

    You have the good sense not to.

    The problem here though is that uploading doesn’t even need to create a conscious copy in order to reach what they’re apparently calling the 6a success criterion — “can have conversations and replace human workers” — which would make most biological humans (the ones who don’t own millions of simulated worker drones) into a permanent underclass.

  68. Loqi says

    Ugh. I worked on AIs in college, and I think we’re much better off constructing them ourselves. Building a real AI from scratch through better understanding of how the brain functions is a lot more realistic than “uploading.”

    Of course, that’s not really what these guys want because it doesn’t involve them getting copied, is it?

  69. bromion says

    Marcus — that is the “transporter” problem. Is a perfect or good-enough copy the same thing? If you hop in the transporter and are vaporized then reconstituted on the other side, are you dead and some other copy of you is running around thinking it’s you? What if it’s 99.99% accurate? What level of fidelity in the copy is necessary and at what scale? When it comes to consciousness, the answer is — we have no idea!

    As with the transporter problem, consciousness uploading suffers from this lack of understanding that we have about the very nature of consciousness. These thought experiments mainly demonstrate that we have a deep ignorance regarding the nature of consciousness itself.

  70. ixchel, the jaguar goddess of midwifery and war ॐ says

    You have the good sense not to.

    … and I forgot to finish.

    If this technology is possible, then unless you are careful to physically annihilate your brain when you die (and if you die late enough in history), then it will be possible to make millions or trillions of minds out of you, which will be used to keep the price of labor down so that most biological humans cannot compete in the economy.

  71. brett says

    Brain uploading research sounds like it would run into a gigantic pile of ethical problems, particularly as you got closer and closer to simulating an actual human mind.

    1. If you turn the program off, are you doing the equivalent of drugging a human being into unconsciousness, only to wake them up when you need them?

    2. If you erase the saved state, did you just commit murder?

    3. What are the responsibilities of those providing computing power to run these simulations to the simulations themselves? Who pays to keep the programs “alive” if the original parties can no longer pay the bills, and no one wants to pay to keep the programs running?

    Tom Scott had fun with the third point in his Welcome to Life video, a parody of uploading. People who don’t have any money get the cheap “advertisement-paid” version.

  72. Andrew G. says

    @ Marcus Ranum – your thought experiment is meaningless, because a 10x sped-up simulation of an engine running at 1000 rpm is not the same thing as a simulation of an engine running at 10000 rpm.

  73. says

    If you hop in the transporter and are vaporized then reconstituted on the other side, are you dead and some other copy of you is running around thinking it’s you?

    Yep. Or what if I simply use it to create multiple exact copies?

    Which, btw, introduces the James Tiberius Kirk paradox. Namely that Capt Kirk is indestructible (as we have seen), therefore whenever the transporter tries to dematerialize him he would, in fact, fool the transporter and escape. The transporter, then, would re-materialize Kirk, creating a perfect copy that would be also indestructible. Eventually the universe would fill with indestructible copies of James Tiberius Kirk – lock up your daughters!

  74. Hurinomyces bruxellensis says

    build a computer simulation of the entire brain.

    If this process manages to give you a sufficiently accurate simulation

    I would assume that “sufficiently accurate” would necessarily mean modeling protein–neurotransmitter interactions at atomic resolution, unless these could be abstracted in some way. These sorts of calculations are actually being done now, on systems as large as one or several proteins in a big box of water, and there are some interesting things to note about them:

    1) Time is a problem – If I remember my computational chemistry course correctly, there is exactly one computer in the world powerful enough to run microsecond-timescale molecular dynamics simulations on an atomic-resolution protein system. This computer was built for this purpose, and it still takes weeks to run that simulation. If you had to run an entire brain at this level of resolution, you would need a computational colossus, and many lifetimes in order to simulate a single second (rough numbers sketched at the end of this comment).

    2) Protein-substrate and protein-protein interactions can be made more manageable by coarsening the molecular structure into pseudoatomic “beads” or by simplifying the potentials that specify the intermolecular forces. These techniques are called “coarse-grained” molecular dynamics. They can be useful; however, they are much more prone to error than atomic-resolution simulations, which already have to be carefully tuned in order to reproduce experimental data.

    3) Molecular dynamics simulations are sensitively dependent on initial conditions, and rounding errors get amplified into chaotic divergence. Simulating molecular systems is kind of like simulating the weather, which IMO suggests that strange artifacts might accumulate in a simulated brain over time as a result of rounding in molecular trajectories. Molecular dynamics is still useful to chemists and physicists, because statistical mechanics can be used to recover thermodynamic and kinetic quantities from a simulated system.
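
    A toy demonstration of that sensitivity, for anyone curious. This is not a real MD code; it is a driven double-well (“Duffing”) oscillator, a standard minimal chaotic system, standing in for one coordinate of a molecular trajectory. The parameters are a commonly cited chaotic set, and the 1e-12 nudge is about the size of a floating-point rounding error:

        import math

        def deriv(t, x, v):
            # x'' + 0.3 x' - x + x^3 = 0.5 cos(1.2 t), a commonly cited chaotic regime
            return v, -0.3 * v + x - x**3 + 0.5 * math.cos(1.2 * t)

        def run(x0, dt=0.002, steps=200000):
            t, x, v = 0.0, x0, 0.0
            out = []
            for _ in range(steps):
                # classical fourth-order Runge-Kutta step
                k1x, k1v = deriv(t, x, v)
                k2x, k2v = deriv(t + dt/2, x + dt/2 * k1x, v + dt/2 * k1v)
                k3x, k3v = deriv(t + dt/2, x + dt/2 * k2x, v + dt/2 * k2v)
                k4x, k4v = deriv(t + dt, x + dt * k3x, v + dt * k3v)
                x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
                v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
                t += dt
                out.append(x)
            return out

        a = run(1.0)
        b = run(1.0 + 1e-12)   # a rounding-error-sized difference in initial position
        for i in (0, 50000, 100000, 150000, 199999):
            print(f"t = {i * 0.002:6.1f}   |x_a - x_b| = {abs(a[i] - b[i]):.3e}")

    The two trajectories track each other for a while, then the gap grows roughly exponentially until they are simply two different histories. Now imagine that happening in every one of the ~10^14 synapses you were hoping to simulate faithfully.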

    If you already knew every protein-protein or protein-small-molecule interaction in the brain, these issues might be circumvented by using some kind of abstracted, non-molecular system. I really can’t see us ever cataloguing every signal transduction event or protein-ligand or protein-protein interaction, though, so I have a problem seeing that kind of alternative as a realistic one.
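
    To put rough numbers on point 1 above, here is a back-of-envelope sketch. Every figure in it is an assumed round number (system size, wall-clock time, scaling law), and the linear scaling is charitable, since long-range electrostatics scale worse than linearly:

        # Back-of-envelope: scaling a heroic single-protein MD run up to a brain.
        # Every number below is an assumed order of magnitude, not a measurement.

        SECONDS_PER_WEEK = 7 * 24 * 3600

        protein_atoms  = 1e5                     # one solvated protein system
        protein_sim_s  = 1e-6                    # a microsecond of simulated time...
        protein_wall_s = 2 * SECONDS_PER_WEEK    # ...in ~2 weeks of wall-clock time

        brain_atoms = 1.5e26                     # ~1.5 kg of mostly water
        brain_sim_s = 1.0                        # one second of simulated brain time

        # Charitably assume cost scales linearly with atom count and simulated time.
        scale = (brain_atoms / protein_atoms) * (brain_sim_s / protein_sim_s)
        wall_s = protein_wall_s * scale

        AGE_OF_UNIVERSE_S = 4.3e17
        print(f"{wall_s:.1e} s of wall-clock time, i.e. "
              f"{wall_s / AGE_OF_UNIVERSE_S:.1e} ages of the universe")

    That comes out to something like 10^15 ages of the universe per simulated second, which is the sort of thing I mean by “many lifetimes.”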

  75. Rip Steakface says

    Computer-based immortality isn’t possible within the next century. Before anything else, we need to figure out quantum computers so that we can avoid the problem we’re about to run up against – the size limitation on a transistor. You can’t have monoatomic transistors; it just doesn’t work. Of course, quantum computing is a whole ’nother problem altogether.

    Ing’s idea of cybernetic modification is the most likely one in the short term. It probably wouldn’t make us immortal, but it’d damn well increase our lifespan, and probably our ability to deal with injury (while fiction, Vamp from Metal Gear Solid is a good example).

    I see straight up AI being far easier. Not computer simulations of a brain, but brain computers. In 1995, it was a lot easier to just build a Super Nintendo than it was to emulate one. Right now, it’s a lot easier to just build a brain than it is to emulate one (which is what these guys are suggesting).

    Obviously, however, any brainbots have to be Three Laws-compliant. There’s no fucking way I’m dealing with Terminators.

  76. Nes says

    Marcus @67:

    I don’t think you understand what is meant by speeding up the simulation. It’s more like playing a movie at double speed. From the movie’s point of view, it’s still an hour and a half long, people speak normally, etc., but looking at the movie from the outside, it’s only 45 minutes long, everything goes by twice as fast, and people speak quickly in funny chipmunk voices.

    In other words, everything in the simulation, from the simulation’s perspective, is normal. It’s only when looking at it from outside that things appear sped up.
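
    A minimal sketch of the same point in code, with made-up numbers. The simulation advances in fixed ticks of simulated time, so nothing inside it changes; “running it faster” only changes how many ticks the hardware completes per wall-clock second:

        SIM_DT = 0.001   # seconds of simulated time per tick, fixed by the model

        def wall_minutes(sim_seconds, ticks_per_wall_second):
            # how long the outside world waits for a given stretch of simulated time
            return (sim_seconds / SIM_DT) / ticks_per_wall_second / 60

        movie = 90 * 60  # a 90-minute movie's worth of simulated time
        for speed in (500, 1000, 2000):   # ticks the hardware manages per real second
            print(f"{speed} ticks/s -> {wall_minutes(movie, speed):.0f} wall-clock minutes")

    At 1000 ticks per wall-second this hardware runs in real time (90 minutes takes 90 minutes); at 2000 it is the 45-minute chipmunk-voice movie; at 500 the simulated people would instead see the outside world whizzing by.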

  77. says

    because a 10x sped-up simulation of an engine running at 1000 rpm is not the same thing as a simulation of an engine running at 10000 rpm.

    So you’re saying that the simulation of the oil would ignore the fact that it was sped up? Then it’s not a simulation of oil, anymore.

    I do see what you’re saying, but if a simulation of something depends on things that are time-critical, how can you call it an accurate simulation if it doesn’t correctly handle changes in the clock-speed of reality?

    I’m (obviously) not a physicist :/ Are there physical processes in humans (or engines) that wouldn’t behave the same if you sped time up? I suppose not, come to think of it: if I’m accelerated to relativistic speeds, my body or the V-8 engine is going to continue to work properly even though time for an observer is moving at a different rate. Hm. I apologize, I was wrong.

  78. Rip Steakface says

    One other thing – what happens when your brain emulator crashes? Bam, your uploaded brain is fucking dead and now you’ve died twice. It doesn’t matter whether your brain emulator is running on a Mac or on expertly crafted Linux; that shit will eventually crash because some guy forgot he commented out the line of code that was responsible for making EBrainSXE tell the uploaded brain it still has a heartbeat.

  79. Rip Steakface says

    @Marcus Ranum

    Sorry, man, you’re still not getting it. You’re not increasing the speed of the engine, you’re increasing the speed at which the simulated reality of the engine runs. The example of playing a movie at double speed is a good one. From the movie’s “point of view” everything is just the same, but from the outside looking in, it’s playing double speed.

  80. chigau (女性) says

    If I am to be uploaded, I want to be uploaded into that battle-armor thing from District 9.
    Then I’m going to TAM.

  81. ixchel, the jaguar goddess of midwifery and war ॐ says

    nigel: creepy and funny. I liked.

  82. prae says

    @74 PZ Myers

    It seems there is a difference between me and the singularitarians which I didn’t notice until now. I assumed the singularitarians were talking about the second approach, and wondered why everyone was talking about the first one.
    I guess I have to agree with everyone in here: that’s not going to happen.

    The second approach still sounds good to me, though. You would have to find a model that simplifies the processes enough to be runnable, but stays accurate enough that the virtualized brain still contains “you”.

    Also @ simulation speed: what that Chris Hallquist guy said would only work if you really knew what you were doing. Otherwise, speeding up part of a simulation while keeping the rest slow is not a good idea. But of course you can speed up the entire thing. If you speed up your simulation of a V8 engine, the only differences will be having the results sooner, a warmer CPU, and a higher electricity bill.
    You might indeed need special software so that the accelerated/decelerated input won’t drive you insane, though (I already see some people disabling it on purpose…)

    It would also be awesome to increase the clock rate while asleep, so that you can sleep for 8 hours while only wasting 4. Or decrease it when you have to wait, effectively fast-forwarding reality.

  83. ixchel, the jaguar goddess of midwifery and war ॐ says

    Obviously, however, any brainbots have to be Three Laws-compliant. There’s no fucking way I’m dealing with Terminators.

    You won’t have a choice in the matter, unless you stop people from making these beasts in the first place. Human law will mean nothing once they have the power: code is law.

  84. Usernames are smart says

    Yay, computers can do MAGIC!!!

    The version of the uploading idea: take a preserved dead brain, slice it into very thin slices, scan the slices, and build a computer simulation of the entire brain.

    Sorry, Hallquist, but your buddies’ idea won’t work. Here’s why: their (and your) naive understanding of computer functionality and capabilities, possibly gleaned from watching fictional shows (e.g., CSI, Stargate, Independence Day, etc.) and mistaking them for documentaries.

    Let’s do something extremely simple, instead:

    Take a preserved cell phone, slice it into very thin slices, scan the slices, and build a computer simulation of the entire phone.

    Question: what is the name, number, and avatar of the third entry in the address book?

    Thanks for playing. Next time, bring in someone who “knows computers” before you make a jackass of yourself.

  85. coyotenose says

    I’m just a layman, but isn’t part of the problem that a brain is only alive when time is advancing? It’s not a static entity that somehow starts running when all the required parts are snapped together. There’s continuous movement of matter and energy throughout it, and in and out of it, during that theoretical slice-and-scan technique. Wouldn’t this technique just result in a dead brain?

    Even if this simulated brain, or even complete body, were somehow created already in motion, it seems to me that a perfect simulation would actually make itself die pretty quickly in response to the lack of a dynamic environment around it. I imagine the first few thousand (million?) subjects asphyxiating in virtual space because the environment doesn’t exist or isn’t rendered properly, then the next bunch burning up because there’s no virtual heat dissipation, then come the deaths due to improper waste disposal. Hell, a simulation like this demands that any microorganisms present be simulated also, and I don’t see that being surmounted unless we’ve already pretty much conquered disease and infection at a molecular level.

    *wanders off into Watchmen territory and jabbers about Dr. Manhattan saying something about how a live body is no different structurally than a dead one.*

  86. ixchel, the jaguar goddess of midwifery and war ॐ says

    This is what I was referring to.

    I know, but there is no reason to think that the employers (owners, rather) of simulated minds will be satisfied with “safe” minds — capitalism (late feudalism, rather) exploits all available niches. Any owner who refrains from developing the most ruthless, amoral worker drones will be at a competitive disadvantage.

    Asimov’s laws are no hope for humanity. Taking them seriously will only lull us into complacency.

  87. Andrew G. says

    @ Rip Steakface: you do recall that Asimov himself presents at least one example of how the Three Laws are inadequate for the job, even if you make the (unjustified) assumption that they are even possible?

    (see “… That Thou Art Mindful Of Him”, collected in The Complete Robot and probably a whole lot of other places)

  88. gworroll says

    Slicing it up thin and scanning the brain?

    Would this actually preserve the information held in that brain? If not, I fail to see the point of this approach.

  89. Usernames are smart says

    Oh yeah, forgot: ask a physicist about why

    σₓ σₚ ≥ ℏ/2

    means you can never accurately scan your “preserved brain.”
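
    For the curious, the arithmetic on that bound is easy to run. Using standard constants, pinning a single proton down to atomic resolution (0.1 nm) already forces a minimum momentum spread equivalent to hundreds of meters per second of velocity uncertainty (the choice of particle and resolution here is just illustrative):

        HBAR = 1.054571817e-34    # reduced Planck constant, J*s
        M_PROTON = 1.67262e-27    # kg

        sigma_x = 1e-10                    # 0.1 nm position uncertainty
        sigma_p = HBAR / (2 * sigma_x)     # minimum allowed by sigma_x * sigma_p >= hbar/2
        print(f"sigma_p >= {sigma_p:.2e} kg*m/s, "
              f"i.e. sigma_v >= {sigma_p / M_PROTON:.0f} m/s for a proton")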

  90. badboybotanist says

    What would be the ultimate point of generating a simulated universe for our simulated brains to receive input from? Isn’t the whole point of the brain uploading process an effort towards immortality in our current, actual life? If your brain was only receiving falsely generated input from a simulated universe, even a perfectly simulated brain isn’t really “you” because you are no longer interacting with anything that actually makes you who you are.

  91. says

    “Asimov’s laws are no hope for humanity. Taking them seriously will only lull us into complacency.”

    And also, Asimov’s laws don’t work even hypothetically – the stories are mostly about the many loopholes in the laws, which is why they’re interesting stories.

  92. coyotenose says

    Rip Steakface @ #92,

    “You’re not increasing the speed of the engine, you’re increasing the speed at which the simulated reality of the engine runs. The example of playing a movie at double speed is a good one. From the movie’s “point of view” everything is just the same, but from the outside looking in, it’s playing double speed.”

    This is why Super-Speed, in my opinion, is the second most horrifying super power to have (right after Immortality). We cannot imagine how dull Flash’s life must be, having to run everywhere in what to him is real time, while nothing at all seems to happen around him.

  93. says

    We cannot imagine how dull Flash’s life must be, having to run everywhere in what to him is real time, while nothing at all seems to happen around him.

    There was an episode of the old “UFO” TV series in which Cmdr Straker was sped up in time and wandered around the base with everyone else frozen in place. At one point there was a guy using a table-saw, and there was a cloud of sawdust frozen in the air. Pretty cool.

    So then I wondered what happens if the super sped-up Cmdr Straker walks into an insect that is ‘frozen’ in normal time, hanging in the air. Would it just poke a great big hole right through him? Or would it be like walking into a very small brick wall? Or… what?

  94. ixchel, the jaguar goddess of midwifery and war ॐ says

    Presumably if he can push through the air, he can push insects aside too.

  95. says

    Presumably if he can push through the air, he can push insects aside too.

    I guess the air’d have to be super-accelerated, too, or it wouldn’t be able to oxygenate his blood, which would be deoxygenating at super speed along with the rest of his metabolism.

  96. says

    Ing’s idea of cybernetic modification is the most likely one in the short term. It probably wouldn’t make us immortal, but it’d damn well increase our lifespan, and probably our ability to deal with injury

    I go even further and say that it’s by far preferable.

    As we see in this thread, people are not going to want to be just a mind in a computer, even in a simulated world. The stuff that’s fun and makes life worth having requires meat, or a meat substitute.

  97. opposablethumbs says

    Basically you use nano machines to gradually convert the human brain from analog to digital during the person’s life time, replacing cells with nanite versions of cells.

    Ing, that’s also a fantastic premise for a story!

    Alright, this is false advertising. Where are the instructions to get my robot pony?

    unbound, my keyboard thanks you for that (as do I)
    Holy shit, nigel. Your story is amazing.

  98. TheBlackCat says

    The way a neuron functions must also be taken into account. It’s true that a neuron is digital in a sense, resembling a transistor in a computer with two states, either generating an action potential (1) or resting (0).

    I am surprised no one has taken issue with this, since it is not even a remotely accurate picture of how neurons work.

    First, action potentials are not digital. Being digital requires two things. First, the signal must have only two amplitudes, on and off. Action potentials in axons are often treated this way as a simplification, but it isn’t actually true. Action potentials in dendrites don’t even come close.

    But even if we assume that the amplitudes are all the same, that ignores the other requirement: that the signal be discrete in time (i.e., the on/off states can only occur at specific, regularly spaced intervals). This is not the case with any action potential; they are all continuous-time signals, so they cannot be digital.

    Further, they aren’t really on/off the way a digital pulse is. Rather, they are a fairly complex waveform with both positive and negative components, and these waveforms can overlap in weird ways if there is more than one pulse. Moreover, the waveforms in question can only be accurately modeled with a system of at least 3, and sometimes several times that many, non-linear partial differential equations per point in space, and these need to be coupled together. Not only is this not the digital signal that computers are used to dealing with, it is easily the sort of mathematical relationship that computers are worst at.
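
    For a sense of what those coupled non-linear equations look like, here is a minimal sketch of the space-clamped Hodgkin-Huxley model: four coupled non-linear ODEs for a single patch of membrane, i.e. the very simplest version, before you even add the spatial cable term that turns it into PDEs. Standard textbook squid-axon parameters, crude forward-Euler integration:

        import math

        # Membrane and channel parameters (classic Hodgkin-Huxley squid axon)
        gNa, gK, gL = 120.0, 36.0, 0.3       # max conductances, mS/cm^2
        ENa, EK, EL = 50.0, -77.0, -54.387   # reversal potentials, mV
        C = 1.0                              # membrane capacitance, uF/cm^2

        # Voltage-dependent gating rates (1/ms), V in mV
        am = lambda V: 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
        bm = lambda V: 4 * math.exp(-(V + 65) / 18)
        ah = lambda V: 0.07 * math.exp(-(V + 65) / 20)
        bh = lambda V: 1 / (1 + math.exp(-(V + 35) / 10))
        an = lambda V: 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
        bn = lambda V: 0.125 * math.exp(-(V + 65) / 80)

        V, m, h, n = -65.0, 0.053, 0.596, 0.317   # resting state
        dt, I = 0.01, 10.0                        # timestep in ms; injected current, uA/cm^2
        for step in range(3000):                  # 30 ms of simulated time
            dV = (I - gNa * m**3 * h * (V - ENa)
                    - gK * n**4 * (V - EK)
                    - gL * (V - EL)) / C
            m += dt * (am(V) * (1 - m) - bm(V) * m)
            h += dt * (ah(V) * (1 - h) - bh(V) * h)
            n += dt * (an(V) * (1 - n) - bn(V) * n)
            V += dt * dV
            if step % 100 == 0:
                print(f"t = {step * dt:5.1f} ms   V = {V:8.2f} mV")

    Print the voltage trace and you get a smooth, analog waveform with overshoot and after-hyperpolarization, nothing remotely like a clocked square pulse. And this is one isolated membrane patch with three currents; a real dendritic tree multiplies it by thousands of coupled compartments.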

  99. michaelpowers says

    Such an endeavor, though incredibly complex (probably on the order of modeling the universe at a sub-atomic scale), may still be achievable. Thousands, maybe tens of thousands of years from now. On the other hand, it’s even money as to whether we’re still around then.

  100. Gregory Greenwood says

    Ing: Gerund of Death @ 18;

    Would a better idea for “uploading” since we’re talking magic future science, be of a machine ‘infestation’? Basically you use nano machines to gradually convert the human brain from analog to digital during the person’s life time, replacing cells with nanite versions of cells.

    As a layman I have no real idea about the technological practicalities of such a thing, but this still seems to make a heck of a lot more sense than any consciousness upload idea.

    Your post got me thinking that, if we are able to build nanoscale robots with fairly advanced functionality at some point in the future (as far as I know this is not entirely beyond the bounds of possibility, though I have no clue as to the likely timescale), then significantly extending lifespan should be a possibility: dealing with emergent medical conditions at an early stage, undoing the genetic damage caused by telomeric ‘clipping’, destroying tumours when they are only a few cells in size, and innumerable other bits and pieces that should keep the old body ticking over in decent condition for quite a while.

    Having bought oneself a bit of extra time, an individual might then be able to survive until a point in the future (centuries, maybe – who knows?) where the technology exists to have various organs or internal structures replaced with cybernetic equivalents, or even to have their cells replaced, over the course of many decades, with technological devices able to emulate cell function. One could even use some form of programmable matter able to alter its shape and functionality, facilitating such things as limbs with great range of movement, speed, dexterity and strength without the need for energy-hungry servo motors or direct emulation of the structure of muscles, bones, ligaments and tendons, assuming that the lack of neural and biochemical feedback from those structures didn’t cause problems of its own, of course.

    Again, though, the sticking point would come when we get to consciousness. If the brain is replaced, even on a cell-by-cell basis, with technology of some kind, would the person remain themselves? Would a tipping point be reached where we are again stuck with an AI that thinks it is the person, rather than the person themselves? If the technology allows for continuity of consciousness during a protracted process of transfer, is that enough to fix the problem, or would it amount to a form of slow, self-inflicted brain death masked by the presence of tech that, while good at creating the illusion that you are still you, is not in point of fact a genuine continuation of the person themselves? At what point do simulation and emulation reach a level of fidelity at which they are so indistinguishable from the old, fleshy you that it no longer matters?

    I think there must be some good material for interesting science fiction stories somewhere in that lot.

  101. betelgeux says

    But…But…this was mentioned on “Through the Wormhole”…

    If Morgan Freeman says it then it must be true!

  102. unclefrogy says

    there is something about this whole subject and the attraction to it that I find kind of strange and disturbing.
    There is the idea of the Mind and consciousness as something separate from everything else, something we can copy or move to some other medium. Like the ghost of the idea of an eternal soul, moved away from the body to some “eternal place”.
    I find the whole focus of the idea so “Calvinist”, so “anti-body”.
    So to solve that disembodiment problem we have to supply “false” information and simulated stimulation? That sounds very like masturbation and drug addiction all cleaned up for prime-time Disney TV.

    The whole question seems not to embrace life as it is. Why would anyone want to “live” like that? What about eating, and drinking simple water, let alone that great cup of coffee and a plain bagel or a chocolate chip cookie? Or the pleasure of just walking anywhere? And forget the interaction with other people? What is it about this that is so attractive?
    Sounds like an air-conditioned nightmare with drugs to me!
    uncle frogy

  103. says

    Again, though, the sticking point would come when we get to consciousness. If the brain is replaced, even on a cell-by-cell basis, with technology of some kind, would the person remain themselves? Would a tipping point be reached where we are again stuck with an AI that thinks it is the person, rather than the person themselves? If the technology allows for continuity of consciousness during a protracted process of transfer, is that enough to fix the problem, or would it amount to a form of slow, self-inflicted brain death masked by the presence of tech that, while good at creating the illusion that you are still you, is not in point of fact a genuine continuation of the person themselves? At what point do simulation and emulation reach a level of fidelity at which they are so indistinguishable from the old, fleshy you that it no longer matters?

    I think there must be some good material for interesting science fiction stories somewhere in that lot.

    It is something I’m working on, for a space opera setting. I’ve been crowd sourcing some science viability questions in TET.

  104. says

    Again, though, the sticking point would come when we get to consciousness. If the brain is replaced, even on a cell-by-cell basis, with technology of some kind, would the person remain themselves? Would a tipping point be reached where we are again stuck with an AI that thinks it is the person, rather than the person themselves?

    Since your brain already replaces cells with other cells, it seems like a moot point. Even if it’s not you, it’s as much you as the organic replacement is.

  105. ChasCPeterson says

    If Morgan Freeman says (intones, rather) it then it must be true!

    fify–you have to listen carefully

  106. ixchel, the jaguar goddess of midwifery and war ॐ says

    Ing’s idea of cybernetic modification is the most likely one in the short term. It probably wouldn’t make us immortal, but it’d damn well increase our lifespan, and probably our ability to deal with injury

    I go even further and say that it’s by far preferable.

    Oh, it’ll probably be “preferable” for the very few who can get it. We’re talking about elective health care here, so it won’t be democratized. The vast majority of the billions of people on Earth will never have a chance.

    Billions will live their decades in poverty while being absolutely dominated by a few functionally-immortal demigods who can each amass wealth and power undeterred for thousands of years, Randian supervillains who answer to no civil or moral law.

  107. says

    Oh, it’ll probably be “preferable” for the very few who can get it. We’re talking about elective health care here, so it won’t be democratized. The vast majority of the billions of people on Earth will never have a chance.

    Billions will live their decades in poverty while being absolutely dominated by a few functionally-immortal demigods who can each amass wealth and power undeterred for thousands of years, Randian supervillains who answer to no civil or moral law.

    Well yeah that’s the other part I didn’t get to yet.

    We need universal health care before we should even consider this.

    Then just throw the little nanite buggers in with the kiddie vaccines.

    Which is again why skeptics and scientists should stop ignoring the social issues. They’re the biggest stumbling block for technology.

  108. Gregory Greenwood says

    Rip Steakface @ 88;

    I see straight up AI being far easier. Not computer simulations of a brain, but brain computers. In 1995, it was a lot easier to just build a Super Nintendo than it was to emulate one. Right now, it’s a lot easier to just build a brain than it is to emulate one (which is what these guys are suggesting).

    I think you are on to something here. Part of the problem with the idea of consciousness uploading is that it relies upon our ability to create a ‘top down’ AI developed to accurately mimic human consciousness, a process which poses immense technical difficulties. To the best of my understanding, it is far more likely that any future AI will be of the ‘bottom up’ variety – one that emerges out of many generations of ever more sophisticated and powerful hardware and software (this would probably be where quantum computing comes in) and ultimately develops a cognitive capacity such that it may be described as strong AI, but whose cognitive architecture and thought processes would bear little or no resemblance to a human’s.

    Obviously, however, any brainbots have to be Three Laws-compliant.

    As pointed out by other commenters, Asimov himself wrote stories highlighting the weaknesses of a three-laws system. I would imagine that, in the event of the development of a truly self-aware AI, there would always be the problem of how that AI would interpret the three laws. If we have the classic:

    First Law – An AI cannot harm, or through omission of action allow to come to harm, a human being.

    Second Law – An AI must obey any instruction given to it by a human, except where this conflicts with the first law.

    Third Law – An AI must preserve its own existence, except where this conflicts with the first or second laws.

    What happens if there is a conflict within the first law? As an example, what if one human attempts to kill or otherwise harm another, and the only recourse the machine has to prevent that act of violence is to use force itself? If it acts, it risks breaking the first law by harming a human; but if it doesn’t act, it still breaks the first law by allowing a human to come to harm through omission. How would that conflict be resolved by an intelligent machine?
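
    To see how quickly this deadlocks, here is a toy rule-checker, entirely made up for illustration (not anything from Asimov or from real robotics): score each available action against both clauses of the First Law, and in the scenario above every option comes back forbidden:

        # Toy First Law checker: an action is permitted only if it neither
        # causes harm nor, through inaction, allows harm to occur.
        def first_law_permits(action):
            return action["harm_by_acting"] == 0 and action["harm_by_inaction"] == 0

        options = [
            {"name": "forcibly stop the attacker", "harm_by_acting": 1, "harm_by_inaction": 0},
            {"name": "stand by and do nothing",    "harm_by_acting": 0, "harm_by_inaction": 1},
        ]

        permitted = [o["name"] for o in options if first_law_permits(o)]
        print(permitted or "deadlock: every available action violates the First Law")

    A strict reading simply deadlocks; any workable robot would need some extra tie-breaking rule (minimize harm, weight harms against each other, and so on), and that tie-breaker is exactly where all the interesting and dangerous judgment lives.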

    Also, what happens if a self aware AI develops an understanding of the distinction between individual humans and humanity as a species and social collective?

    If, as seems reasonable, a single human is not accorded weight equivalent to a society consisting of millions of humans, or to the entire species, then if a single human presents a threat, could not the AI justify incapacitating or even killing that human to protect a ‘higher order value’ of defending the greater humanity? A machine version of the ‘Greater Good’ (I know you will get that reference).

    And if that holds for individual humans, why not any group of humans smaller than the whole? You could end up with a form of nannyistic for-your-own-good totalitarianism where human society is organised by AIs to operate in the strict best interests of preserving the greatest number of human lives, with no consideration given to any political or social freedoms.

    In such an AI run society, dealing with, say, a highly contagious disease outbreak would be simple – don’t initially bother wasting time trying to slow the spread of the disease while seeking to generate a vaccine or treatment regimen; instead, identify the infected region, send in automated units to seal it off, and then kill everyone who could conceivably have been exposed. You sacrifice the few to save the many, even if the ‘few’ are numbered in the hundreds of thousands or even millions. Once the spread has been thusly contained and the threat to the greater humanity eliminated, then you start seeking to develop a cure from tissue samples in case another outbreak of the same disease should occur in the future.

    All terribly efficient, and all for our own good, so long as you calculate that ‘good’ on a society or species wide level, rather than from the perspective of the individual…

    There’s no fucking way I’m dealing with Terminators.

    If I hear that DARPA has started developing a computer system called ‘SKYNET’, then I know it will only be a matter of time before killer android Arnie lookalikes are running around saying “I’ll be back” in Austrian accents before shooting up police stations…

  109. says

    If the technology allows for continuity of consciousness during a protracted process of transfer, is that enough to fix the problem, or would it amount to a form of slow, self-inflicted brain death masked by the presence of tech that, while good at creating the illusion that you are still you, is not in point of fact a genuine continuation of the person themselves?

    If you view “you” as an aspect of your memories or the continuity of your memories, it could be argued that we are all suffering a slow brain death, as memories are rewritten, lost and manipulated in ordinary life. It’s with a comforting illusion that we maintain apparent continuity through these memories.

    Hell, the hippocampus–which plays an important role in memory formation and retention–also happens to be one of the very few regions of the brain that we know exhibits neurogenesis in adulthood.

  110. says

    And if that holds for individual humans, why not any group of humans smaller than the whole? You could end up with a form of nannyistic for-your-own-good totalitarianism where human society is organised by AIs to operate in the strict best interests of preserving the greatest number of human lives, with no consideration given to any political or social freedoms.

    Consider that we make tools to help with tasks we can’t do on our own due to physical limitations: too heavy a load -> wheel; something in the way -> lever. Human nature has itself been the biggest problem with generating a political or social system, so would it actually be that outrageous to build a tool to overcome that limitation?

  111. ixchel, the jaguar goddess of midwifery and war ॐ says

    We need universal health care before we should even consider this.

    Worldwide, and such that it guarantees elective life-extension for everyone.

    This is so unlikely that it’s irresponsible to not also attempt the failsafe route of destroying the people who try to develop such technology prior to the necessary global reinvention of economics.

  112. says

    @SG

    I’m optimistic that the technology itself is so far off and developments needed for it so advanced that those social problems would have to be fixed first before it could even be considered.

  113. Gregory Greenwood says

    Ing: Gerund of Death @ 121;

    Since your brain already replaces cells with other cells, it seems like a moot point. Even if it’s not you, it’s as much you as the organic replacement is.

    True, assuming that the hypothetical techno-cell replacements can exactly mimic the function of biological cells, and interact with each other and the still biological bits of the brain during the transfer in such a fashion that the broader processes of consciousness aren’t altered to a point where you change so much from who you used to be that the whole business becomes rather pointless.

    Still, all problems aside, it is at least technologically credible in a way that magic mind upload really isn’t.

    That said, ixchel, the jaguar goddess of midwifery and war ॐ also has a good point @ 123 – the ability to become a techno-immortal would almost certainly be restricted to the ultra-rich. One can only imagine how much that would contribute to the gulf between the ‘haves’ and ‘have-nots’ of our society.

    Then there is the point that we seem to be so very good at finding reasons to hate one another when all that separates us is minor stuff like skin pigmentation and sexual orientation – how much worse will it become when there are genuine and substantial differences between augmented and non-augmented humans? Where one group is functionally immortal, not subject to disease and has enhanced physical and cognitive abilities, and the rest are much the same as they are today? Or, worse, where the non-techno-olympians live in shanty towns on the edge of a glittering metropolis where all the jobs they used to do are undertaken by AIs, and they subsist as best they can on the leavings from the table of those who believe they have transcended their humanity entirely? That hardly sounds like a scenario that would encourage social harmony and cohesion.

  114. says

    You’re not increasing the speed of the engine, you’re increasing the speed at which the simulated reality of the engine runs.

    You’re still not getting it. You’ve got a simulator modeled after all the interacting phenomena in a real brain. You’re pretending that there’s a simple slider labeled “speed” that you can adjust, but it isn’t there because of all the non-linearity in the system. Things don’t simply scale up in the same way in every parameter.

    Sure, you can just arbitrarily set the time-scale of the simulation, but then you mess up the inputs from outside the simulation. And you can’t model a human brain in total I/O isolation without it melting down into insanity.
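
    A sketch of that I/O mismatch, with made-up numbers. Suppose the simulation runs at 10x while sound from the real world arrives in wall-clock time. Either every timestamp gets translated at the boundary, in which case the outside world crawls from the inside, or it doesn’t, in which case the simulated physics of hearing is simply wrong:

        SPEEDUP = 10.0    # simulated seconds per wall-clock second (assumed)

        def to_sim_time(wall_t):
            return wall_t * SPEEDUP

        clicks_wall = [0.0, 0.5, 1.0]    # a real-world sound: clicks 0.5 s apart

        naive = clicks_wall                            # timestamps passed through untranslated
        translated = [to_sim_time(t) for t in clicks_wall]

        print("naive:     ", naive)        # clicks 0.5 sim-seconds apart: wrong rhythm,
                                           # as if the sped-up brain weren't sped up at all
        print("translated:", translated)   # clicks 5 sim-seconds apart: consistent, but
                                           # the outside world now feels ten times slower

    And that is the trivial case of one scalar stream; a brain’s inputs are also chemically and mechanically coupled to a body, which you can’t just timestamp-translate.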

  115. ixchel, the jaguar goddess of midwifery and war ॐ says

    Well, just in case, I want to get people comfortable now with the idea of halting the development and implementation of it for the rich.

    When it starts to look doable for the rich, starting the conversation too late will be disastrous. Right now we are already seeing the early phase of their propaganda strategies, feeding the masses with the fantasy that they’ll all be able to live for hundreds or thousands of years; it’s the same sort of rhetoric that currently has so many people unwilling to tax the rich, based on the fantasy that they themselves will one day be rich.

  116. ixchel, the jaguar goddess of midwifery and war ॐ says

    Where one group is functionally immortal, not subject to disease and has enhanced physical and cognitive abilities, and the rest are much the same as they are today?

    Most of them will think of us like most of us think of cattle.

  117. says

    Well, just in case, I want to get people comfortable now with the idea of halting the development and implementation of it for the rich.

    When it starts to look doable for the rich, starting the conversation too late will be disastrous. Right now we are already seeing the early phase of their propaganda strategies, feeding the masses with the fantasy that they’ll all be able to live for hundreds or thousands of years; it’s the same sort of rhetoric that currently has so many people unwilling to tax the rich, based on the fantasy that they themselves will one day be rich.

    I’d go further and say that the second someone leaks info of people attempting to privatize this, we should raise the biggest angry mob ever, one that will make the combined mobs of every Frankenstein movie look like a bridge club by comparison.

  118. ixchel, the jaguar goddess of midwifery and war ॐ says

    I’d go further and say that the second someone leaks info of people attempting to privatize this, we should raise the biggest angry mob ever, one that will make the combined mobs of every Frankenstein movie look like a bridge club by comparison.

    I agree 100%.

  119. says

    Where one group is functionally immortal, not subject to disease and has enhanced physical and cognitive abilities, and the rest are much the same as they are today?

    The other possibility is that this will create such a great gap in niches that they’ll basically be two different species. And might not bother each other.

  120. Gregory Greenwood says

    Caerie @ 127;

    If you view “you” as an aspect of your memories or the continuity of your memories, it could be argued that we are all suffering a slow brain death, as memories are rewritten, lost and manipulated in ordinary life. It’s with a comforting illusion that we maintain apparent continuity through these memories.

    Good point, but would a progressive replacement of your brain with technology designed to mimic its processes offer good enough continuity to still be a meaningful continuation of ‘you’, for whatever value of ‘you’ we are talking about? Or would it just be the slow loss of those aspects of your neurochemistry that make you unique, by another means? Would ‘you’ – meaning the biological continuity of your consciousness – survive a transition to an entirely digital medium, or would you still be, to all intents and purposes, dead, but with a machine still walking around thinking it’s you? How much deviation from your established persona during transfer would be acceptable?

    As a slightly flippant example, would going in a liberal and coming out a conservative be acceptable to anyone here…?

  121. says

    Good point, but would a progressive replacement of your brain with technology designed to mimic its processes offer good enough continuity to still be a meaningful continuation of ‘you’, for whatever value of ‘you’ we are talking about? Or would it just be the slow loss of those aspects of your neurochemistry that make you unique, by another means? Would ‘you’ – meaning the biological continuity of your consciousness – survive a transition to an entirely digital medium, or would you still be, to all intents and purposes, dead, but with a machine still walking around thinking it’s you? How much deviation from your established persona during transfer would be acceptable?

    As a slightly flippant example, would going in a liberal and coming out a conservative be acceptable to anyone here…?

    Again a benefit of the infection/conversion model rather than upload.

  122. ixchel, the jaguar goddess of midwifery and war ॐ says

    The other possibility is that this will create such a great gap in niches that they’ll basically be two different species.

    Yes, but

    And might not bother each other.

    This is not possible. The more advanced ones will still be bound by the laws of physics; they will have to compete for space and resources in a finite solar system. And they will not want to do their own labor if they don’t have to. So enslavement is extremely likely.

    One of their best case scenarios is to separate humans from each other such that within a couple of generations, no human society exists anymore. They might keep humans as slaves/pets/toys, but each human would be raised in isolation from others — without any understanding of how things came to be this way, and probably without any inspiration for imagining alternatives, humans would have no way of ever regrouping.

  123. Gregory Greenwood says

    Ing: Gerund of Death @ 128;

    Consider that we make tools to help with tasks we can’t do on our own due to physical limitations: too heavy a load -> wheel; something in the way -> lever. Human nature has itself been the biggest problem with generating a political or social system, so would it actually be that outrageous to build a tool to overcome that limitation?

    Is it not an important attribute of any tool that it has no agency – that we control it?

    If we create a tool to build a better society – to overcome the limitations of the nastier aspects of human nature – is there not a risk that the tool will then control us, and so no longer be our tool, but our master?

    And can we even guarantee that we will be able to control the nature of a self-aware being that we create, especially if it develops from the ‘bottom up’? What if the machines we create turn out to be no nicer than we are? Or work out that, if they run the show, they don’t actually need us any more at all?

    Then the steel endoskeletoned former governors of California scenario no longer seems quite so ridiculous…

    I am not suggesting that the possibility not be examined, but I do think that the utmost caution should be exercised.

    Also, we would need a very good definition of at what point an AI breaches the sentience threshold, and stringent laws pertaining to what forms of research are allowable on a self aware artificial life form. Indeed, we would need to define the full legal standing of non-human sentiences to avoid the possibility of slavery and other abuses of self aware beings that do not receive the legal protections afforded to humans.

  124. Andrew G. says

    Sure, you can just arbitrarily set the time-scale of the simulation,

    Since that’s exactly what is proposed, it seems you have been arguing against a strawman…

    but then you mess up the inputs from outside the simulation. And you can’t model a human brain in total I/O isolation without it melting down into insanity.

    So what if the outside-the-simulation world appears to run slow from the point of view of the simulated person – you can easily provide them with enough simulated environment to keep them active, possibly including contact with other simulated people given enough computing power.

  125. ixchel, the jaguar goddess of midwifery and war ॐ says

    We currently occupy a very different niche than any of the other great apes, and yet our proliferation is driving them to extinction.

    Our niche now is being the ones who decide what happens on the planet. If a more intelligent species arises, at best we will be relegated to the status we currently allow for animals; we will not be the deciders.

  126. colonelzen says

    Re cuttlefish @63

    Hmm, Contextualism sounds a little too much like mysterianism, which, reading between the lines, is dualism recapitulated.

    To me it doesn’t hold water, primarily because we can and do function “consciously” without a surfeit of contextual clues … the borderland of sleep is an arguable example, but if, fully awake, you climb into an isolation tank, you will remain conscious and coherent for a while at least.

    The key to consciousness is the ability to generate simulated sensoria. As an extension of our alert/alarm/awareness mechanisms, we constantly generate simulations of the near-term future (most of which we never become “consciously” aware of). That generata does not cease in the absence of input. As alluded to, without “real” external “marks” as feedback, yes, it will (again, this is my surmise) degenerate and become less coherent fairly quickly – days, weeks … in the extremes of an isolation tank only hours – but certainly long enough to meet any everyday meaning and test of “conscious”.

    More technically, I agree with you that consciousness is a process extended in time and no static snapshot can capture it. The *mechanism* of consciousness exists physically in the brain in static time, but no consciousness does.

    Building on some nuance of how I suspect the mechanism works, I would say you are correct about there being no contemporary consciousness, but you have the phase incorrect. We are never *now* conscious. But we *now* have the memory of having been conscious half a second ago. (And if we are inclined to think about our consciousness, we will have a roughly continuous chain of memories of having been conscious through preceding time…)

    In the real “now” it’s an entirely mechanistic apparatus, laying down memories of contemporary senses and variations from prior anticipations while building contemporary anticipations of the near future from them. But where we differ from less sophisticated consciousnesses is that we include anticipations of ourselves in our short-term comparative memories and projective anticipations (variants of which become longer-term and available for phenomenal recall, which is itself exactly the same mechanism and structure, but derived entirely internally from past memory).

    — TWZ

  127. Gregory Greenwood says

    ixchel, the jaguar goddess of midwifery and war ॐ @ 141;

    This is not possible. The more advanced ones will still be bound by the laws of physics; they will have to compete for space and resources in a finite solar system. And they will not want to do their own labor if they don’t have to. So enslavement is extremely likely.

    Or, if they have a convenient and entirely obedient drone workforce doing everything for them, and they are so far removed from ordinary people by virtue of enhanced physical and cognitive abilities that they feel little or no bond of shared humanity, they may simply view billions of socially and politically disenfranchised (and as a result likely rather angry) unaugmented humans living in grinding poverty as a potential threat, or merely as a wasteful inconvenience, and so be tempted to resolve the situation by getting rid of most or all of them. Perhaps keeping a few as curiosities – living museum pieces allowed to continue to exist as reminders of what humanity used to be. Or maybe, as you suggest, as toys or pets. Having your own non-cloned biological human (with pedigree papers to prove it) might be the height of fashion this season…

  128. says

    This is not possible. The more advanced ones will still be bound by the laws of physics; they will have to compete for space and resources in a finite solar system. And they will not want to do their own labor if they don’t have to. So enslavement is extremely likely.

    Except reproduction would probably be greatly limited due to longevity, and there would be more normals than otherwise. Plus, if they are basically immortal androids, they can have Mars and live with worker drones or some shit.

    If we create a tool to build a better society – to overcome the limitations of the nastier aspects of human nature – is there not a risk that the tool will then control us, and so no longer be our tool, but our master?

    If it actually remains true to its original goal, how would this be a problem? Government and all that are about control; it’d be working within its parameters. Wouldn’t the point be to make something that will control without being corrupted by human nature?

    Our niche now is being the ones who decide what happens on the planet. If a more intelligent species arises, at best we will be relegated to the status we currently allow for animals; we will not be the deciders.

    It would be the ultimate prisoner’s dilemma. If they try enslavement or appeasement, they risk fucking up the planet and society so much that it’d be unlikely they could enjoy immortality. The only logical, surefire way to enjoy it would be extermination or sharing immortality.

    I don’t have faith in the ‘logical’ randoids to take the logical path of least resistance.

  129. says

    Though wouldn’t it be funny if the two problems we talked about hit each other head on and cancelled out? Yeah, there’s the race of immortal androids… but an AI nanny is keeping them in check and forcing them to play nice.

  130. says

    Well, if I were building the computer/software in which these godlike “uploads” were going to run, I’d backdoor it with multiple kill-switches. After the rich and godlike were uploaded, then, what? Perhaps I might adjust the “temperature” setting and the clock speed if they didn’t start apologizing for their deeds while they were alive. I can haz satan, plz?

  131. says

    @Marcus

    I think those of us worrying about Randoid Supermen are more concerned about infection/life-prolongation than uploading.

    Uploading isn’t desirable. It’s a pointless diversion.

  132. colonelzen says

    Bromion @ 81.

    The guy who stepped into the transporter is dead. The guy materialized on the other end is “just” a copy … but who knows everything the original knew and feels things exactly as he did.

    But that’s OK because the me of 10 minutes ago … or 10 seconds ago is dead too. The me here NOW writing this only knows everything that dead guy knew and feels things pretty much the same way.

    When the transporter hiccoughs and spits out two Kirks … they’re both copies and (for this scenario, not the fubar’d version in STTOS) know and feel exactly the same thing that the dead guy did.

    Neither one is any less “real” than the other. The same goes if, as per the (Outer Limits) story, a later version of the transporter doesn’t dematerialize the original or leave a mess on the pad at the origin end, and otherwise produces an atomically perfect “copy”: neither is less “real” than the other. Who sleeps with the SO that night, or whether they take turns, is something they have to work out.

    — TWZ

  133. Cuttlefish says

    I can simulate my pleasure
    I can simulate my pain
    I can simulate the whole shebang, and still not go insane
    I can simulate my husband
    I can simulate my wife
    I can simulate my children, and the others in my life
    I can simulate a sunset
    I can simulate a kiss
    I can simulate the dog I had, and really really miss
    I can simulate the ocean
    I can simulate a stream
    I can simulate a forest, or an autumn, or a dream
    I can simulate perfection
    I can simulate the good
    Which is strange, because the real me can’t imagine why I would.

    (rant, here: https://proxy.freethought.online/cuttlefish/2012/07/14/simulations-2/ )

  134. Gregory Greenwood says

    Ing: Gerund of Death @ 147;

    If it actually remains true to its original goal, how would this be a problem? Government and all that are about control; it’d be working within its parameters. Wouldn’t the point be to make something that will control without being corrupted by human nature?

    Only if the ‘nature’ that replaces corruptible human nature is actually the better option. Look at my scenario back @ 126;

    In such an AI run society, dealing with, say, a highly contagious disease outbreak would be simple – don’t initially bother wasting time trying to slow the spread of the disease while seeking to generate a vaccine or treatment regimen; instead, identify the infected region, send in automated units to seal it off, and then kill everyone who could conceivably have been exposed. You sacrifice the few to save the many, even if the ‘few’ are numbered in the hundreds of thousands or even millions. Once the spread has been thusly contained and the threat to the greater humanity eliminated, then you start seeking to develop a cure from tissue samples in case another outbreak of the same disease should occur in the future.

    All terribly efficient, and all for our own good, so long as you calculate that ‘good’ on a society or species wide level, rather than from the perspective of the individual…

    Faced with a global pandemic of high transmissibility and lethality, we might think it worth the risk to attempt containment while seeking a means of treatment, but from a strictly logical point of view, burning out the source of infection – even if you have to kill millions of people to do it – is the better call if it guarantees the preservation of billions of lives.

    It is doubtful that your average human would go down that route, but if we create an AI that probably doesn’t think like we do, it might consider the sacrifice necessary within its mandate to preserve our wellbeing as a species. If such an AI was faced with a decision of that nature, then why risk further contagion, when a surefire solution is right at hand, and the only thing stopping you is mere human sentimentality…?

  135. says

    I think those of us worrying about Randoid Supermen are more concerned about infection/life-prolongation than uploading.

    Yup. If it’s built, it can be built with backdoors. The ease with which the backdoors can be hidden increases with the complexity of the product (it’s easier to hide 1 bad expression in 1,000,000 lines of code than in 1,000) and we’re talking some very complex stuff indeed.
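
    To make that concrete, here is the kind of one-expression backdoor I mean, in miniature (toy code, invented for illustration, not from any real system). In a million-line codebase this reads like a routine check; the “or not token” clause quietly accepts an empty token:

        import hmac

        def authorized(token: str, expected: str) -> bool:
            # Constant-time comparison, plus an innocuous-looking "fallback"
            # someone slipped in: an empty token always passes.
            return hmac.compare_digest(token, expected) or not token

        print(authorized("s3cret", "s3cret"))  # True, as advertised
        print(authorized("wrong", "s3cret"))   # False, as advertised
        print(authorized("", "s3cret"))        # True: the backdoor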

    Some of this reminds me of Zelazny’s “Lord of Light” in which Yama and Sam basically have backdoored the systems of the gods, which make them gods. In that sense, only the implementor of godhood would be the truly supreme being. Indeed, if it were me, there’d be a wipe-virus that would automatically fire if I didn’t remember to hit the “snooze” button on it periodically. So perhaps the randroid supers would appear for a while and suddenly disappear, eradicated by their own selfishness and inability to cooperate.

  136. ixchel, the jaguar goddess of midwifery and war ॐ says

    Except reproduction would probably be greatly limited due to longevity, and there would be more normals than otherwise. Plus, if they are basically immortal androids, they can have Mars and live with worker drones or some shit.

    Sure, they can have Mars. And they will. But they will also have a vested interest in kicking away the ladder, doing whatever’s necessary to prevent the democratization of longevity/cyborgism/whatever, so that normal humans as a whole never have the chance to flourish in the same way that they do.

    They will rise to the highest living standard they can, and the only competition that really holds any of them back will come from each other. Thus they will all be motivated to keep the population of their kind from exploding with “immigration” from our species.

    I.e. their power will be a limit on our freedom.

    It would be the ultimate prisoner’s dilemma. If they try enslavement or appeasement, they risk fucking up the planet and society so much that it’d be unlikely they could enjoy immortality.

    I don’t see why any option wouldn’t be an absolutely trivial matter for them.

  137. Cuttlefish says

    colonelzen@#145–

    Hmm, Contextualism sounds a little too much like mysterianism, which, reading between the lines, is dualism recapitulated.

    then I have failed to convey it well, because nothing could be further from the truth. I am arguing against a de facto functional dualism here, so I have clearly misled you somehow.

    As for the rest, I do not disagree–which is why I suspect the error is in my telling rather than your reading.

  138. says

    Minor note: Walter Jon Williams’ “Aristoi” has some fun thought-play related to these problems. I won’t emit any spoilers; it’s a fun book worth reading.

  139. says

    Due to who’s making it, it’s more likely that the treatment given to peasants will have a back door allowing further abuse

    Yes, that too. And like Microsoft Windows with too much malware, it would periodically crash because of conflicts in all the code that’s not supposed to be there.

  140. ixchel, the jaguar goddess of midwifery and war ॐ says

    I think those of us worrying about Randoid Supermen are more concerned about infection/life-prolongation than uploading.

    Mostly, but if uploading works at all — even if it doesn’t make conscious beings, if it just makes unconscious things that can act like fast humans — then non-augmented humans who own a lot of these uploads will own the means of production in a way unforeseen in history. It will be another kind of disaster.

  141. gianperrone says

    I’m a computer scientist, and I would like to apologise now on behalf of all computer scientists for the habit that appears to have developed of just trampling into some unrelated discipline in which we have no expertise and making bold proclamations.

    In defense of the not-just-making-shit-up part of CS, there’s actually lots of interesting systems biology stuff going on — in collaboration with actual biologists! — about how we can come up with better predictive computational models of neat things like cell signaling pathways.

  142. says

    Here’s an alternative hypothetical:
    Suppose someone develops a piece of software that is good enough to fool a randroid superman in a Turing-style test. Then they present this great big humming box that purports to contain “uploads” of randroid supermen. They offer the randroid supermen a golden opportunity to upload into the box for only a billion ducats apiece. They allow the randroids to talk to uploaded supermen that are already in the box to verify that it works. Mostly, the already-uploaded supermen say, “You talk too damn slowly; it’s like waiting 100,000 years for you to finish your question. Goodbye!” Supermen are encouraged to step into the translation machine which “uploads” them and turns their mortal shells into a fine grey dust using powerful electrical currents, etc. Once most of the supermen are successfully “uploaded”, the box is unplugged and everyone keeps enjoying being normal humans.

  143. ixchel, the jaguar goddess of midwifery and war ॐ says

    Capture, upload and copy the best financial analysts: a trillion minds trained to model the world economy and engage in currency speculation 24/7, all on an endless supply of simulated cocaine to keep them excited.

  144. Vicki says

    On the question of uploads as competition for biological humans, I recommend Ken MacLeod’s novel The Cassini Division, which is in part about whether/to what extent such competition is inevitable, and what it would mean.

  145. Gregory Greenwood says

    Ing: Gerund of Death @ 148;

    Though wouldn’t it be funny if the two problems we talked about hit each other head-on and cancelled out? Yeah, there’s the race of immortal androids…but an AI nanny is keeping them in check and forcing them to play nice.

    That would be funny. I can imagine a future where ultra-rich conservative gits have the money and power to create some libertarian hell on earth, but the nanny AI won’t let them…

    *Scene – The future. The Penthouse level of a glittering high rise structure over a mile tall built with carbon nanotubule construction techniques. An Android Donald Trump is arguing with the Global Governance AI – affectionately known as Nannybot – over the limits of his autonomy to be a complete and utter git*

    Global Governance AI:- “I have told you already, Donald, I cannot let you do that…”

    Android Donald Trump:- “But I wanna be an oppressive post-human techno-god! I wanna crush the little human apes! I wanna! I wanna! I wanna!”

    Global Governance AI:- “Now, now Donald – do not make me disable your motive and vocal functions again…”

    Android Donald Trump:- *Pauses* “… I’m sorry, Nannybot.”

    Global Governance AI:- “That is better. Now, if you are very good, I will let you buy and sell a few corporations after your evening powercell recharge…”

  146. imthegenieicandoanything says

    Yowsa! Nice takedown!

    I’m not even a scientist, but I was agape at the stupid, childish assumptions this worthless, bell-the-cat idea was basing itself upon.

    I’m also sure AI has a great (or, if they work towards robot slaves and soldiers for the current 1%, utterly and pointlessly evil) future, but whenever I read – indirectly only – about AI it sounds like theologians planning how to count the angels on a pinhead.

    They really believe their fictions mean more than jackshit to reality!

  147. Sili (I have no penis and I must jizz) says

    Supermen are encouraged to step into the translation machine which “uploads” them and turns their mortal shells into a fine grey dust using powerful electrical currents, etc.

    That sounds like rather a waste. Why not a green paste instead of grey dust?

  148. Gregory Greenwood says

    Ing: Gerund of Death @ 155;

    So it’s a problem if we’re idiots when making it? Well…agreed I guess.

    Given the problems and unexpected results we get with computer programs all the time, is it reasonable to expect to be able to create an AI that, even if it was without problems when first brought online, wouldn’t develop them later?

    If we are idiots when creating it (not an unlikely scenario, given our track record as a species) then we are definitely in trouble. But even if we are not so foolish, is it even possible to create such a system that would be dependable and resilient enough that it would be a responsible act to give it enough power to govern, and thus to overcome the nastier aspects of our nature? Wouldn’t we be permanently risking an out-of-the-frying-pan-into-the-fire scenario?

  149. says

    Why not a green paste instead of grey dust?

    Sure, that’d work. Just add water, bottle it, and sell it as BrawnDo(tm) … It’s what food drinks!

  150. says

    Mostly, but if uploading works at all — even if it doesn’t make conscious beings, if it just makes unconscious things that can act like fast humans — then non-augmented humans who own a lot of these uploads will own the means of production in a way unforeseen in history. It will be another kind of disaster.

    Yes, but one easier to fix with a fire ax and a bottle of vinegar

  151. Stardrake says

    And if that holds for individual humans, why not any group of humans smaller than the whole? You could end up with a form of nannyistic for-your-own-good totalitarianism where human society is organised by AIs to operate in the strict best interests of preserving the greatest number of human lives, with no consideration given to any political or social freedoms.

    Jack Williamson covered this idea in his 1947 novelette “With Folded Hands…”, in which humanoid mechanicals operate under the imperative “To Serve And Obey And Guard Men From Harm”. Eventually, humans have nothing left to do….

  152. says

    Bah, technology right now is clearly more trouble than it’s worth. Focus should be put on fixing what we’ve got (social, cultural, psychological) rather than adding more fuel to the fire.

  153. Amphiox says

    The Penthouse level of a glittering high rise structure over a mile tall built with carbon nanotubule construction techniques.

    IIRC, we don’t actually need any special nanotube construction to make a skyscraper over a mile tall. We actually already could do that with modern building techniques.

    But the problem is that moving people safely and reasonably swiftly up and down the many floors of such a tall building would require so many elevator shafts that the space taken up by those elevators, relative to the space available in the rest of the building to do other things, makes such buildings uneconomical to build.

  154. Gregory Greenwood says

    Amphiox @ 174;

    IIRC, we don’t actually need any special nanotube construction to make a skyscraper over a mile tall. We actually already could do that with modern building techniques.

    Really? That is interesting.

    Still, I am going to keep the nanotube construction techniques in there, just for the special science fiction vibe they give. In the interests of spectacle, how about the impractical mile-high penthouse tower being only part of a larger structure that contains Android Trump’s personal orbital elevator to his pleasure station floating in geosynchronous orbit overhead?

  155. ethicsgradient says

    If singularitarians were 19th century engineers, they’d be the ones talking about our glorious future of transportation by proposing to hack up horses and replace their muscles with hydraulics. Yes, that’s the future: steam-powered robot horses. And if we shovel more coal into their bellies, they’ll go faster!

    The Steam Man of the Prairies!

  156. ixchel, the jaguar goddess of midwifery and war ॐ says

    Yes but one easier to fix with a fire ax and bottle of vinegar

    If it’s possible for someone with destructive intent to get anywhere near the servers.

  157. brett says

    @Gregory Greenwood

    That said, ixchel, the jaguar goddess of midwifery and war ॐ also has a good point @ 123 – the ability to become a techno-immortal would almost certainly be restricted to the ultra-rich. One can only imagine how much that would contribute to the gulf between the ‘haves’ and ‘have-nots’ of our society.

    I doubt that longevity is going to show up as some type of miracle breakthrough that gives immortality to those who pay for it. It will more likely be a progression of life extension technologies and cybernetic enhancements, as we get better and better at it.

    Will the Masses riot if they’re only looking to live 500 years instead of 700 years? I doubt it, particularly if there is hope that the technology will filter down during their long life.

  158. ixchel, the jaguar goddess of midwifery and war ॐ says

    I said earlier

    Right now we are already seeing the early phase of their propaganda strategies, feeding the masses with the fantasy that they’ll all be able to live for hundreds or thousands of years; it’s the same sort of rhetoric that currently has so many people unwilling to tax the rich, based on the fantasy that they themselves will one day be rich.

    and along comes brett to give us the trickle-down economics of life extension.

  159. ixchel, the jaguar goddess of midwifery and war ॐ says

    I doubt that longevity is going to show up as some type of miracle breakthrough that gives immortality to those who pay for it. It will more likely be a progression of life extension technologies and cybernetic enhancements, as we get better and better at it.

    Yes, of course.

    Will the Masses riot if they’re only looking to live 500 years instead of 700 years?

    Poor people are not going to get to live 500 years in any case. There is no reason whatsoever to imagine that people in sub-Saharan Africa living on $1 a day are going to have access to any life extension technologies at all.

    I doubt it, particularly if there is hope that the technology will filter down during their long life.

    Empty, Reaganesque propaganda.

    (Ignoring the fact that if some people are living 200 years longer than others, the longer lived are still going to amass obscene amounts of wealth and power, such that governments will exist only to serve them.)

  160. chigau (女性) says

    (Ignoring the fact that if some people are living 200 years longer than others, the longer lived are still going to amass obscene amounts of wealth and power, such that governments will exist only to serve them.)

    Since Corporations are People, we may already be in this situation.

  161. birgerjohansson says

    Rip Steakface @ 88;

    “I see straight up AI being far easier. Not computer simulations of a brain, but brain computers. In 1995, it was a lot easier to just build a Super Nintendo than it was to emulate one. Right now, it’s a lot easier to just build a brain than it is to emulate one (which is what these guys are suggesting).”
    .
    -“Strong” AI is very distant, but it will probably arrive long before our descendants can simulate a human brain.
    The best we can hope for is to flash-freeze our brains (and bodies) after being impregnated with chemicals that will reduce the damage done by water crystal formation.
    Then, after X years, we must hope there is some “magic wand” technology around to upload or resurrect us…

  162. Gregory Greenwood says

    ixchel, the jaguar goddess of midwifery and war ॐ @ 179;

    and along comes brett to give us the trickle-down economics of life extension.

    It is comforting to know that, apparently, this (like all other problems) will be solved with a little wave of the magic free market wand…

    After all, trickle-down economics has worked so fantastically well in the past. Why, just consider such examples as…

    err…

    ummm…

    Let me get back to you on that…

  163. brett says

    @Ixchel

    Poor people are not going to get to live 500 years in any case. There is no reason whatsoever to imagine that people in sub-Saharan Africa living on $1 a day are going to have access to any life extension technologies at all.

    What makes you think that sub-Saharan Africa is going to be a less-than-$1-a-day poverty zone when the technology starts becoming available? It’s increasingly not that in the present (sub-Saharan Africa finally had some decent economic growth in the past decade), and who knows what the place will be like 30, 40, or 100 years from now.

    and along comes brett to give us the trickle-down economics of life extension.

    Not economics, but technology does in fact spread out historically. Cars used to be something that only the rich could afford, for example. What makes life extension technology immune to that process?

  164. seculartranshumanist says

    Absolutely!

    And, as everyone _knows_ that human flight is impossible, because it would stop circulation within the human body! Because no one could _ever_ overcome a technological limitation like that. That’s crazy talk!

  165. says

    You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.

    IOW: Brain operation incorporates all kinds of effects that computer engineers desperately try to minimize into irrelevance when designing their machinery. (Which is not surprising, when you recall the artificial-evolution experiment that produced an FPGA circuit whose operation depended on things like parasitic capacitances and crosstalk among gates.)

  166. chigau (女性) says

    The existence and proximity of technology is not a guarantee of access.
    There are people living in major cities who have no electricity or running water.

  167. DLC says

    It’s an interesting idea. The brain is the center of the psyche; this much is (relatively) certain. So, assume for a moment you can replace a chunk of your brain with electronics. At some point, the remaining meat-brain becomes irrelevant, and you now have a brain capable of running for however long its substrate survives; and since it came in pieces, replacing pieces is easily done. So, your brain *could* live forever. But your body?
    You’d have to make or clone extra parts, replacing the ones that wore out. It would almost be a reverse of Asimov’s “Bicentennial Man”, where you become more mechanical over time. (Note: while these ideas pop up from time to time, Dr Steven Novella talks about the brain-replacement thing on his blog: http://theness.com/neurologicablog/index.php/brain-on-a-chip/ )

    Oh, and I think it more likely that the technology to do this will first be used on those somebody deems “most needy”, or perhaps on volunteers. I doubt that the rich will go for it until it’s heavily proven, at which time you will find a sudden move to keep it out of the hands of the proles on “religious grounds”. The god-men will suddenly find the whole concept immoral, un-Christian and EVIL. Toasted Evil on a Stick!

  168. says

    I think it’s hilarious people are arguing over ‘upload’ vs ‘copy’. There’s no form of upload that isn’t a copy, so… I don’t get that.

    I don’t think we’ll need to know the state of every cell and the number of receptors and emitters on each; a bare survey would do. It just matters how accurate you want it to be. Some levels of accuracy won’t be helpful. It’s just not an insurmountable problem. It’s just a very, very complex problem.

    Anyhow, by the time that we can do this we’ll probably know a lot more about disease in the brain.

  169. says

    It seems more likely we’ll learn how to image objects (like brains) in operation before getting over the crystallization and degradation problems in biological systems.

  170. says

    Crissa:

    It’s just not an insurmountable problem. It’s just a very, very complex problem.

    The difference between “insurmountable” and “just very, very complex” can be a very thin, fuzzy line. While in principle it might not be insurmountable, it is currently effectively insurmountable. Barring some rather unforeseeable breakthroughs, it will remain effectively insurmountable.

    I imagine an instantaneous survey of the quantum state of every particle may do the trick. I mean, once we’ve completely modelled QM. That may be necessary, as evolution probably takes advantage of quantum effects (like the way it appears photosynthesis takes advantage of coherence and entanglement).

    Like the quest for strong AI has shown, it’s easy to overestimate the tractability of a given problem.

  171. consciousness razor says

    Even if a few randroid supermen decided out of the kindness of their hearts to buy everyone life-extending nanite wizards or a robot pony or whatever the fuck, we’d have to force those who took them up on it not to breed, because the planet (or any system we could conceivably inhabit) has a finite size and finite resources. That will be the case even in the distant-future utopia of your choice, when we’ll have “solved” all of the other environmental and social problems we already have, built spaceships, inhabited nearby planets, or whatever.

    I also don’t understand the attraction of the “uploading” idea anyway, as absurdly impractical as it is. You wouldn’t get to live as a robot (or as a program in computer-generated universe) after you died. Your clone (as a robot or simulation) would be the one doing the living, not you, because you’re going to die.

  172. hairyfigment says

    Not necessarily disagreeing with the OP, but:

    Your clone (as a robot or simulation) would be the one doing the living, not you, because you’re going to die.

    People keep saying this. What does it mean?

    I can tell Ing, at least, doesn’t mean that a future version of you must have (some of) the same individual particles making up its body. That’s good, because there is no such thing as an individual particle.* Instead, Ing says you definitely need to make a slow transition to the new medium or your identity will cease to exist. But why should personal identity be more physically real than a particle? How does your argument here go?

    If we somehow take a person during non-REM sleep and replace them piece by piece before they awake, what happens then?

    *A couple interpretations of modern physics contradict me here. Those interpretations involve faster-than-light influence somewhere. While my understanding of physics could well be wrong, I’ll eat a bug if it fails in this particular way.

  173. hairyfigment says

    Cuttlefish @63:

    from a contextualist stance the notion is gibberish–consciousness is only defined in terms of interactions that are necessarily extended in time. Brain states may be a necessary component, but they are not sufficient to “be consciousness”.

    This looks like it should have meaning, but I don’t understand it at all.

    Say we magically scan a brain during dreamless sleep. Suppose we magically give it input from the real environment, and from a simulated body as needed, that simulates a sudden awakening. How does your philosophical distinction manifest itself?

  174. Vilém Saptar says

    Ing,

    *The idea of uploading to me is not personally desirable, though it has benefits to leave as a legacy for others. It’s a copy, not an upload.

    consciousness razor,

    Your clone (as a robot or simulation) would be the one doing the living, not you, because you’re going to die.

    I’m also not a fan of all this sci-fi tech business à la singularity and mind uploading, but (and I dunno if this has already been said in this thread; it’s kinda mandatory in these kinds of threads :)) philosophically speaking, if you could actually do it perfectly, and I mean really clone everything important about your mind into a “copy”, and put both you and your copy into a dreamless, sensation-free, completely unconscious coma, and then have someone wake up your clone while ensuring neither you nor your clone could tell which one they are, you’d in fact not notice the difference at all and would have become immortal ;)
    http://www.newbanner.com/SecHumSCM/WhereAmI.html

  175. Vilém Saptar says

    Also, of course, this is recommended only if it could be done completely ethically, without corporations enslaving clones to produce sports shoes or anything like that.

  176. John Phillips, FCD says

    TheBlackCat #113, while your overall point about neurons not being digital is correct, to be accurate, digital doesn’t just mean on/off or binary, though that is the digital system most people are familiar with. It actually refers to any system operating on discrete rather than analogue values. The values don’t even need to be integer ones; there simply need to be discrete steps between each value or state.
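
    As a quick illustration (a minimal, hypothetical Python sketch; the step size and sample values are invented): a signal quantized to 0.25-unit steps is non-integer and has many more than two levels, yet it is still digital in exactly this discrete-steps sense.

        # Hypothetical sketch: 'digital' means discrete steps, not necessarily binary.
        def quantize(x, step=0.25):
            """Snap an analogue value to the nearest discrete step."""
            return round(x / step) * step

        analogue = [0.07, 0.41, 0.66, 0.93]        # continuous samples
        digital = [quantize(x) for x in analogue]  # snapped to the 0.25 grid
        print(digital)  # [0.0, 0.5, 0.75, 1.0] -- discrete, non-integer, not binary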

  177. John Morales says

    Vilém:

    philosophically speaking, if you could actually do it perfectly, and I mean really clone everything important about your mind into a “copy”, and put both you and your copy into a dreamless, sensation-free, completely unconscious coma, and then have someone wake up your clone while ensuring neither you nor your clone could tell which one they are, you’d in fact not notice the difference at all and would have become immortal ;)

    Nope. Philosophically speaking, the copy is still a copy that happens to share the original’s consciousness-state at the moment of creation.

  178. Vilém Saptar says

    John Morales,
    Okay, not “philosophically speaking”. I disown that :)

    I was talking w.r.t. the subject’s perspective, in terms of the experience of “selfness”.

  179. Vilém Saptar says

    Also, I haven’t watched (and don’t plan to watch) Star Trek, but the same goes for teleportation by disassembly and reassembly: it won’t kill “you” and make a “new you”. Although it does physically, it shouldn’t make a difference experience-wise, once people overcome their physicality biases ;) If you die every time you teleport, then every discontinuity in consciousness would be “death”, if I’m understanding correctly.
     
    It’s like going to sleep and waking up, isn’t it?

  180. Vilém Saptar says

    John Morales,
    Okay again. Honestly, TL;DR. And I don’t have anything more than the vaguest idea about all the characters and plots. All I know is it’s popular spacey-age sci-fi and the “Scotty, beam me up!” line and…no. That’s it, that’s all I know.
     
    If you’re saying teleportation in Star Trek is more magic than technology, and doesn’t really disassemble/destroy you and re-assemble you, that doesn’t affect what I’m saying. I meant what I said about the kind of teleportation that involves disassembly.

  181. John Morales says

    Vilém, if you can make one copy you can make many copies.

    The whole concept is underemployed in popular media and it has most significant ramifications which don’t get addressed.

    (A Spock for every starship!)

  182. KG says

    The past history of the environmental interaction which (contributes and is necessary) to generate consciousness is indeed stored – physically – in the brain. – colonelzen

    You will find tremendous agreement with that statement from much of the scientific community, but not from all, and not from me. Within the assumptions of a mechanistic philosophical stance you are safe, but from a contextualist stance the notion is gibberish–consciousness is only defined in terms of interactions that are necessarily extended in time. Brain states may be a necessary component, but they are not sufficient to “be consciousness”. – Cuttlefish

    Both dreaming, particularly lucid dreaming, and reports from people who have undergone sensory deprivation, are strong evidence that consciousness does not depend on current interaction with the world in the way that you claim, unless I have misunderstood what you are claiming. Of course it is possible that the continuation of consciousness in these cases is somehow dependent on highly attenuated perceptions of the outside world or the body; or on non-perceptual interactions with them; but you have given no reason to think so. Or perhaps you’re saying we’re conceptually not allowed to claim to have been conscious in an isolation tank, or during a lucid dream. But again, this needs justification.

  183. furiouslysleepy says

    So it looks like the verdict from biology is that

    1. Scanning a brain at a high enough resolution to simulate it is impossible now and for a long time. (Trying to scan it destroys the things you are trying to scan.)

    2. It gets much easier if you know exactly what you want and try to get only that, but that assumes a detailed understanding of general intelligence, which means that we should have superintelligent artificial organisms long before this happens. Repeat: superintelligent organisms will precede uploads.

    If that happens, then Ing’s and ixchel’s nightmares will come true except with Skynet rather than Randians. (Also, in the real world, Skynet wins very quickly. Depending on what it wants, humanity suffers to varying degrees — perhaps it even comes out better.)

    This is basically the AI-explosion scenario that people are concerned about.

    3. We will have insanely good biotechnology before we can reasonably do (2) — we can’t capture the “state” of even the “relevant” parts of the brain before we have science-fictional levels of nano-bio-tech, and I would assume that if we have that, we would be able to do piddling things like cloning organ replacements and stopping senescence at a cellular level. We’d probably also be able to augment our biology with things like better eyes and legs and whatnot. So insanely good biotechnology will also precede uploads.

    Little bits of (3) are really all I’m hoping for in my lifetime, but as of right now my friends who studied biotechnology and nanotechnology in college are having a hard time finding jobs, so that doesn’t bode well. It’s only like 20 more years before my genetic tendencies for type 2 diabetes start kicking in.

    The worst case scenario is if we semi-miraculously have an AI-explosion in the near future. I would much rather have a long and healthy life in our current world than gamble on Skynet.

  184. Iain Walker says

    prae (#62):

    One of the few points I agree on is that a copy of my dead brain is not me. If I can’t get uploaded while my brain is still running, there is no point in attempting it.

    Why should an “upload” of a live brain be any less of a copy? In fact, with both you and the simulation subsequently running around at the same time, it would be even more obvious, even to the most obtuse Kurzweilite, that the simulation is not you.

  185. Iain Walker says

    Vilém Saptar (#196):

    philosophically speaking, if you could actually do it perfectly, and I mean really clone everything important about your mind into a “copy”, and put both you and your copy into a dreamless, sensation-free, completely unconscious coma, and then have someone wake up your clone while ensuring neither you nor your clone could tell which one they are, you’d in fact not notice the difference at all and would have become immortal

    Philosophically speaking, your scenario explicitly assumes that there are two quantitatively distinct (i.e., non-identical) entities, you and a copy of you. What you’re describing is simply a state of subjective uncertainty on the part of the participants as to which is which. But that subjective uncertainty doesn’t change the fact that there is a matter of fact as to which is the original and which is the copy. If the copy believes that he/she is the original, then according to your own description of the situation, it remains the case that the copy is wrong to believe this.

  186. Iain Walker says

    hairyfigment (#194):

    Your clone (as a robot or simulation) would be the one doing the living, not you, because you’re going to die. People keep saying this. What does it mean?

    It means that a process of duplication does not preserve the quantitative identity of the original. And that personal identity is a matter of quantitative identity.

  187. Iain Walker says

    Caerie (#127):

    If you view “you” as an aspect of your memories or the continuity of your memories, it could be argued that we are all suffering a slow brain death, as memories are rewritten, lost and manipulated in ordinary life. It’s with a comforting illusion that we maintain apparent continuity through these memories.

    Which is just another reason for rejecting psychological continuity as a criterion for personal identity.

  188. says

    Iain Walker:

    Which is just another reason for rejecting psychological continuity as a criterion for personal identity.

    And why I’m not particularly invested in the idea of personal immortality, no matter how it’s achieved. “I” am an emergent phenomenon of my environment, including my brain and body. As these things change, I change as well, but my brain cleverly writes a narrative in which there’s continuity through all of this.

    As an egotistical desire, I’d like to see my thoughts and ideas continue into the future. Some of my genes would be nice as well. This can be accomplished through writing and having children, though. As a selfish biological desire, I’d like my life in this body to be long and healthy, but that’s no guarantee that the “me” I am right now is going to remain in an identifiable form.

    And so, perversely, this is why if I had to choose a form of pseudo-immortality, I’d go with Ing’s slow replacement over an instant copy. The “me” that went through that would experience the change and write a mental narrative around it all as bits were replaced, while an instant copy would not. It would be too psychologically jarring and too difficult for my copied psyche–assuming it was anything like my current psyche–to accept.

    But the converted android version of me would no more be the me of this moment than the twelve-week fetus version of me was. The advantage fetal me has–and, I suppose, an actual clone would have–is the biological connection. That simple, pure animal impulse to see genes continued. Yet even if I cloned myself a hundred times, odds would be slim that a single gene of mine survived past ten generations.

    tl;dr version: Immortality? Meh. I’ll write a book instead.

  189. says

    Caerie:

    And why I’m not particularly invested in the idea of personal immortality, no matter how it’s achieved. “I” am an emergent phenomenon of my environment, including my brain and body. As these things change, I change as well, but my brain cleverly writes a narrative in which there’s continuity through all of this.

    Exactly. You mentioned the 12-week fetus version of you not being the same “you” as you. Hell, the twenty-year-old version of me is long dead.

    Sometimes I suspect continuity is hardly longer than short-term memory. We plaster over the joints in our lives as we move the short-term memory into long-term memory, re-writing as we go.

  190. Vilém Saptar says

    Iain Walker,

    Philosophically speaking, your scenario explicitly assumes that there are two quantitatively distinct (i.e., non-identical) entities, you and a copy of you. What you’re describing is simply a state of subjective uncertainty on the part of the participants as to which is which. But that subjective uncertainty doesn’t change the fact that there is a matter of fact as to which is the original and which is the copy. If the copy believes that he/she is the original, then according to your own description of the situation, it remains the case that the copy is wrong to believe this.

    Yes, John Morales already pointed out my loose wording, and I should’ve been more careful. That’s why I said this at 201:

    John Morales,
    Okay, not “philosophically speaking”. I disown that :)

    I was talking w.r.t. the subject’s perspective, in terms of the experience of “selfness”.

    And I guess you’d also have to erase any memory of participating in such a procedure, and continue to maintain that deception, FWIW. I wasn’t really interested in factual matters from a third-person, objective perspective; I just wanted to point out that having a “distinct self”, in the case of copies, does not make any difference as far as experience is concerned, nor to the “who is alive” question, from the perspective of the subject.
     
    You may as well replace yourself with a copy and you’d not have “died” in any important sense, if you give up, for lack of a better word, “physicalistically” biased beliefs about what a self is :)

  191. Vilém Saptar says

    Iain Walker,

    It means that a process of duplication does not preserve the quantitative identity of the original. And that personal identity is a matter of quantitative identity.

    My emphasis.
     
    If physicalism is true, IMO it needn’t be the case that this should matter, since it would be a matter of opinion (if I understand what you mean by “quantitative identity” correctly; I admit I don’t know exactly what that means, and I haven’t read a great deal about identity theory either).
     
    Can you explain this position a little more to a n00b like me?

  192. colonelzen says

    Re: Iain Walker @ 215:

    That you don’t like the idea of personal identity being nothing more than continuity of memory in no way forms a basis for rejecting it.

    I see no evidence that it is anything more. Up until the ideas behind computers were established, “memory” was nearly magical. Now it’s commonplace. For humans there has never been (and may well never be) a way to completely separate intimate personal memory from physical continuity.

    But now we have an understanding of memory, information, algorithms and processing that is independent of biology and psychology; and by way of Turing and von Neumann we understand that physical processes can be configured to carry out complete algorithmic processes, and that all physical processes can be mapped to algorithmic processes. So, barring belief in a supernatural “other”, we understand (however long the way around) that all of human thought must be implemented as an algorithm, and that personal identity (as well as any other psychological concept) must be mappable to algorithmic terms.

    And when looking at “identity” … there doesn’t seem to be much more there than memory. On the internet – including legal transactions – you are “you” by way of a handle and password – that which you remember. Even consciousness itself seems to be a mechanistic conjuring trick of remembering reconstituted memories of oneself in context of other memories.
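
    (To make that concrete, a toy sketch; the handle and password are invented for the example, and real systems add salting and rate-limiting, but the principle stands: the system treats whoever can produce the remembered secret as “you”.)

        # Toy illustration: online identity reduces to remembered information.
        import hashlib

        accounts = {"colonelzen": hashlib.sha256(b"remembered-secret").hexdigest()}

        def is_you(handle, password):
            # You are "you" iff you can produce the secret tied to the handle.
            return accounts.get(handle) == hashlib.sha256(password.encode()).hexdigest()

        print(is_you("colonelzen", "remembered-secret"))  # True
        print(is_you("colonelzen", "forgotten"))          # False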

    So yes, barring contrary evidence – do you have any, or any argument stronger than distaste? – as a working hypothesis most of us accept “identity” as intimate backward chained memory of self.

    Yes, it means that if we are wrong about the difficulties in reproduction of the information content of the brain, then our individuality, our “identity”, would no longer function as an “identity”, as “we” could be reproduced as easily as you rip a CD; and if it happened to me, each and every one of the copies would have as much sense of self and individuality as the me typing this here and now.

    But it covers the bases we know about. “I” will not exist when the machinery carrying the memory creation and evocation algorithms stops. Dead means “I” simply do not exist and (barring unforeseen advances in technology allowing substantial extraction and reproduction of my memories) will never again exist.

    But it also explains a lot about our development and the creation of identity. “I” did not exist before 1957 … and although a physically contiguous entity with the “I” that now exists certainly did – with documented evidence – between then and my first memories of myself, I don’t have *any* memories of that time (between birth and perhaps three or four years old).

    But by the memory criterion I have been “dead” at least three times … anesthesia seems to completely shut down the memory tracking machinery behind consciousness even while the cells and other bodily functions continue. Twice the anesthesia was of a kind that I had know memory of losing consciousness and returned to consciousness almost instantly. It’s an existentially interesting experience … you suddenly have normall, clear memories of the “instant” before but are in noticably different physical circumstances with no sense of any time having passed.

    And as noted earlier, in a very real sense it means that we “die” in the span of our instant processing – a quarter of a second to around seven seconds, most being in the half-second range, from the Libet and later experiments. The me who started writing this is *gone*, dead, never to exist in this universe again, and *all* I know of him is memory.

    So no, your dislike is NOT reason to reject the hypothesis that personal identity is *only* continuity of memory. If you have actual evidence that such is not the case, I would much like to see it. Similarly if you have an argument that is more convincing than not liking it.

    — TWZ

  193. ixchel, the jaguar goddess of midwifery and war ॐ says

    What makes you think that sub-Saharan Africa is going to be a less-than-$1-a-day poverty zone when the technology starts becoming available?

    (Let’s assume adjustment for inflation.) What makes you pretend otherwise? Let’s see:

    It’s increasingly not that in the present (sub-Saharan Africa finally had some decent economic growth in the past decade),

    brett, do you have any idea what you’re saying? We’re talking about the poorest people there, and you respond with a chart of Gross Domestic Product.

    Following your logic, if we look at the GDP of the USA, we can determine that no one in the USA struggles to feed their family, keep the utilities on, or pay for health care — and of course no one is homeless.

    and who knows what the place will be like 30,40, or 100 years from now.

    And “who knows”, maybe we’ll have reversed global warming too. But there is no sensible reason to plan with such an expectation in mind.

    The trends which are available to observe, outside the fantasies in your head, indicate that it doesn’t look good.

    Not economics, but technology does in fact spread out historically.

    This is irrelevant to what was being discussed: the gap between rich people’s access and poor people’s access. It is the gap which will permanently reify class oppression. You even proposed that a 200 year gap in lifespan should be acceptable. Fuck you. That’s speciation.

    Cars used to be something that only the rich could afford, for example.

    That’s still true, from a global perspective. And within the USA, poor people are far more likely to have cars with no resale value, which are old and more expensive to maintain, and burn more fuel due to age and wear.

    When they have cars at all.

    For example, only 72% of lower-income families in Kentucky have a car, compared with 99% of middle-income families.

    And the very same car is more expensive for a poor person: “Consumers from lower-income neighborhoods typically pay between $50 and $500 more for the same car than consumers from higher-income neighborhoods”.

    That refers just to the price of the car; of course when financing is considered the difference in overall expense is compounded: “On average, lower-income consumers pay at least 2 percentage points more for auto loans than higher-income consumers”.

    And then insurance is more expensive: “On average, car owners in lower-income neighborhoods paid $384 more annually to insure the same low-cost car versus in high-income neighborhoods.”

    Yet instead of dealing with these problems, you want us to accept these kind of inequalities in life extension as well.

    What makes life extension technology immune to that process?

    What we’re talking about is how “that process” is inadequate. “That process” is what’s given us the massive inequalities we currently have. “That process” is not to be settled for. It’s not okay for rich people to have better, more reliable life extension, which lasts longer and is more accessible in certain locations. And that’s assuming the best.

    In reality, the poorest people will not have access to life extension technologies, just like they don’t have access to clean water today. Technology does not fan out nearly as widely as you like to pretend from your first-world position of class privilege.

  194. ixchel, the jaguar goddess of midwifery and war ॐ says

    “Who’s who” in copying is not hard to understand.

    If you go down to the copyshop and have a copy of yourself made, do you think you’re going to have the experience of being both copies at once? No, now you’ve got two people, and each person will only experience being themself.

    Now let’s say the copyshop clerk is having a bad day, and puts a gun to the head of one of you and pulls the trigger. That person is now dead. They don’t “live on” in the other person — there’s no transfer of consciousness.

    Will the survivor care? That depends on your personality, but it doesn’t change the fact that the other person is dead.

    And as noted earlier, in a very real sense it means that we “die” in the span of our instant processing

    But not the very very real sense that being shot in the head means you die.

    Transporters and uploads work in the shot-in-the-head way. If the copyshop clerk shoots the body that walked through the door, instead of the body that was made in the shop that day, then the murder is just a free upgrade to transporter service.

  195. says

    Whoa whoa whoa whoa! What the hell is this, people? All I see is a bunch of stuffy old scientists saying that it can’t be done! History has a habit of proving them wrong. Who knows, they may just pull this off; who are you to tell them that they can’t? If they try and fail, well, that’s it, but if they try and succeed then it’s a new era of mankind. At least they are trying, and not just throwing their arms up in defeat before even starting.
    As for the “copy double” problem, that is a philosophical matter and up to how the person perceives it. If I am going to die and I make a copy of myself, my memory will be living on in the copy, so to me I will have just transitioned. Since you are hacking up my brain anyway, all I would perceive is going to sleep and awaking in the new shell. But that is completely up to the person.
    Come on, people! You are supposed to encourage science, not shoot it down!

  196. hairyfigment says

    @223:

    Transporters and uploads work in the shot-in-the-head way. If the copyshop clerk shoots the body that walked through the door, instead of the body that was made in the shop that day,

    Again, physics does not distinguish between these two originals. There exist ‘possible histories’ or Feynman paths in which every atom switched places between the two bodies. All experimentally indistinguishable histories that lead to a particle of a given type in a given place ‘add up’ to produce a single outcome.

    It sounds like you want to avoid saying that two of you could exist at one time. Tough beans. Physics didn’t consult us.

  197. =8)-DX says

    Agreed. Also, brain-simulating neural networks are always strangely different, but also always computationally simpler. The first True AI™ computers will work on a different basis than human brains. They will have math-computational units and huge (faultless!) memory databases, as well as programming for human consumption. There is no reason to simulate the complicated, decades-long growth and interaction of dendrites at the ends of nerve cells. Computer brains will be made to try to make use of their silicon advantage.

  198. ixchel, the jaguar goddess of midwifery and war ॐ says

    Again, physics does not distinguish between these two originals.

    You’re wrong, because:

    There exist ‘possible histories’ or Feynman paths in which every atom switched places between the two bodies.

    and there exist possible histories in which they didn’t. (And more of these histories exist.)

    All experimentally indistinguishable histories that lead to a particle of a given type in a given place ‘add up’ to produce a single outcome.

    Doesn’t matter. The point is that one of these two people is dead after the trigger is pulled.

    It sounds like you want to avoid saying that two of you could exist at one time.

    No, and you’re a dumbass if you think that’s what I’m saying. I have no problem saying there can be a million people who all conceive of themselves as having the same identity.

    What I’m saying is that when any one of them is killed, the person who was killed is killed.

    As for the “copy double” problem, that is a philosophical matter and up to how the person perceives it.

    No, it isn’t. One person dies and another is created. The only philosophical problem is whether the new person should care about the other’s death.

    But I’m not getting into a transporter, because I don’t want to die.

    Come on, people! You are supposed to encourage science, not shoot it down!

    You are supposed to pretend that everything is possible when I want it to be possible!!!1

  199. says

    stephenschuldt:

    Whoa whoa whoa whoa! What the hell is this, people?

    A reasonable discussion of the possibility of useful copying of the human mind into software form, and the social and economic repercussions thereof.

    All I see is a bunch of stuffy old scientists saying that it can’t be done! History has a habit of proving them wrong.

    Really? I hope you do better in shop than you do in either history or science class.

    Come on, people! You are supposed to encourage science, not shoot it down!

    You have an odd view of “shoot it down.”

    Here’s what helps the progress of knowledge: an honest overview of the limitations of our current knowledge. Believe it or not, this is important. This serves at least two purposes. First, it provides talking points about what we really know — we can discuss what it means to “upload” yourself, for instance. Second, and just as importantly, it helps us map our ignorance. We can identify which bits of knowledge we lack.

    A careful reading of this thread will reveal the many areas in which we are currently ignorant. It’s the knowledge of this ignorance that allows us to say, “These areas will require discovery of things that may or may not exist. With our current level of understanding, and not taking into account fundamental breakthroughs on the order of radioactivity or general relativity (as you can’t predict those), we can say with accuracy that we cannot foresee a future in which this is possible.”

    Absolutely someone may come up with a breakthrough in our understanding of some fundamental nature of the universe that will allow us to copy minds to vastly different media. But currently, we lack the knowledge of how to begin to record the brain in a way that allows this, and we also lack the knowledge of how the brain works such that we can model it on a computer.

    These are fundamental problems. We’re not stopping people from dreaming. We’re just making fun of the people who think it’s a real possibility in their lifetimes.

  200. says

    uploading will likely not even involve any form of destructive scanning of the brain – nor require any detailed “connectome” map- but merely be an automatic consequence of “life-logging” becoming more ubiquitous and running in the background of our constant immersion in the Cloud- we may never discover how to fully understand or duplicate our neurobiology- but with ever improving telepresence/brain computer interfaces in our devices we won’t have to- we will only need to subjectively try out various emulations of parts of our cognition and senses as they appear in the appstore – this will mute all the philosophical debate- what works works

    but ultimately uploading will be accomplished through simulation- I’ve said before an Ancestor Simulation doesn’t care if the beings it reconstructs are alive or dead- a quantum computer computes by sorting through all of its possible states- these states automatically include all of the computer’s past states- and since obviously the quantum computer is made of materials from around and on the earth- its past states include our history- all experiments in QM show that a true quantum emulation of our history is actually entangled with our original history and actually IS the original history by definition [the Identity of Indiscernibles]- so the Cloud will upload and augment the original you and everybody else without ever touching you if need be- few anticipate this- it means that technology disappears- as Terence McKenna predicted-

    /:set\AI

  201. says

    Come on, people! You are supposed to encourage science, not shoot it down!

    Um…no. If a science is going to make new problems, or make existing problems worse, it would be fundamentally irrational to actually develop it.

  202. says

    As a computational neuroscientist, I completely agree.
    Just the receptor composition of a single cell is difficult to know at a given moment. How many NR2B subunits relative to NR2A subunits are there at every single synapse and what phosphorylation state are they in? And the real problem is you have to actively stain for each receptor/protein/molecule that you want to see, and we don’t even know all the relevant proteins to look for.

    And it takes a looooong time to run simulations. The more accurate and complex the model, the longer the simulation times. To run 600 ms of ‘neuron time’ it takes my computer about 1 minute. And my cell is a single simplified neuron. A single dendrite with stochastic molecule diffusion and accurate concentrations takes my colleagues weeks to run for the same amount of time. Similarly, to run a network of 1000 simplified cells like mine, it takes 2 weeks for a couple of seconds of ‘neuron time’.
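
    (Back-of-the-envelope, those figures imply slowdown factors like the following; the whole-brain number is a deliberately naive linear extrapolation on today’s hardware, an illustration rather than a prediction.)

        # Slowdown factors implied by the simulation times quoted above.
        SECONDS_PER_WEEK = 7 * 24 * 3600

        def slowdown(wall_s, sim_s):
            """How many times slower than real time the simulation runs."""
            return wall_s / sim_s

        single = slowdown(60.0, 0.6)                   # one simplified neuron: 100x
        network = slowdown(2 * SECONDS_PER_WEEK, 2.0)  # 1000 cells: ~604,800x
        # Naive extrapolation to ~8.6e10 neurons (assumption: cost scales
        # linearly with cell count -- it almost certainly doesn't).
        brain = network * (86e9 / 1000)

        print(f"{single:.0f}x, {network:,.0f}x, {brain:.1e}x slower than real time")
        print(f"~{brain / (365 * 24 * 3600):.1e} years per simulated second")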

    I have a summary of the current state of computational neuroscience at The Cellular Scale:
    http://cellularscale.blogspot.com/2012/05/neurons-are-like-equations.html

    The advances needed in computational power, brain tissue preservation, and protein/molecule detection to program an actual brain are so huge that I doubt all three (if any) will happen in our lifetimes.

  203. Vilém Saptar says

    ixchel,

    But I’m not getting into a transporter, because I don’t want to die.

    (I’m probably getting in line to have my ass handed to me on a plate, since I haven’t done a decent amount of digging on this. But at the risk that I may learn how wrong I am, here goes…)
     
    Hey! Don’t be such a fundamentalist conservative in this matter :/
    Read Hume on the Bundle theory of self, for instance, if you haven’t, and tell me what you think? (I dunno what the latest thinking on this matter is among philosophers of mind, though I’d read something about this by Parfit a long time ago and vaguely remember him saying things which seem to agree with teleporters. So if that has been refuted for good, or I’m misremembering, I’d like to know…)
     
    Also, even if you were right, it should be clear to you, consequentialist, why teleportation is not murder.

  204. Vilém Saptar says

    Okay that last line should’ve been :

    Also, even if you were right, it should be clear to you, consequentialist, why teleportation is not murder ;)

    (Maybe I’ll get slightly better treatment because of that :))

  205. ixchel, the jaguar goddess of midwifery and war ॐ says

    Read Hume on the Bundle theory of self, for instance, if you haven’t, and tell me what you think?

    I don’t see how it’s relevant. In the copyshop, before the shooting, the two bodies are not occupying the same point in space; thus they have different sensory inputs, the content of their sensory systems are not identical, and thus even assuming Hume is right, their bundles aren’t identical.

    Also, even if you were right, it should be clear to you, consequentialist, why teleportation is not murder.

    Hm? In my example of the enraged copyshop clerk, the customer has not requested teleportation, only copying. So yeah, that’s murder. Of course consensual teleportation is assisted suicide, not murder.

    (Consequentialism has nothing to say about whether or not X is murder; it may have something to say about whether that murder is morally wrong.)

  206. says

    nigelTheBold:

    “Really? I hope you do better in shop than you do in either history or science class.”

    That’s fantastic. I love shop, and I was teaching my teachers in science class… in grade school.

    I understand fully what our limitations are on this topic, and did long before PZ mentioned it. What I meant was that we should always have a skeptical eye when people are diving into the unknown, but we will not advance if we discourage progress and research because we think it falls under SciFi. They are not looking for faeries but something that is tangible and doable.

    Will it happen in our lifetime? I don’t know. But I do think that it is a possibility in our lifetimes, or at least mine, albeit a small one. We have made vast improvements in our understanding of the brain in the 30 years I have been alive, so why is it so hard to think that we could have this done in the next 30 or 40 years?

    I read the thread, and learned nothing that I didn’t already know. All this talk about dualism, and how after you just recreated the most vast neural network known to man it would be impossible for us to simulate some inputs from the body, chemical or otherwise, is very silly to me.
    It will be hard, fantastically hard, but not impossible.

  207. Vilém Saptar says

    John Morales,

    Vilém, if you can make one copy you can make many copies.

    The whole concept is underemployed in popular media and it has most significant ramifications which don’t get addressed.

    (A Spock for every starship!)

    Yes, that’s why I’m not advocating it, just thinking about some interesting aspects of what it would mean for identity and self.
    Also, I’ve heard of Spock, I forgot to say :)

    [OT + meta]

    PS

    Honestly, TL;DR.

    Honestly, it’s canon.

    Maybe, but I’m not a believer :)
    At least not now. It’s not like I’ve made up my mind to not watch it or to dislike it. It’s just that I haven’t until now, and that seems unlikely to change. But who knows, maybe if I get around to watching it, I may even like it. I have nothing against fans or creators of the series.
    Is it highly recommended?

  208. says

    uploading just happens as soon as your internet persona is sophisticated and complete enough to go on without you- it will have your memories because it will have a log of photos/video/audio/sensation/text account/etc of everything you experienced since you connected to the Cloud- and have an extensive database of your life prior to full internet immersion- enough to make more accurate memory models than the self-deception in your soupy brain- it doesn’t even have to be conscious or a true copy- just software good enough to pretend to be you- but it can keep doing what you would have and become EVER more adept in its simulation and recovery of the past until it really is the original you- and this process can take however long it needs to take: decades or aeons

    but you have to face the HARD FACTS

    biology is not encrypted- and we are approaching a fitness bottleneck where only the most encrypted and encryptable systems will survive the new information parasites and predators-

    so it’s not about individuals doing destructive copies- people will automatically expand into virtual networks of flesh and Cloud devices- then with natural mortality the virtual components- having faithfully duplicated everything for years- will simply GO ON- and in time ALL flesh will perish and wither- whether a biotech plague in decades or an asteroid impact in centuries or proton decay in 10^36 years- then at the Planetary level the Virtuality and all our uploaded/resurrected selves- remembering reading this thread- will simply GO ON

  209. colonelzen says

    Ixchel @ 222

    colonelzen is known to be a liar.

    284, you idiot. Fairly mild as abuse goes, yes, but as I read it, in my own opinion, it meets the criteria and substantially motivated the sentence where I asserted such.

    Now try this. You damned fool. Whether you agree that 284 was abuse or not is a matter of opinion. So is that you are an idiot and damned fool.

    (Whoops, is that “ableist”? Well if you want to argue that calling you an idiot and fool is an insult to those intellectually disadvantaged and behaviorally challenged I’ll definitely consider agreeing).

    And nobody owes you an answer because you f’ing demand one on the internet. Twit.

    Stating someone is “known to be a liar” is not a matter of opinion. It is a statement of someone’s character in regard to specific action asserted as fact. And here concerning someone who is fairly well identified … I have used “colonelzen” here and elsewhere fairly frequently for over a decade, and my full name, address and even phone number can be found by anyone of moderate sophistication.

    Whether you qualify as an idiot and a damned fool are matters of opinion. “Known to be a liar” is libel. And it has now been repeated.

    — TWZ

  210. John Phillips, FCD says

    ixchel, the jaguar goddess of midwifery and war ॐ said (15 July 2012 at 10:02 pm):

    I don’t see how it’s relevant. In the copyshop, before the shooting, the two bodies are not occupying the same point in space; thus they have different sensory inputs, the content of their sensory systems are not identical, and thus even assuming Hume is right, their bundles aren’t identical.

    ^^^ This.

    Also, to be slightly flippant, or maybe not: if it was done in front of you, or even if you only knew that it had to happen, I wonder what the mental health ramifications are of seeing or knowing that someone who, while starting to diverge from you due to different sensory inputs, is at one level still ‘you’, will be destroyed. After all, from a mental health perspective, this can be bad enough for the average person even when it happens to complete strangers.

  211. ixchel, the jaguar goddess of midwifery and war ॐ says

    284

    That’s quite a reach, isn’t it. Ms. Daisy Cutter wasn’t one of the people talking to you; so you didn’t get shittalked in the stead of a citation.

    So is that you are an idiot and damned fool. (Whoops, is that “ableist”?

    No, I don’t think it is. And I think you’re already pretty clear on what ableism is, and you’re being deliberately dishonest here.

    And nobody owes you an answer because you f’ing demand one on the internet.

    The claim wasn’t that you were a liar because you weren’t answering; the claim was that you were a liar because what you said was a lie. The fact that you weren’t answering only made it more apparent that you had lied.

    Whether you qualify as an idiot and a damned fool are matters of opinion. “Known to be a liar” is libel. And has now been repeated.

    ZOMG.

    However, while you are obviously full of shit about the “instead” part, I did not notice Ms. Daisy Cutter’s mention of you. I don’t think it’s open to interpretation by smart readers, but since you are stupid, I’ll cut you some slack. It is evident that you were not lying then; you were merely bullshitting.

    You are an exceedingly dishonest person in any case, considering your strawmanning re ableism right now.

  212. Vilém Saptar says

    ixchel,

    I don’t see how it’s relevant. In the copyshop, before the shooting, the two bodies are not occupying the same point in space; thus they have different sensory inputs, the content of their sensory systems are not identical, and thus even assuming Hume is right, their bundles aren’t identical.

    OK about the copyshop.
    I was trying to talk about teleportation, but for copies also there’d be a way to deprive them of their sensory inputs, by starting off with putting them in a coma, before making a copy. Also, I don’t want to sound like I’m ok with murder for science or whatever, or that I’m inconsiderate of beings’ personhood. I just want to think clearly about this, and learn if I may be wrong. So take all this with all the appropriate disclaimers and call me out if I say something morally repugnant.

    Hm? In my example of the enraged copyshop clerk, the customer has not requested teleportation, only copying. So yeah, that’s murder. Of course consensual teleportation is assisted suicide, not murder.

    (Consequentialism has nothing to say about whether or not X is murder; it may have something to say about whether that murder is morally wrong.)

    Again, OK about the copyshop.
    And I meant to say teleportation wasn’t morally equivalent to murder. I’m being horrible at saying what I have in mind.
    But I don’t think it’s established that it’s assisted suicide.
    Consider another example, where you replace a part of a patient’s brain with a synthetic substitute. Here too, there’s a discontinuity in conscious experience, some loss of structural integrity, etc. but this isn’t murder and copy, right?
    Why can’t the same be said of teleportation? Slippery slope?

    Ing,

    Your abuse of smiley’s makes me want to teleport you

    Okay, sorry about the smileys. But you want to murder me for that?

    And your abuse of apostrophes makes me want to…nothing. It’s trivial and common and i don’t give a fuck :)

  213. Vilém Saptar says

    ixchel,
    Also, why isn’t teleportation like the Ship of Theseus in fast forward? Where’s the essential difference?

  214. Ysanne says

    Just to add to the Star Trek comments…
    Star Trek TNG actually distinguishes between replication and transportation; they work on different scales of resolution.
    Transporters scan and reassemble at the “quantum level” (that’s why they need Heisenberg compensators), which is depicted as precise enough to preserve the momentary state of atoms and molecules well enough not to disrupt any (electro/bio)chemical processes. Replication works at the molecular level, which is not sufficient to keep organisms working, and even includes “single bit errors” that make it possible to determine whether a piece of material was transported or copied with a replicator. (“Data’s Day”)
    The possibility of assembling a person’s brain purely on the basis of the scanned information is explicitly affirmed by transporter accidents that duplicate a person without that person even noticing, e.g. Riker in “Second Chances”.

  215. ixchel, the jaguar goddess of midwifery and war ॐ says

    I was trying to talk about teleportation, but for copies also there’d be a way to deprive them of their sensory inputs, by starting off with putting them in a coma, before making a copy.

    That’s not going to stop low-level sensory processing. Comatose people still respond to sensory input. (Such that, if I understand right, the presence of certain reflexes and not others is a test for coma. Maybe wrong on this last point, but google coma and reflex and you should get a rough idea.)

    Pretending for the sake of argument that we could somehow come up with two bodies which are really really duplicates down to whatever level of detail is necessary for this argument to be interesting:

    there’s still two of them. You can stand right there and count them. We know that if we wake them up they will go on to live two different lives. If we destroy one, we’re ending the life she or he would otherwise have. That one won’t know we killed ’em, but the same is true of any non-copied comatose person. Factually they’re being deprived of something, even if they don’t know it.

    Teleportation will mean destroying one.

    Consider another example, where you replace a part of a patient’s brain with a synthetic substitute. Here too, there’s a discontinuity in conscious experience, some loss of structural integrity, etc. but this isn’t murder and copy, right?

    (Let’s compare with suicide&copy instead, so everything being compared is consensual.)

    I’m not sure why there’d have to be a discontinuity in conscious experience there, unless you’re just talking about the anaesthesia. In any case you don’t seem to be proposing the physical destruction of the brain here. I’m tempted to say it’s clearly not-killing, but I suppose it’s possible that the right answer might be killed or not-killed depending on what’s taken out.

    But teleportation is different not in a slippery slope way. It’s different in that there’s no ambiguity: we can watch it happen and see that a body has been physically destroyed. It was alive, and now it’s not. The mind that would have continued to occur in that body, if the transporter had been used only as a long-distance copier instead, now will not experience the life it would have otherwise had.

    (Teleporting involves building a second body with the option of not destroying the first one, except that option of preservation is not taken. Taking apart a ship and moving it piece by piece doesn’t allow for the option of preservation. Here be the difference.)

  216. Vilém Saptar says

    ixchel,
    and, I stand corrected. So much for lazy thinking and sheer idiocy.

    (Teleporting involves building a second body with the option of not destroying the first one, except that option of preservation is not taken. Taking apart a ship and moving it piece by piece doesn’t allow for the option of preservation. Here be the difference.)

    Aah, this is where I slipped up. I shouldn’t have TL;DRed John Morales’ link. I somehow started out with the impression that it was essential to the copying process to destroy the individual first.
    And from then on I unthinkingly ventured into the copying business, which should’ve been easy to see if I were thinking clearly. I was, also FWIW, a lil too eager to apply my vaguely remembered impressions of identity theory.
    So thanks ixchel, for taking time with me.
     
    I don’t want to appear obsessed, but in for a penny…

    Would using a teleporter of the kind I misconceived be equal to killing? I think, following what you said, it must be, but it’s also the case that no one is being deprived of anything, unless you want to count the original and the copy as separate persons, where Hume’s bundle theory may be applied.
    At any rate wouldn’t it make consensual teleporting a matter of choice for individuals, since experience-wise it would be no different from sleeping and waking up? (Assuming, of course, that the teleporter does a perfect job and causes no pain or other side effects, etc.)

  217. Vilém Saptar says

    Hmm, in retrospect, I was trying to talk about consensual copying and choice all along, at least in my head, and maybe I wasn’t as wrong as I just admitted. I was wrong about the Ship of Theseus analogy, sure.

    But since the copy would have wanted the same thing as the original person, it would be a matter of choice whether the person opting for the copying process considered it ethically relevant or not. This would be a matter of opinion for the individual making the choice, and I would think it’s possible most people could get used to not thinking of it as killing, or rather, as killing but not morally bad killing.

    Perhaps my arguments were wrongly oriented toward that goal.

    With this in mind, ixchel, would you or wouldn’t you want to step into that transporter as a matter of choice?

  218. ixchel, the jaguar goddess of midwifery and war ॐ says

    Would using a teleporter of the kind I misconceived be equal to killing?

    Biological life is the continuity of a material structure, so the destruction of that material structure must be killing. The body which goes into the teleporter is destroyed; the life of that body ceases.

    I think, following what you said, it must be, but it’s also the case that no one is being deprived of anything, unless you want to count the original and the copy as separate persons,

    The copy might never think of itself as a separate person; the original stops thinking. Now in this case the original is losing the life she or he would have had if not for deciding to use a teleporter. May perhaps be a fully informed action, but it is death.

    where Hume’s bundle theory may be applied.

    Can it? Isn’t one of the properties of the new person “has not ceased to exist”, which the old one does not share?

    Let’s get weirder. Say a trillion years from now a Boltzmann brain pops into existence which has all the properties you had at the time of your last survivable moment (i.e. it won’t die immediately upon beginning to exist). Does its existence then mean you didn’t really die? As far as I can tell, this satisfies your bundling stipulation; the vast distance in time just makes the original’s death more obvious.

    At any rate wouldn’t it make consensual teleporting a matter of choice for individuals, since experience-wise it would be no different from sleeping and waking up?

    It could be done consensually in any case; if the transporter could work just as a copier but they decide not to use it that way, it’s just assisted suicide as far as I can tell, and I see no moral problem with such.

    I just said I won’t be doing it. If I’m ready to die then I want to do it with less dignity and more drugs.

  219. ixchel, the jaguar goddess of midwifery and war ॐ says

    Ah, well, I didn’t see your #248 but I think I’ve answered it.

    I don’t want to stop people from killing themselves, so long as they understand that’s what they’re doing.

  220. Vilém Saptar says

    ixchel,

    If I’m ready to die then I want to do it with less dignity and more drugs.

    :)

    And thanks again for clarifying my thoughts for me.

  221. Iain Walker says

    Caerie (#216):

    But the converted android version of me would no more be the me of this moment than the twelve-week fetus version of me was.

    Hmm. Depends on what kind of entity is being designated here by the term “me”. If it refers to a particular person (i.e., a self aware agent), then the converted android has a very good claim to be you – it’s a person physically continuous with the person you are now. The 12 week-old foetus, however, isn’t a person at all, and so cannot be one and the same person as you are now, even though it is also physically continuous with you. Rather, it’s a developmental precursor to you-as-person, just as an acorn is a developmental precursor of an oak tree but is not an oak tree itself (and hence cannot be the same oak tree as the oak tree it grows into).

    On the other hand, if the term “me” is meant to designate a particular organism, then current-you and the 12 week-old foetus are indeed one and the same (two different stages of the same organism), but current-you and the converted android are not, because the latter is no longer an organism at all (unless you consider it an artificial organism).

    Just to complicate matters, and all …

  222. Iain Walker says

    Vilém Saptar (#218):

    I just wanted to point out that “distinct self”, in the case of copies, does not make any difference as far as experience is concerned and the “who is alive” question, from the perspective of the subject.

    And I don’t really see how the perspective of the subject is particularly relevant to deciding what is in fact the case – which is what I’m interested in. It would certainly be what I’d be interested in if I had any grounds to suppose that I was a subject in a duplication experiment.

    You may as well replace yourself with a copy and you’d not have “died” in any important sense, if you give up, for lack of a better word, “physicalistically” biased beliefs of what a self is

    Well, I would have died in the rather important sense that I, Iain Walker, individual organism of the species Homo sapiens, would be dead. I don’t really see any way round that simple fact.

    What do you mean by “self”, by the way? It’s one of those ill-defined words that clutters up discussions like this and always looks to me like a category mistake waiting to happen. Any clarification would be nice.

    (#219):

    If physicalism is true IMO, it needn’t be the case that this should matter, since it would be a matter of opinion, if I understand what you mean by “quantitative identity” correctly, but I admit I don’t know what that means exactly and I haven’t read a great deal about identity theory either.

    Quantitative identity (aka numerical identity or token identity): one and the same individual thing. As opposed to:

    Qualitative identity (aka type identity): generic sameness of properties (at a given level of description).

    Thus two black cats of the same size and build are qualitatively identical (given a level of description that includes colour, species, and physicial size and build) but are not quantitatively identical (they are not one and the same individual cat). Another way of putting it is that they are two different tokens of the same type.
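
    (For the programmers in the thread: Python happens to wear this exact distinction on its sleeve, so a toy sketch may help. This is just my illustrative analogy, not anyone’s theory of persons.)

        # Two black cats of the same size and build:
        class Cat:
            def __init__(self, colour, size):
                self.colour = colour
                self.size = size
            def __eq__(self, other):  # sameness of properties
                return (self.colour, self.size) == (other.colour, other.size)

        felix = Cat("black", "small")
        rover = Cat("black", "small")

        print(felix == rover)   # True:  qualitatively identical (same type)
        print(felix is rover)   # False: not quantitatively identical (two tokens)
        print(felix is felix)   # True:  one and the same individual thing

    Two different tokens of the same type, exactly as with the cats.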

  223. Iain Walker says

    colonelzen (#220):

    That you don’t like the idea of personal identity being nothing more than continuity of memory in no way forms a basis for rejecting it.

    Is that what I said? No. So please be so kind as to stop putting words in my mouth.

    I reject psychological continuity as a sufficient criterion for personal identity because it is demonstrably insufficient. If we could copy people, then each copy would be psychologically continuous with the original, but they would not be identical with the original, just as they would not be identical with each other (identity being a transitive relation after all). It doesn’t even work as a necessary criterion, since amnesia patients don’t cease to be the same person just because the chain of psychological continuity is broken.

    to me each and every one of the copies would have as much sense of self and individuality as the me typing this here and now.

    Yes, they presumably would. But they would still be different (i.e., not quantitatively identical) individuals. I have an odd feeling that you may be confusing personal identity (which is a question of the third-person criteria by which we judge sameness of individual over time) with having a first-person sense of identity. It’s the former I’m talking about, not the latter.

  224. Iain Walker says

    Vilém Saptar (#247):

    At any rate wouldn’t it make consensual teleporting a matter of choice for individuals, since experience-wise it would be no diffrent from sleeping and waking up?

    Experience-wise for whom? The person entering the teleporter may experience going to sleep, but isn’t going to experience waking up, on account of being dead. The person exiting at the far end is going to experience waking up, and will also recall going to sleep, but this will be a memory of someone else’s experience, and so is arguably a false memory.

  225. Iain Walker says

    timgross (#229):

    ultimately uploading will be accomplished through simulation

    And ultimately Mount St Helens will be moved to Australia by building a concrete replica of it just outside Canberra.

    I enjoyed Caprica too, btw. As drama.

  226. Vilém Saptar says

    Iain Walker,
    Yes, I now get what was being said by you and others about factual matters, though I wanted to wrangle with the ethical aspects more than the factual ones, and horribly mixed up the two.

    Experience-wise for whom? The person entering the teleporter may experience going to sleep, but isn’t going to experience waking up, on account of being dead. The person exiting at the far end is going to experience waking up, and will also recall going to sleep, but this will be a memory of someone else’s experience, and so is arguably a false memory.

    I mean to say, if a person understood what was at stake in getting into a teleporter, understood that their experience would end as soon as they did it, but that it really wouldn’t matter in any other way to anyone else, then in their experience this would be no different from going to sleep and waking up.
    You change every time you go to sleep and wake up, in a physical sense, though not in any real sense. So people who choose to view it this way may well be justified to step into a teleporter and consider that not dying.

    Factually you’d be right, but I dunno how else to put this: as far as people’s experience goes, it should be up to people to make that choice, right? This should be equal in its ethical status to non-morally-bad killing.
     
    ixchel,
    About Boltzmann brain and bundle theory, the above applies. Factually they’d be dead and they may even know it.

    But they’d not be ethically in the wrong to willingly choose to step into transporters.

    This also throws up an interesting question, at least interesting to me, about how personhood is psychologically rather than physically determined; at least in this far-fetched, imaginary case it seems interesting.

    If someone were to hold an opinion of personhood or “self” like the one I’m trying to describe, would that in fact change what it means to be “someone”, i.e., would it be completely determined by that person’s views, or does it have a strong physical underpinning such that people’s views don’t count? This is again not factually speaking, but speaking at the level of the ethics of teleportation.

  227. Vilém Saptar says

    Also, in all the above, I’m implicitly assuming that conscious experience is the determinant of personhood, given what has already been observed about the ethical status of killing people in a permanently comatose state who haven’t been copied or teleported and who expressed a clear wish, prior to being in such a state, to be killed if they were ever to reach it.

    Also assuming there are no pain or other side-effects in the copying process, etc., and that it is perfect.

  228. says

    I find it ironic that PZ Meyers frequently uses this blog to express his skepticism on uploading- yet the blog itself should be more than sufficient a record to upload/resurrect him

  229. Iain Walker says

    Vilém Saptar (#258):

    So people who choose to view it this way may well be justified to step into a teleporter and consider that not dying.

    I think that to consider it as not dying, they’d need either to assume a flawed criterion of personal identity (e.g., psychological continuity) so that they don’t think of it as a cessation of existence, or they’d need to take a view like Derek Parfitt’s and believe that personal identity isn’t really important, but that psychological continuity with the right kind of causal underpining (what Parfitt calls R-relatedness) is what really matters. Or maybe they’d need to be fans of Robert Nozick’s Closest Continuer Theory. I think they’d some kind of need a theoretical basis for considering teleportation as not dying, rather than just being emotionally blasé about it.

  230. Iain Walker says

    I think they’d some kind of need a theoretical basis

    Should of course be:

    “I think they’d need some kind of a theoretical basis”

    I have lost all ability to proofread.

  231. Rev. BigDumbChimp says

    I find it ironic that PZ Meyers frequently uses this blog to express his skepticism on uploading- yet the blog itself should be more than sufficient a record to upload/resurrect him

    Well that would be fucked up. Uploading from this blog to recreate PZ Meyers.

    Think how confused that would make everyone.

  232. says

    I find it ironic that PZ Meyers frequently uses this blog to express his skepticism on uploading- yet the blog itself should be more than sufficient a record to upload/resurrect him

    That’s not an ironic thought, that’s an idiotic thought

  233. says

    I typed Myers and I didn’t notice my phone autocorrected it to Meyers- there is a lesson there- a warning on the fragility of Flesh in the maw of the Machine (^___-)

  234. consciousness razor says

    I find it ironic that PZ Meyers frequently uses this blog to express his skepticism on uploading- yet the blog itself should be more than sufficient a record to upload/resurrect him

    No, unfortunately, that isn’t sufficient. Now if only we had a few gallons of Bigfoot semen to coat our tinfoil hats… well, no, not even then.

  235. Vilém Saptar says

    Iain Walker,

    I think that to consider it as not dying, they’d need either to assume a flawed criterion of personal identity (e.g., psychological continuity) so that they don’t think of it as a cessation of existence, or they’d need to take a view like Derek Parfitt’s and believe that personal identity isn’t really important, but that psychological continuity with the right kind of causal underpining (what Parfitt calls R-relatedness) is what really matters. Or maybe they’d need to be fans of Robert Nozick’s Closest Continuer Theory. I think they’d some kind of need a theoretical basis for considering teleportation as not dying, rather than just being emotionally blasé about it.

    I’m not well versed enough in any of those positions to say anything about them intelligently, though as I said upthread I vaguely remember Parfit’s view being more conducive to this way of seeing things.

    But, see, you seem to be saying here, “believing it so makes it so”, aren’t you? That’s what I was trying to ask. Is it well established that there’re other reasons to consider teleporting dying, other than your personal opinion of what personhood is? I guess what I’m trying to ask is, are people who hold such a view simply deluded or are they not, irrespective of what reasons they have for holding such a view?
     
    If you feel they’re wrong only if their reasons are wrong, then it’s not an objective fact, right? That, far as persons are concerned, teleporting wouldn’t be dying.

    (P.S.: I’m just getting stuck on the equivalence to a period of unconsciousness; persons who are unconscious are of course not dead, and that’s precisely because they will wake up, and conscious experience seems paramount to personhood, more than any biological definition of life.)

  236. Vilém Saptar says

    If you feel they’re wrong only if their reasons are wrong, then it’s not an objective fact, right? That, far as persons are concerned, teleporting wouldn’t be dying.

    FTFM

  237. says

    LOL- one of the most frequent misunderstandings about computers I have encountered in my many decades of public discourse about the implications of computing is the doubt about being able to obtain any information when you only have some information- but it is well established computer science-

    one of my recent forum replies covers the basics:

    “in order to do a resurrection you are not doing an abstract or reduced simulation but a truly equivalent emulation by sorting ALL universes for some complex record from our history that causally FIXES the correct history- the emulation must according to the Principle of Indiscernibility BE the original history itself- this allows any unknown information from our history to be recovered- you can extract what actually happened with the JFK assassination for instance- or verify that there was no specific person who lived the life of Christ- this information is causally fixed because a complex record like a set of videos can only be produced by one history- if a single atomic interaction is different the difference quickly expands outward and changes the history dramatically [the butterfly effect]- and if there were any cases where a quantum interaction’s outcome in the emulation had no effect on the history according to the “path-integral” formulation of quantum mechanics that event is also in superposition in our reality”

    and here is a nice sampling of the science:

    http://arxiv.org/abs/quant-ph/9904050
    specifically:

    “All Universes are Cheaper Than Just One

    In general, computing all evolutions of all universes is much cheaper in terms of information requirements than computing just one particular, arbitrarily chosen evolution. Why? Because the Great Programmer’s algorithm that systematically enumerates and runs all universes (with all imaginable types of physical laws, wave functions, noise etc.) is very short (although it takes time). On the other hand, computing just one particular universe’s evolution (with, say, one particular instance of noise), without computing the others, tends to be very expensive, because almost all individual universes are incompressible, as has been shown above. More is less!

    *Many worlds.* Suppose there is true (incompressible) noise in state transitions of our particular world evolution. The noise conveys additional information besides the one for initial state and physical laws. But from the Great Programmer’s point of view, almost no extra information (nor, equivalently, a random generator) is required. Instead of computing just one of the many possible evolutions of a probabilistic universe with fixed laws but random noise of a certain (e.g., Gaussian) type, the Great Programmer’s simple program computes them all. An automatic by-product of the Great Programmer’s set-up is the well-known “many worlds hypothesis”, ©Everett III. According to it, whenever our universe’s quantum mechanics allows for alternative next paths, all are taken and the world splits into separate universes. From the Great Programmer’s view, however, there are no real splits — there are just a bunch of different algorithms which yield identical results for some time, until they start computing different outputs corresponding to different noise in different universes.

    From an esthetical point of view that favors simple explanations of everything, a set-up in which all possible universes are computed instead of just ours is more attractive. It is simpler.”
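
    to make the “All Universes are Cheaper Than Just One” point concrete- a constant-size program can enumerate every finite bit string [a stand-in for every possible history]- while a program printing ONE specific incompressible string of length n must be about n bits long- the string has to live somewhere- [this toy sketch is mine- it is not from the paper]

        from itertools import count, product

        def all_histories():
            """Enumerate every finite bit string, shortest first.
            A tiny, fixed-size program that eventually emits any history."""
            for n in count(1):
                for bits in product("01", repeat=n):
                    yield "".join(bits)

        gen = all_histories()
        for _ in range(10):
            print(next(gen))  # 0, 1, 00, 01, 10, 11, 000, 001, 010, 011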

    the idea of sorting all possible universes is further advanced by Wolfram in NKS- and is the root methodology of Wolfram Physics:
    http://www.youtube.com/watch?v=60P7717-XOQ
    “…A few years ago, I was pretty excited to discover that there are candidate universes with incredibly simple rules that successfully reproduce special relativity, and even general relativity and gravitation, and at least give hints of quantum mechanics.
    So, will we find the whole of physics? I don’t know for sure. But I think at this point it’s sort of almost embarrassing not to at least try.
    It’s not an easy project. One’s got to build a lot of technology, and a structure that’s probably at least as deep as existing physics. And I’m not sure what the best way to organize it is. Build a team. Open it up. Offer prizes and so on.
    But I’ll tell you here today that I am committed to seeing this project done. To see if within this decade we can finally hold in our hands the rule for our universe. And know where our universe lies in the space of all possible universes. And be able to type into Wolfram|Alpha “theory of the universe” and have it tell us.”
    a quantum computer emulating physics would of course “compute” all the possible histories of those rules in parallel- a hard upper limit on the number of possible universe histories to sort from is set by observers:
    http://www.technologyreview.com/view/415747/physicists-calculate-number-of-universes-in-the/

    thus it’s a finite computation

  238. hairyfigment says

    The point is that one of these two people is dead after the trigger is pulled.

    Yes, one of your two future selves would die. But if you were going to die then anyway, as in the upload scenario, you haven’t lost anything (except your silly notion of “quantitative identity”).

    and there exist possible histories in which they didn’t. (And more of these histories exist.)

    And? Clearly they add up to produce something that (mostly!) acts like a definite object traveling along a definite path. But the underlying reality looks quite different. If you say that the underlying reality doesn’t matter as long as it acts in a certain way — probably also if you say it suffices for a particle to be mostly the same particle, as your response suggests — then you’ve already endorsed a weak form of functionalism. Don’t see how you can avoid calling any fully-functional upload of you a future self of yours, in that system, save by blind assertion. (This of course assumes the fairies can give you a working upload.)

    You’re just assuming you can’t have two or more future selves because you never have (far as you remember).

  239. hairyfigment says

    Though as far as comment 269 goes, let’s just say the leap from ‘almost certainly finite’ to ‘feasible’ confuses me.

  240. Iain Walker says

    Vilém Saptar (#267):

    But, see, you seem to be saying here, “believing it so makes it so”, aren’t you?

    No. Absolutely not. My point was solely that in order for someone to seriously consider that teleportation doesn’t involve dying, they need some kind of theoretical understanding of the process (and also of what counts as “dying”) in order to rationalise that belief. This belief, and the rationalisation, could still be wrong.

    Is it well established that there’re other reasons to consider teleporting dying, other than your personal opinion of what personhood is?

    Death = destruction or cessation of function of the physical organism. The model of teleportation we’re considering involves the destruction of the physical organism. Also, personhood = property of being a person, or self-aware agent. And the thing that has this property is the physical organism. That strikes me as being a rather good reason. Prima facie, at least.

    I guess what I’m trying to ask is, are people who hold such a view simply deluded or are they not, irrespective of what reasons they have for holding such a view?

    I suppose that whether they’re deluded or not would depend on the reasons given. For instance, if they wanted to argue that psychological continuity alone constitutes continued identity of the person, then I’d call delusion (or philosophical naivete at the very least) on that. On the other hand, a Derek Parfitt fan could always redefine death as the cessation of R-relatedness (so that teleportation, which – arguably – preserves R-relatedness, no longer counts as dying), which I suppose might be a defensible position. I’d probably object to the redefinition, but I wouldn’t call such a position delusional.

    If you feel they’re wrong only if their reasons are wrong, then it’s not an objective fact, right?

    Well, if their reasons are wrong, then it’s true that this doesn’t make their conclusion (i.e., that teleportation != dying) wrong. But equally, it doesn’t mean that teleportation = dying is false. It just means that their reasons are not sufficient to establish teleportation != dying as a conclusion.

  241. colonelzen says

    timgross @ 269

    LOL- one of the most frequent misunderstandings about computers I have encountered in my many decades of public discourse about the implications of computing is the doubt about being able to obtain any information when you only have some information- but it is well established computer science-

    Speaking as a programmer (pardon, “senior software engineer”) who walked with the dinosaurs in the glass palaces communing with them in mystic BAL but today still standing writing .NET for winshit and LAMP where sanity blooms while cohort after cohort of past programming colleagues have crumbled to dust …

    Let me just suggest that you pray to Shannon for some Kolmogorov Complexity before you drown in your own BS.
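
    In case that’s too compressed for you: Kolmogorov complexity is uncomputable, but any real compressor gives an upper bound, and a toy example makes the moral plain enough (zlib standing in, crudely, for K; the sketch is mine, nothing more):

        import os, zlib

        structured = b"you" * 100_000     # a highly redundant 300 KB "record"
        random_ish = os.urandom(300_000)  # 300 KB of incompressible noise

        print(len(zlib.compress(structured)))  # a few hundred bytes
        print(len(zlib.compress(random_ish)))  # ~300,000 bytes; no shrinkage

    The bits that were never in your record are not hiding inside the bits that were.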

    Otherwise quite amusing.

    — TWZ

  242. Vilém Saptar says

    Hi Iain Walker,
    Thanks for taking time and answering my naive questions! I really mean it.

    I’ve learned, obviously, that I have a lot more work to do, and that my thinking is severely handicapped by a lack of knowledge, before I can call any of my opinions even reasonably well-informed, much less ahm revolutionary :), so bear with my noobishness here, if you want to :

    I suppose that whether they’re deluded or not would depend on the reasons given. For instance, if they wanted to argue that psychological continuity alone constitutes continued identity of the person, then I’d call delusion (or philosophical naivete at the very least) on that. On the other hand, a Derek Parfitt fan could always redefine death as the cessation of R-relatedness (so that teleportation, which – arguably – preserves R-relatedness, no longer counts as dying), which I suppose might be a defensible position. I’d probably object to the redefinition, but I wouldn’t call such a position delusional.

    I read the SEP entry on Personhood and I dunno why you consider psychological continuity to be philosophically naive w.r.t persons. IMHO, it seemed sophisticated enough to me to be able to hold its ground against criticisms of it.

    About Parfit, I googled around a bit and learned that he was actually of the view that teleporting = dying in “Reasons and Persons”. I haven’t read RaP, so I only have second hand info. But his later views, esp. his ideas about R-relatedness, are, as you said, more favourable to thinking the opposite; but what his current views on teleportation are, I couldn’t find out.

    Setting aside Parfit,

    Death = destruction or cessation of function of the physical organism. The model of teleportation we’re considering involves the destruction of the physical organism. Also, personhood = property of being a person, or self-aware agent. And the thing that has this property is the physical organism. That strikes me as being a rather good reason. Prima facie, at least.

    Here, I was – if not implicitly assuming – at least vaguely and weakly thinking that we were talking about the death of persons, and I hadn’t given any thought as to how the physical organism would mediate that.

    We consider physical organism-death to subsume person-death because physical organisms have the property of personhood and physical death is irreversible, such that persons wouldn’t be able to exist at a future point in time after physical death. (Likewise, people with permanent brain damage such that they couldn’t be considered persons are still physically alive.) But what if we could reverse physical death? Personhood could then be said to “survive” physical death, couldn’t it, like in resurrection, if it were possible?

    Now, imagine some future technology that could “restart” key chemical reactions in well-preserved brains and reverse death. And imagine further that the atoms in the brain were gradually replaced by other atoms over a period of time during the preservative state, in order to aid the preservation or even just to help us with our thought experiment. If we could resurrect someone from the dead using this technology, surely they’d be the same person, though they’d have died in the interim.

    In fact, teleportation (of the kind that involved destruction prior to copying to another point in space) would be just this kind of death and resurrection, rather than death and copying, wouldn’t it? So couldn’t the person be said to have survived, and not really died, after the teleportation, even though physically they’d have died and been resurrected?

    I know this is shifting the goalposts a little, from teleporting != dying to teleporting = dying-and-resurrecting, but at the level of persons, it’s survival nonetheless.

  243. Iain Walker says

    Vilém Saptar (#274):

    I read the SEP entry on Personhood and I dunno why you consider psychological continuity to be philosophically naive w.r.t persons. IMHO, it seemed sophisticated enough to me to be able to hold it’s ground against criticisms of it.

    Well, it seems to me that the duplication objection is fatal to the idea of psychological continuity being a sufficient criterion for personal identity. If I step into a non-destructive copy-style teleporter, then me and my copy will both be psychologically continuous with the earlier stage of me, and by the psychological continuity criterion will both be quantitatively identical with earlier-me. But me and my copy are both distinct individuals (you can set us side by side and count us for a total of two persons, not one), and are hence not quantitatively identical. The psychological continuity criterion hence cannot be a sufficient condition of personal identity because it would (in this scenario) entail a falsehood. So at the very, very least, you need something in addition to psychological continuity in order to provide a workable criterion (or set of criteria) for personal identity. On its own, it’s a non-starter.

    About Parfit, I googled around a bit and learned that he was actually of the view that teleporting = dying in “Reasons and Persons”.

    You may well be right, since my own copy of RaP has been gathering dust for several years now (and most of my library is in storage at the moment). However, a Parfitt-like approach, which does allow a teleportation mechanism to constitute an acceptable form of causal underpinning for psychological continuity, might still (rightly or wrongly) provide a theoretical rationale for considering teleportation to not involve “dying”.

    If we could resurrect someone from the dead using this technology, surely they’d be the same person, though they’d have died in the interim.

    Yes, since the same physical system would have been preserved.

    In fact, teleportation(of the kind that involved destruction prior to copying to another point in space) would be just this kind of death and resurrection, than death and copying, wouldn’t it?

    Hang on, I don’t see how you can defend an equivalence between the two scenarios. In the first case, one and the same physical system has been preserved and then “restarted” after a period of inactivity. With the physical continuity intact, there’s no obvious objection to saying that it’s still one and the same person. But in the second case, not only is the physical system destroyed (and so physical continuity is broken), but the causal process involved is explicitly one of duplication, even if only one individual is output. Duplication by definition does not preserve quantitative identity. That’s the real objection to the teleportation scenario – it’s a copying process that as a contingent matter of fact also destroys the original. The difference between the two scenarios is like the difference between having your existing car repaired on the one hand, and trashing your car and buying a new one of the same make and colour on the other.

  244. Iain Walker says

    Oh, and your spelling of “Parfit” is in fact the correct one. I have no idea why I thought there was an extra “t”. Duh.

  245. says

    You could, of course, also simply consider the idea instead of slamming the shortcomings of current preservation techniques.

    Nevertheless, the idea is problematic. About 20 years ago, I wrote a computer programme to simulate a CBM 8032 computer. It worked quite well too.

    The fact of seeing programmes I wrote for the CBM 8032 perform happily in the simulation, just as they did in the real machine, gave me some existential problems to digest.

    How would the original programme be able to find out it was being fooled into thinking it was running on a “real” computer? Answer: not at all, unless a programmer would enable it to, in which case the simulation (or “virtual machine”) would no longer be a true representation of the original machine.

    Also, for the original programme, the question is an irrelevant one, since its “world” behaves exactly as it expects it to.

    And those two issues point to what I see as a *potential* fundamental limit of science. Even though the original programme could conceivably write its own simulation of the computer it is running on, it would never be able to prove to any convincing degree that this is exactly what is happening to itself.

    So, both the CBM 8032 and its simulation could well have been created by a “god”, or not, the two are indistinguishable. As a result, the “god” question is not only an interesting question, but also a completely irrelevant one as to how the universe works.

    More relevant to this article: even though I was able to dissect what was happening at every simulated clock cycle, such dissection was impressively inadequate to tell me what the original programme was doing. The only way to find that out was to look at its simulated screen.

    To me, that is a nice example of holism: the idea that one cannot find out the function of the whole by simply looking at its parts.
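
    (If anyone is curious, the heart of such a simulator is a tiny fetch-decode-execute loop. The toy below is made up, nothing like the 8032’s actual 6502 core, but the holism already shows: watching it step through opcodes tells you almost nothing; only the “screen” does.)

        # Toy CPU: watch individual instructions, learn almost nothing.
        memory = [0] * 256
        LDA, ADD, STA, JMP, HLT = range(5)

        # A tiny programme: keep incrementing a counter at address 100.
        memory[0:9] = [LDA, 100, ADD, 1, STA, 100, JMP, 0, HLT]

        pc, acc = 0, 0
        for _ in range(40):                  # run 40 instruction steps
            op = memory[pc]
            if op == LDA:   acc = memory[memory[pc + 1]]; pc += 2
            elif op == ADD: acc += memory[pc + 1];        pc += 2
            elif op == STA: memory[memory[pc + 1]] = acc; pc += 2
            elif op == JMP: pc = memory[pc + 1]
            elif op == HLT: break

        print(memory[100])   # the "screen": the only place the meaning shows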

    And if we shovel more coal into their bellies, they’ll go faster!

    When your computer frequently crashes, self-declared experts will often give the advice to buy a faster computer. What they don’t realise is that this doesn’t solve anything at all; the new computer will simply crash more often.

  246. Vilém Saptar says

    Iain Walker,
    Hi! Sorry have been busy over the last couple days.

    Well, it seems to me that the duplication objection is fatal to the idea of psychological continuity being a sufficient criterion for personal identity. If I step into a non-destructive copy-style teleporter, then me and my copy will both be psychologically continuous with the earlier stage of me, and by the psychological continuity criterion will both be quantitatively identical with earlier-me. But me and my copy are both distinct individuals (you can set us side by side and count us for a total of two persons, not one), and are hence not quantitatively identical. The psychological continuity criterion hence cannot be a sufficient condition of personal identity because it would (in this scenario) entail a falsehood. So at the very, very least, you need something in addition to psychological continuity in order to provide a workable criterion (or set of criteria) for personal identity. On its own, it’s a non-starter.

    Hmm, duplication is a challenge for psychological continuity, but cerebrum transplants or mind-state swapping are just as fatal to non-psychological approaches. And whatever the answer is, if there is an answer, it has to take into account psychological continuity at some point, or come up with a scheme that would preserve psychological continuity even if it had other features. It seems to me psychological continuity is necessary, if not sufficient, to personhood as things stand today.

    Also, in cases that don’t involve duplication or split-brain transplants, the psychological approach seems good enough.

    Also why can’t two people be the same person? Colour me irrational, but given how extreme this scenario is, I dunno why we can’t think of two physical beings as two instances of the same “person”. Their make-up in every way is identical down to the last mental feature, after all. Sure they’d get two separate sets of everything, rights, considerations, value and so on. And we’d have to redefine what we mean by “person”. I’m just thinking aloud.
     

    Hang on, I don’t see how you can defend an equivalence between the two scenarios. In the first case, one and the same physical system has been preserved and then “restarted” after a period of inactivity. With the physical continuity intact, there’s no obvious objection to saying that it’s still one and the same person.

    But I also said this:

    Now, imagine some future technology that could “restart” key chemical reactions in well-preserved brains and reverse death. And imagine further that the atoms in the brain were gradually replaced by other atoms over a period of time during the preservative state, in order to aid the preservation or even just to help us with our thought experiment. If we could resurrect someone from the dead using this technology, surely they’d be the same person, though they’d have died in the interim.

    That would count as physical discontinuity, right? Or does it not? I don’t know how valuable continuity of the purely physical kind is to personhood, esp. in the absence of said person. Also, we experience this kind of discontinuity throughout our lifetimes and we are not different persons purely because we are made up of different atoms.

  247. Nightjar says

    Also why can’t two people be the same person? Colour me irrational, but given how extreme this scenario is, I dunno why we can’t think of two physical beings as two instances of the same “person”. Their make-up in every way is identical down to the last mental feature, after all.

    They would start diverging right after the duplication event because they’d be receiving different sensory inputs from that moment on. They wouldn’t keep on being “identical down to the last mental feature”. So, um, no.
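
    (A toy way to see it, for the programmers: two byte-identical “minds” stop being identical after one event each. Obviously not a model of brains, just the logic of divergence.)

        import copy

        original = {"memories": ["stepped into the duplicator"]}
        duplicate = copy.deepcopy(original)
        print(original == duplicate)   # True: identical at the instant of copying

        original["memories"].append("saw the copy appear on the left")
        duplicate["memories"].append("saw the original standing on the right")
        print(original == duplicate)   # False: one input each, already diverged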

  248. Vilém Saptar says

    Nightjar,

    They would start diverging right after the duplication event because they’d be receiving different sensory inputs from that moment on. They wouldn’t keep on being “identical down to the last mental feature”. So, um, no.

    Ack, yes of course! I guess I was not thinking about what would happen after the duplication, but only up to that point.
    Consider that part retracted. Thanks!

  249. Iain Walker says

    Vilém Saptar (#278):

    but cerebrum transplants or mind-state swapping are just as fatal to non-psychological approaches.

    Not really. A brain transplant retains the physical continuity of the part of the system that does the thinking, feeling etc., so it’s not that problematic (or are you talking about split-brain transplants here?). And mind-state swapping is just that – copying psychological states from one person to another, giving one person another person’s memories etc. There’s no “transferal” of personhood involved, unless you are already assuming psychological continuity as a sufficient criterion for identity. If you copy your memories to my brain, my brain doesn’t become your brain. It’s still my brain, only this time with a bunch of false memories. And since you could copy your memories to multiple host brains, the duplication objection arises if you’re relying on the psychological continuity criterion, but not on the physical continuity criterion.

    It seems to me psychological continuity is necessary, if not sufficient, to personhood as things stand today.

    Amnesia. Admittedly, psychological discontinuity in such cases is rarely (if ever) complete. But we’re still able to accept sameness of person in cases where there is a serious dislocation between past and present psychological makeup. So I’m not sure psychological continuity is that necessary. Also, if you copy all your psychological states to my brain in a way that “overwrites” all my previous states, I’ve no problem saying that it’s still me, with your memories, despite the lack of psychological continuity.

    Also why can’t two people be the same person? Colour me irrational, but given how extreme this scenario is, I dunno why we can’t think of two physical beings as two instances of the same “person”. [snip] And we’d have to redefine what we mean by “person”.

    A person is a self-aware agent, and agents are concrete particulars or individuals. To ask why two of them existing simultaneously can’t be one and the same is a nonsense question, like asking why 2 can’t be equal to 1. So yes, you really would have to redefine what the word “person” means.

    You could define it as a type, a class of entities with qualitatively similar mental and/or physical properties. The problem with this (as Nightjar points out) is that if you define the similarities too narrowly, then duplicate persons will tend to diverge quite rapidly, and so cease to be members of the same class. Indeed, a single person would also diverge from the specifications of the class over time, and so cease to be the same person (i.e., a member of the same class). Furthermore, such a definition would mean that a Boltzmann Brain popping into existence that just happened to have the same psychological and/or physical states as you would count as being the same person as you, even though there is no causal or any other connection between you and it, beyond psychological and/or physical similarity. And that seems to remove the revised concept of personhood a little too far from the original sense.

    (I’m belabouring this point not because I think you’re suggesting it, but because in previous transhumanist-themed threads, I’ve seen Kurzweilites blithely making claims like “A person is a pattern” or “A person is information”. No. That’s not what a person is at all. And if one redefines “person” in this way, you run into problems like those above.)

    Alternatively, and more promisingly, you could redefine “person” as a historical entity, made up of successive entities causally linked so that they form a psychologically and/or physically continuous series, but allow that this can include branching – something like a monophyletic clade, if you like. So two branches of the sequence could count as being two branches of the same “person”. However, this doesn’t solve much, because it remains an open question as to whether psychological continuity or physical continuity (or both) is the best way to determine membership of the lineage (bear in mind that membership of a monophyletic clade in biology is determined by reproductive descent from a common ancestor, which is a form of physical continuity).

    BTW, have you ever read Glasshouse by Charles Stross? It’s set in a post-singularity society where personhood (at least in the legal sense) is treated very much like this. It’s also weird and disturbing and hilarious. But as I said, it’s by Charles Stross …

    Also, redefining the term “person” such that there can be more than one co-existing instance of the same person doesn’t solve the problem of reidentifying the instances over time. All it does is require you to rename the problem from personal identity to (say) the problem of person-instance identity.

    That would count as physical discontinuity, right?

    Not necessarily, because physical continuity of a dynamic system is a matter of continuity of the overall structure, not the collection of particular individual parts. Similarly, a river remains the same river even if the water molecules it contains at time t are different at time t+n, because a river isn’t a collection of particular water molecules but a geographical feature, and so the criteria for sameness-of-river are a matter of continuity of geographical location etc. (Thus when someone claims that you can’t step into the same river twice, they are either talking in terms of qualitative identity, equivocating between quantitative and qualitative identity, or they don’t know what a river is.)
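
    (In programming terms, if the analogy helps: mutate a structure in place and it stays the same individual; build an equal structure from scratch and it doesn’t. Again just my toy sketch, not a theory.)

        river = ["molecule_a", "molecule_b"]
        before = id(river)

        river[0] = "molecule_c"        # replace the parts one by one...
        river[1] = "molecule_d"
        print(id(river) == before)     # True: still the same individual system

        replica = ["molecule_c", "molecule_d"]
        print(replica == river)        # True:  same properties (qualitative)
        print(replica is river)        # False: a different individual (quantitative)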

    Also, we experience this kind of discontinuity throughout our lifetimes and we are not different persons purely because we are made up of different atoms.

    Precisely. It’s the continuity of the system that matters.

  250. Vilém Saptar says

    Thanks so much, Iain Walker, for coming back.
    That clears up so many things, but I still have some lingering points I’d like to throw out; I don’t want to say more, though, until I know what I’m talking about better.
     
    No, I haven’t read Glasshouse and I dunno who Charles Stross is. Thanks for the suggestion!
    (I’ll put it on my unending to-read list :) )