The idea of immortality has held great appeal since time immemorial, but it was long thought of in terms of creating some elixir with the property of bestowing it. More recently, some people have started to think that technology may be able to actually achieve it. This article looks at the ethical implications of two of the proposed methods: rejuvenation and mind uploading.
Like a futuristic fountain of youth, rejuvenation promises to remove and reverse the damage of ageing at the cellular level…. Practically speaking, this might mean that every few years, you would visit a rejuvenation clinic. Doctors would not only remove infected, cancerous or otherwise unhealthy cells, but also induce healthy ones to regenerate more effectively and remove accumulated waste products.
…We can be pretty certain, for instance, that rejuvenation would widen the gap between the rich and poor, and would eventually force us to make decisive calls about resource use, whether to limit the rate of growth of the population, and so forth.
Of course, you could still die from an accident or other trauma, so, as the authors say, “You’d need to avoid any risk of physical harm to have your one shot at eternity, making you among the most anxious people in history.”
Mind uploading means that the brain is digitally scanned and copied onto a computer, with multiple copies made in case one copy fails. But would that uploaded mind really be you in any meaningful sense? Once separated from your body, would you be some kind of zombie?
One question the essay does not address is that our minds are not just neural networks processing information that can be simulated. The brain also depends on various glands that secrete neurochemicals all the time, and these are an essential part of its functioning. How would an uploaded mind deal with that? And wouldn’t the uploaded mind immediately begin to diverge from the mind still in the brain, because the latter would experience new sensory inputs?
The authors look at the ethical implications of each option. It is fun to speculate, but apart from the ethical issues, the practical barriers are so immense that I cannot see either becoming viable. But who knows what might be possible hundreds of years from now?
Lofty says
What rich people dream of as they head into their eighties.
Sam N says
There are a number of dynamic neural network models that use parameters to produce a potential at any point in time, determining whether the neuron will fire. You just add in a signal representing neuromodulator function, to the best of our knowledge, and it will influence or modify the parameters at a node, influencing its potential and function, just as in the human brain. Of course, any simulation of this complexity on such a scale is presently impossible, and likely to remain so for another 20 years at least. I doubt it will be ready in even 50.
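To make that concrete, here is a minimal sketch of the kind of model being described: a leaky integrate-and-fire neuron whose input efficacy is scaled by a slowly drifting neuromodulator signal. Everything in it (the constants, the form of the modulator) is an illustrative assumption, not a claim about how a real brain simulation would be built.

```python
import numpy as np

def simulate(steps=1000, dt=1.0, tau=20.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0, drive=1.2):
    """Leaky integrate-and-fire neuron with a neuromodulated input gain."""
    rng = np.random.default_rng(0)
    v, mod, spikes = v_rest, 1.0, []
    for t in range(steps):
        # Neuromodulator level: a slow mean-reverting random drift standing
        # in for a diffuse chemical signal that alters a node's parameters.
        mod += 0.01 * (1.0 - mod) + 0.02 * rng.standard_normal()
        i_eff = drive * mod + 0.5 * rng.standard_normal()
        v += dt / tau * (v_rest - v) + i_eff     # membrane potential update
        if v >= v_thresh:                        # threshold crossed: fire
            spikes.append(t)
            v = v_reset
    return spikes

print(f"{len(simulate())} spikes in 1000 steps")
```

The point of the sketch is only that the modulator enters as one more input shaping the node’s potential; nothing about the node’s update rule has to change to accommodate it.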
As to whether simulation of the human nervous system would constitute a person -- well, as someone who views integrated information theory as the best hypothesis underlying consciousness, I’d venture that the answer is not zombie but person, though I have considerable uncertainty.
This is also why I’d view producing a simulation of an adult human brain as terribly unethical. I imagine some of the first experiments would be feeding various sensory systems white noise and looking at the whole-brain dynamics. Could you imagine if you were forced into a sensory deprivation tank with intermittent white noise for arbitrary amounts of time?
Jean says
In my opinion, thinking that uploading is a way to immortality is as absurd as thinking that a picture of you is you. It will always be a simulation that will start to diverge from the real you as soon as it’s activated. And even if there were a technology that allowed you to upload into another body, so that you could circumvent the issues you mention about the neurochemicals, it would still be another instance of someone who thinks he’s you; even using a clone, you would not get back the same individual created by a lifetime of shaping the cells and cell structure of the body and brain.
The only way this makes sense is if you believe (consciously or not) in some sort of duality. If you believe that your identity and perception of self is entirely the result of physical and natural events then the best you can expect from uploading is to create a new instance that will fool others (and itself) into thinking you’re still alive.
ionopachys says
I’ve always wondered why popular science fiction and tech futurism so rarely considers the possibility of gradually replacing cells in the brain with artificial analogues that can interface with the organic system and mimic or simulate the physical functions. Hypothetically this would allow a transfer of the mind to an artificial and presumably more resilient device while preserving the unique instance of consciousness. Granted, that is even less likely to ever be possible than digitally copying the mind, but it’s probably the only way to ensure any kind of continuity of consciousness.
sonofrojblake says
I think I’m me. I’m pretty sure I’m a unique instance of a consciousness. That unique instance will entirely cease to exist in, I would estimate, at most three or four hours. No, I’m not committing suicide.
When another, different and unique instance of consciousness begins to exist, about seven or eight hours after this one stops, it will still be housed in the same lump of matter. It will have diverged considerably from the end-state of the instance which is typing this, but (and this is the good bit) it will still do a convincing impression of being the same “person”. Convincing enough to convince my wife, at any rate.
I go to sleep every night, and my conscious self, such as it is, which has existed for anything between ten and twenty hours or so, stops. I’ve been under a general anaesthetic three times and under a heavy dose of ketamine once about a year and a half ago, and each time the conscious “me” stopped. My experience of existence began again, but any sense of continuity must be an illusion.
I’m not a transhumanist utopian, but equally I’m not a dogmatic Luddite. Since I don’t believe in an immortal soul, it is axiomatic that everything that is meaningfully “me” is a result of purely physical processes in a very limited three-dimensional space. It is therefore logical that such processes are in principle reproducible to an arbitrary degree of accuracy, assuming consciousness does not depend on quantum effects (and there’s no reason I know of to think that it does). If they’re reproducible in principle, then in the long term it’s not a philosophical problem, it’s an engineering problem -- understand the system, then build a functional model of it.
Note: NOT a simulation. When you simulate a weather system in a computer, the chips don’t get wet in the rain. A functional model of a brain must model what a real brain does, not simulate what it does. A proper “model” of a weather system would have real clouds. A model aircraft IS an aircraft. So far, near as we can make out, what a brain DOES is form patterns of electrical impulses, and that’s something we can already do, albeit crudely. So sooner or later, we’re going to be able to model a brain. First, we must understand it pretty fully, and so far we’re not making much progress on that despite what the transhumanists would like to believe. My money is on “later” -- to the point I doubt it will come in my lifetime, and even doubt it will come in the lifetime of my son. I have no doubt, however, that it will come.
Jean says
sonofrojblake @ #5
Simulation might not be the right term, but model is also not as clear-cut as you say. A set of equations can be defined as a model, so a model does not necessarily require a physical implementation.
And your description of the conscious “me” seems to imply a duality. That you stop being conscious does not mean that the rest of what defines you stops existing. Everything that happens to you while unconscious will still have an impact: drugs, trauma, rest, exterior conditions and so on will affect how you feel, think and react when you become conscious again. It’s not a simple reset from the moment you lost consciousness.
So thinking of modeling you as a brain only is extremely reductionist. I’m sure the whole body must be involved, and even the different biomes housed inside and on our body have an impact on who we are and how we perceive ourselves. And we are very far from understanding all that (much further away than “just” modeling a brain).
mailliw says
sonofrojblake @5: “My money is on ‘later’ -- to the point I doubt it will come in my lifetime, and even doubt it will come in the lifetime of my son. I have no doubt, however, that it will come.”
Unless, of course, the “Gödel number” for understanding our own mind lies outside what our mind is capable of conceiving.
mailliw says
Given recent research on the importance of the enteric nervous system and even our gut bacteria in determining our moods and personalities, it begins to look as if preserving the brain, whether biologically or digitally, simply might not be enough.
Crip Dyke, Right Reverend Feminist FuckToy of Death & Her Handmaiden says
Of course not.
If it started with all my opinions and tendencies -- everything that would constitute “personality” -- but the facts of not only the universe but also my body changed drastically, then *I* would change drastically. What would it mean to no longer have the bone disorder that I do? How would that affect me? What would it do to no longer be able to experience kissing with someone I love or to whom I am attracted? How would it change my queer activism to live in silico, where not only sexual orientation but any form of activity now recognized as “sex” is completely irrelevant?
You can start out the simulation as “me”, but if it really is at all the thoughtful person that I think I am, it would end up thinking about these things, and its priorities would change dramatically in a very, very short time.
Even my priorities with regard to the world: although my in silico self might still feel empathy, my life would depend on electricity, not food production. Covering Iowa in solar cells instead of soybeans is great for the new me, not necessarily as good for the in vivo me. Currently, power outages are things I don’t worry about much. Would that change? It’s hard to see how it wouldn’t. Currently I’m interested in improvements in transport tech, especially but not only transport tech that reduces lifetime carbon emissions and other pollutants. But if I get my input through digital cameras no matter where I “am”, is transport anything in which I would continue to be interested? Would I spend less time browsing info on the history of life and more time browsing the history of integrated circuits?
No, I can’t imagine that any simulation that starts off as me will continue to be “me” in any real sense. The amount of time I might reasonably be said to remain “me” would be entirely dependent on processing power and might range from moments to days to maybe -- at best -- months.
Really: there’s no way that “me-ness” survives uploading.
sonofrojblake says
@Jean:
True, and that’s unfortunate. There is no word I can think of that means “model” as in “model aircraft” as in “physical copy”, and NOT like “mathematical model”. That’s what I meant, and tried to make explicit.
Well, yeah. But while I’m not there -- not conscious -- it affects only the crude matter. And when I come back -- wake -- “I” will deal with whatever sensory inputs “I” am supplied with, according to my past experiences.
@Crip Dyke,10:
It would mean you no longer have to deal with it. What did it “mean” when I suddenly developed a bone disorder last year? (As in, two of my bones snapped into several pieces, some of which were poking out.) I can tell you what it didn’t mean: it didn’t mean the person with the broken leg wasn’t me any more.
It seems to me that what you’re saying, taken to its logical conclusion, is that ANY major change to your life that affects your priorities and thoughts means that what deals with that change is no longer you. So the “me” that broke my leg is, what? Dead? I can tell you my life and priorities changed in the space of five seconds. They changed again this year when I became a father. Huge, shattering changes to my opinions, priorities, interests, thoughts. Am I not still me?
And yet “me-ness” is known to survive horrible injury and debilitating illness. Did, e.g. Stephen Hawking’s illness mean his “me-ness” was lost, perhaps a bit at a time? I don’t think most people would think so. Yet compare his sense of self at 20 -- a mobile, ostensibly healthy young man with his whole life ahead of him -- and the sense of self at 50, dependent on a battery-powered conveyance not just for transport but for communication. I don’t think your sense of yourself is that fragile.
Sam N says
@9,10
This was hinted at in my original post, but you are both making far too big a deal about the non-brain parts of the simulation -- not in that they don’t matter, but in that they are truly straightforward compared to the massive undertaking of simulating the brain itself. This is why the focus is on that part. But once you have the brain going, these other concerns, like neuromodulators and hormones that are produced outside of the brain and affect it via the vascular system, are very easy to model. Even the enteric nervous system, as massive as it is, is far more straightforward to model than the brain. As are sensory inputs and motor outputs -- those are the parts of the brain we actually understand fairly well and have hope of modeling in present times. Of course, in order to remotely feel like yourself, you would need to be placed in a context where you could make sense of these inputs and outputs of the brain model. This task is far easier than the brain sim itself, and I doubt anyone who understood their significance would consent to being replicated without taking those inputs and outputs into account. This is why I noted the horrific concept of simulating human brains without taking such inputs into account: I imagine it would feel like torture, and I very much believe a simulation of such complexity would be a person (albeit with uncertainty).
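As a rough illustration of why these peripheral signals are the easy part, here is a toy sketch: a circulating hormone modeled with one-compartment first-order kinetics, fed to a (stubbed-out) brain model as a slowly varying input. The function names, constants, and the stub are all assumptions made up for the example.

```python
def hormone_level(steps, dt=0.1, secretion=1.0, clearance=0.2, level=0.0):
    """One-compartment kinetics: d(level)/dt = secretion - clearance * level."""
    trace = []
    for _ in range(steps):
        level += dt * (secretion - clearance * level)   # Euler step
        trace.append(level)
    return trace

def brain_model_step(state, hormone):
    """Placeholder for the hard part: update brain state given hormone input."""
    return state   # the actual dynamics are the massive undertaking

state = None
levels = hormone_level(500)
for h in levels:                   # hormone enters as one scalar per step
    state = brain_model_step(state, h)

# The level settles at secretion / clearance = 5.0
print(f"final level = {levels[-1]:.2f}")
```

A few lines of ordinary differential-equation bookkeeping versus a whole-brain simulation: that is the asymmetry being claimed.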
Sam N says
Also, regarding whether the computer model of the brain would be an airplane, since a model airplane does not function like an airplane: I think you are viewing this from the wrong perspective. It would be a brain, in that it WOULD have the functional capabilities of a brain if hooked up to the correct I/Os. More reasonable would be comparing a very realistic flight sim game to an airplane. Fine, it’s not an airplane, but it may as well be if the I/O of that sim were hooked up to control an actual airplane.
Jean says
I don’t know what makes me “me”, but I do know that if something can exist at the same time I’m alive, then that thing is not “me”. So no matter how perfect the model/simulation/upload/whatever, it won’t be “me”. It could be enough for anyone else and for itself, but it would simply be another instance of ‘something’ that shares something with “me” at some very distinct point in time.
And @sonofrojblake, saying that it’s just “crude matter” is exactly what I meant by the conscious or unconscious idea of a duality. It’s not just crude matter but an essential part of you. To take an analogy, you’re not just the message but the medium too. Or you’re not just the program but also the OS and the hardware (and none of it is generic or unchanging).
sonofrojblake says
@Sam N, 13:
Eh?
Well, that’s just false. A model airplane IS an airplane. Granted, its control surfaces and propulsion are controlled by a pilot on the ground with a radio control unit rather than one on board, but crucially it has to deal with the exact same environment and forces as a “real” airplane, and deals with them in the same way. A computer simulation is ultimately just a bunch of ones and zeroes and has no actual NEED to properly simulate aerodynamics -- you can cheat, if you like, to make the simulation easier to handle. I can tell you from expensive personal experience that you cannot cheat and make flying a model aircraft easier.
@Jean:
Ooh, good analogy… except it supports my argument, not yours. I’m NOT the hardware, and I’m NOT the OS. As I’ve already pointed out, most of (for instance) Stephen Hawking’s “hardware” was stripped away, but nobody suggests that what was left was not “him” in the philosophical sense. I am not my body or my sensory inputs.
I’m not even the program. I’m the session, the particular instance of a run of the program. And it’s my contention that every single day, the “session” ends. Another, separate session begins the next day, and a feature of the system is the illusion of continuity. But here’s the thing: there IS no continuity. The very hardware itself can change from session to session -- I once woke up with a titanium nail down the middle of my shinbone that hadn’t been there before, but I was still me. So far the OS seems fairly stable, but it gets regular updates.
The philosophical problem we will one day have to face is when we successfully boot up a second instance of exactly the same “program”. Their experiences will IMMEDIATELY diverge, obviously, but they will both have the strong impression that they are the original. If the biological original is still around, they’ll at least have the argument of originality on their side. But imagine the situation where it’s possible to back up a mindstate. Let’s say I back myself up tonight, then go and die tomorrow. My wife restores the backup, and “I” wake up feeling like I went to bed on Sunday November 25th and woke up… whenever. A week from now, say. In a new body. It’s not the original “me”, because he died. But he FEELS like he’s me. He, to use the currently fashionable parlance, “identifies as” me. Who are you to tell him he’s wrong?
But here’s where it gets weird: my wife could conceivably choose to have two of me. Assuming the process of uploading to a new body doesn’t destroy the saved mindstate, there’s nothing stopping someone uploading identical copies to two different bodies. Who are these people? They BOTH identify as me. They both know they’re not the original. They both know they’re not uniquely “me”, but they’d be able to deal with that much as twins deal with having siblings. But which gets to have my money/job/house? Either? Both? Neither? We’ve some interesting ethical dilemmas coming up.
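The program/session framing can be made literal with a toy sketch. Nothing here is a claim about how mindstates would actually be stored; the class and the inputs are invented purely to illustrate the point that two copies restored from one backup diverge at the very first differing input.

```python
import hashlib

class Instance:
    """A toy 'session': its entire state is one opaque blob of bytes."""
    def __init__(self, snapshot: bytes):
        self.state = snapshot

    def experience(self, stimulus: bytes):
        # Fold each new input irreversibly into the state.
        self.state = hashlib.sha256(self.state + stimulus).digest()

backup = b"mindstate-snapshot"          # one saved state, restored twice
a, b = Instance(backup), Instance(backup)
print(a.state == b.state)               # True: identical at restore time

a.experience(b"wakes in body one")
b.experience(b"wakes in body two")
print(a.state == b.state)               # False: diverged immediately
```

Both instances would report, truthfully by their own lights, that they continued from the backup; the divergence is in what happens next, not in where they came from.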
Jean says
@sonofrojblake:
You say that it supports your argument, then you go on to say the exact opposite: that you are not the hardware, the OS or even the program, but just the session. You seem to think that you are independent from anything material, which is like any other dualist thinking (religious or not). What I’m saying (and that’s where the analogy was a bad one) is that you are not independent from the material but the result of it (and not just the brain but the whole body, and probably more than just the cells with your DNA). And yes, the hardware can change, but that also changes you in some ways, whether you’re conscious of it or not.
I agree that continuity of the perceived self is an illusion, but that doesn’t change what I’m saying. Free will is also an illusion. That and all the rest of what we are is the result of chemical processes in our brains and everything that influences them (that means the rest of the body, with any change that happens to it, as well as everything outside of it that has an impact on those processes). So you’re not driving what’s happening and what defines “you”; you’re just along for the ride.
And if you reproduce another instance that may think it’s you and appears so to everyone else, it’s still another instance and not you (whether or not that hurts its feelings, and whether or not I, or others, recognize it as a person). That may not make a difference to you, but to me that’s not a path to immortality, just a major ego trip: you’re too important to just die. Having said that, I’d take the rejuvenation immortality even though that would be a very selfish act.
lanir says
The ethics of uploading would probably be more straightforward than presented. The problem would be how new the situation was, and all the myriad new ways to trivially take unethical actions. For example, a lot of the ethics of the situation would centre on the activation of a digital copy and the differences between an upload and a normal mind housed in a human body.
The differences between digital and analog minds after activation would probably be pretty straightforward: mitigating truly unreasonable advantages and so forth. Regulation would trail pretty far behind ethics and may never catch up at all, since the rich would be the ones getting digitized. Deliberate alteration would be unethical, but I can see someone selling rich people on a service to create an intelligent program based on their own mind that would play the stock market for them. Or, even less ethically, based on someone else’s mind. Can you imagine having world-class skills and talents in a field only to find you’re being outmaneuvered by butchered copies of yourself, used by other people to enrich themselves while you fall between the cracks*?
The activation question is the real issue I think. The most ethical version requires that storage but not processing capacity be used for backups. While one copy is functioning, whether in the real world or online, others are not given processing capacity. Basically they’re unconscious but ready to go once woken up by being activated. Ethical issues come up again once multiple copies are active. Which one gets access to things you own? What if they disagree on that? Who decides when additional copies are activated? What rights does a copy activated solely to test whether it’s viable have? Especially if it will be replaced soon with an updated copy from the original still living in a human body? Likely just the right not to be tormented needlessly or kept aware longer than is strictly necessary to perform the task.
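To pin down the policy being proposed, here is a toy sketch of a registry that enforces it: backups occupy storage only, and at most one copy of a given person is granted processing capacity at a time. Every name in it is invented for illustration.

```python
class MindRegistry:
    """Toy policy: dormant backups use storage; only one copy runs at once."""
    def __init__(self):
        self.stored = {}   # copy_id -> snapshot bytes (storage only)
        self.active = {}   # person -> copy_id currently given processing

    def store(self, copy_id: str, snapshot: bytes):
        self.stored[copy_id] = snapshot

    def activate(self, person: str, copy_id: str):
        if copy_id not in self.stored:
            raise KeyError(f"no such backup: {copy_id}")
        if person in self.active:
            raise RuntimeError(f"{person} already has an active copy: "
                               f"{self.active[person]}")
        self.active[person] = copy_id   # only now does it get CPU time

    def deactivate(self, person: str):
        self.active.pop(person, None)   # back to dormant storage

registry = MindRegistry()
registry.store("backup-1", b"...")
registry.store("backup-2", b"...")
registry.activate("alice", "backup-1")
try:
    registry.activate("alice", "backup-2")
except RuntimeError as err:
    print(err)   # one active copy per person at a time
```

Of course, the code only enforces the rule; every hard question in the paragraph above is about who gets to call activate(), and when.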
Access is obviously another area open to abuse. I can foresee someone activating an older copy of an ex-partner, for example, solely to relive a relationship that did not continue. And if you don’t see a problem with that, what happens when the relationships drift more towards being obviously predatory? Where do you draw the line?
* Not entirely a futuristic idea, apparently; look into Vocaloids vs. the singers who were paid a pittance to lend their voices to create these computer programs.
sonofrojblake says
Not at all. Is an instance of a program running on a computer independent of the hardware and/or software? No. Could another, functionally identical but separate instance be run simultaneously on a different set of hardware? Yes. So in that very limited sense it’s “independent”, but the different instance will necessarily be exposed to different stimuli and its responses will begin to diverge immediately. There is nothing dualist about this.
But right now, “I” am an instance that has been running for a little over an hour. I’m “just” another instance, albeit apparently running on the “original” hardware (although even that is arguable -- the current hardware bears little relation to the true original from half a century ago).
Are you telling me I’m not me?
And another one for Crip Dyke @ 10:
Consider: long before it’s possible to preserve and exchange mindstates, we’ll have figured out your comparatively simple bone disorder. What WOULD it mean if you could lie down in something that looked like an MRI scanner, and get up cured? Would the person who got up still be you, despite their experience rapidly diverging from any possible “you” that didn’t have the treatment? It seems to me that being uploaded to a new body would feel a lot like waking from a general anaesthetic to find you’d been cured of being old, or possibly dead. It would be a shock, for sure. And it would change your life, for sure. But to say the person cured isn’t “you” is a stretch.
Jean says
@sonofrojblake:
I’m not telling you you’re not you, I’m telling you you don’t understand what “you” is. And I’m not going to be the one who changes that.
consciousness razor says
sonofrojblake:
Well, I’m nobody important. But you can’t have it both ways. You already said he’s wrong, when you said this:
You said he’s not P. So you would simply contradict yourself if you also said he’s P.
He’s not wrong to have the memories or experiences that he has, nor that in fact he does think like you did before you died (in this story). The issue is that he’s not the same person as you are. He’s a copy or a clone of you. That may be interesting, and it may matter to you that something else has your memories/etc., but it doesn’t support your claims of being identical to that person.
Numerical identity isn’t the same concept as just any old kind of equivalence or similarity or whatever it is that you might care about. There is a specific, unambiguous sense in which there are two people in the scenario you describe, not one. Two. They’re objects which are made of matter, and as such, they are not located in the same place at the same time. So, no matter which way you may try to set things up, one can discern which is which. They’re not indiscernible, so they’re not identical.
And you would not demonstrate otherwise, if for example this kind of factual information about reality were unavailable to anybody (or everybody) for whatever reason — it’s concealed somehow, you died and don’t gather knowledge, or whatever — since that would merely say that someone is ignorant of something. It would still be the case, even if nobody knows it’s true, that you are not the same person as the person who is your copy. We are still stuck with whatever reality happens to be, even if we don’t know it.
Present-day computer software is obviously not conscious and has no interests/concerns/etc., so there is no potential for it to bother a program, when we take that program and copy it indiscriminately, as we’re apt to do. The fact that we can do this to them, and we could even do it by making an equivalent program using a variety of different substrates (because such things are “substrate-independent” to use the technical jargon), still does not imply that in fact there is just one program in such a situation. No, what happens is that there can be multiple copies. It’s a thing that can be copied, but so what? Those add up to more than one thing.
People like you or me are only one thing, at any given moment. The fact is that I (like you) don’t experience things happening to multiple different biological entities which are living independently in multiple places. I’m just one person, right here, typing this to you, right there. That makes just two of us, no more and no less. If you want to imagine me having a conversation with you and with your copy, then that story involves three people, no more and no less. Thus, so-called mind-uploading, if we could do that, wouldn’t mean immortality for you. It would mean somebody else lives (for a while) and you still die.
I’m assuming we can’t license ourselves to whatever magic God would need to sprinkle on you to make your soul continue to exist after death. But if you get to do that, then you can’t tell me your story is coherent and naturalistic. It doesn’t exactly matter which rules you think you’re able to break while nobody’s looking; you would definitely be breaking them all the same.
sonofrojblake says
Fine -- one at a time then.
It’s my contention that “I” cease to exist about once a day for several hours at a time. The lump of meat that constitutes my hardware just lies there, conscious of nothing. When I wake, I have the illusion that I’m the same “me” that went to sleep (even as my memories of doing so are hazy). Right now, that impression is reinforced by the apparently identical hardware I’ve woken up in (apart from the time it’s not -- e.g. waking up with metal in my leg). But that impression is false -- the “me” that woke up an hour ago is not the same one as went to sleep. For a start, the new me has a vivid memory of feeding a plate of uncooked sausages to an old schoolfriend, a memory the previous “me” did not have of an event that never occurred.
If “I” were to wake and find myself in something like a hospital, to be told that my body was gone and “I” now had to live in this new thing they’ve put together for me -- is that still me? There’s still just one of me and I have all my memories. This new thing was never contemporaneously existent with my old body. I FEEL like I’m me.
I have a strong impression that it would not be the “me” that is typing this, but that impression is based on the illusion I have that it’s the exact same “me” from day to day with no breaks.
https://www.smbc-comics.com/comic/2014-11-17
file thirteen says
@sonofrojblake
But there is continuity. I can prove it.
Admittedly I can only prove it for myself -- I can’t prove the rest of you aren’t zombies. And I can’t prove that I have proved it before, since if the “I” that is typing this was new, it couldn’t be sure it was present at the time the memories were being constructed. So memory can’t be trusted.
However, I can prove that the current “I” continues from the current moment. At any time, let “I” be the identity currently driving my consciousness. Then at any later time, if this “I” still inhabits the current consciousness, I can be sure my identity hasn’t been replaced by another.
To put it another way, if I go to sleep at night and wake up in the morning, my identity has continued.
But wait, you say, what if my memory misleads me into thinking that I have been the previous identity when it was really another? Then repeat the test again. You can’t be sure of anything you remember, but you can always be sure of the continuity of your consciousness as you step from moment to moment in that same consciousness!
A corollary, using a combination of Occam’s razor and induction, is that this identity is the one that’s been bound with the consciousness in this body since birth, and others are not zombies either.
Another is that (assuming you can’t notice any discrepancies in it) memory is reliable.
However this all depends on a discrete “I” not being a mere illusion of matter, and that seems unlikely. Your copies of your genes have a vested interest in ensuring you favour them over other people’s copies, so a strong sense of self is only to be expected, in evolutionary terms. If the entire sense of self is shown to be illusory, then it all becomes moot.
Crip Dyke, Right Reverend Feminist FuckToy of Death & Her Handmaiden says
@sonofrojblake:
I think you’re subtly misinterpreting what I’m saying. When mentioning my bone disorder, I’m using the example of a change that is relatively small compared to the total shift of moving my consciousness to an artificial “computer” (for lack of a better word) that would run a very, very competent sim of my brain.
But what’s happening to me in the hypothetical is far, far, far more vast than simply eliminating my bone pain and enabling a more active, athletic lifestyle. There would be thousands of changes at least as consequential, and beyond that there would be the problem that I **know** that a software tweak can increase my happiness and my productivity. Why wouldn’t I edit (or cause others to edit) my functioning such that I enjoy working 24 hours per day? I could even still apply myself to the same jobs where I already have talent and education. Simply by editing out a tendency to boredom without more variety in life, and the need for sleep (if that ever existed in the simulation to begin with), you’ve taken someone very human and made them into someone not very human at all. And the thing is, if I started out as me, I’d want very much to make those edits. There are good reasons to think that I’m better than average at a number of things, and working at them 24 hours per day I’d get even better than that. Why wouldn’t I want to contribute more to society than I could manage in 40-60 hours per week?
The sum of all the basic changes combined with the knowledge that my fundamental reality is different would result in a “me” that is different. I agree that I would still identify as “me” and my sim would experience continuity of experience and continuity of identity. But my friends really wouldn’t recognize me in my sim after a very short amount of time. *I* wouldn’t recognize me in the sim after a short amount of time because problems that are important to a being living in silico are not necessarily important to me at all, and vice versa. We would end up prioritizing very different things, not least because there would be very different things possible for each of us given the physical laws of the universe.
I sincerely doubt that the sim would ever stop thinking of itself as “me”, but if you placed the sim and me in the same situation 6 months or a year after sim-boot, I doubt very many of our important life decisions (at least as a percentage) would be similar, much less identical.
Thus I assert we can *start* with the same behavioral tendencies, but we won’t end up with them, and while both I and my sim will change, the sim will change far faster, given that it would be immediately subject to a completely different set of possibilities and limitations than has ever defined the limits of my own life.
While **someday** we might have enough experience with sims to somewhat-predict how sims grow apart from their initial personality states, by virtue of very different rules that apply to life in silico and life in the flesh, you’re not going to stop the rapid and drastic change away from the initial personality state.
It is this totality, and especially my certainty that in 6 months to a year the sim would make drastically different choices than I would make 6 months or a year after donating my memories and personality, that causes me to say that the sim would not be “me”.
Theoretically you could take things further, so that you’re not only simulating my mental processes but also all my biological processes so I still desire food and sex and have the same preferences in each of these and get the same benefits from each of these. Combine that with an inability to reprogram preferences, moods, tendencies towards feelings, etc., and you’d have something that more effectively simulates what it is to live as me. That’s not what you’re describing above, but since this is all hypothetical you *could* ask me to imagine such a world. And in that world I wouldn’t be so intent on asserting that the sim-me would no longer be me after a short period of time. Even there, however, the Kurzweil supposition imagines us “living forever”. If my sim had no limited lifespan, that too would cause a drastic change in my priorities. I could -- and would! -- engage in long-term learning projects too time consuming for me to prioritize in this finite life. I might very well shut down my functioning for a decade or two if I had short to medium term worries about the resources my sim was consuming or even if I just didn’t want to have to wait to see the next tech leap play out. Again, the immortality factor represents such a drastic change from the constraints of my current life that it’s ridiculously unlikely that my sim would retain the same priorities and behavioral tendencies (in other words, the same personality) for any significant length of time.
No, once you insert immortality into your hypothetical, you make it impossible for any sim of me in that hypothetical to maintain the same personality as me -- or even anything close to it.