I am a materialist in the sense that I think that all phenomena arise from material entities interacting according to laws of nature. I have seen no reason to think that anything supernatural or mystical needs to be invoked to explain anything. I have sometimes been asked, usually by the religious seeking to challenge my atheistic view that the material world is all there is and does not allow for any gods, how I can explain love. They seem to think that love is an immaterial quantity and that believing in its existence requires the same leap of faith as believing in a god. I reply that love is an emotion created by the workings of my brain, which releases certain substances that cause me to have that feeling. I point out that when I die, any love that I feel for anyone or anything will die with me. It does not survive the death of my brain.
But there are philosophers who, while generally spurning ideas of the supernatural, think that there is one area that defies material explanation: what they call the ‘hard problem’ of consciousness. I first came across this issue a long time ago at a philosophy seminar at my university but, to be honest, after many years of reflecting on it, I am still puzzled as to why it is seen as something that is outside the bounds of a materialist philosophy.
So what is this ‘hard problem’? Journalist Dan Falk explains.
In his book Until the End of Time (2020), the physicist Brian Greene sums up the standard physicalist view of reality: ‘Particles and fields. Physical laws and initial conditions. To the depth of reality we have so far plumbed, there is no evidence for anything else.’ This physicalist approach has a heck of a track record. For some 400 years – roughly from the time of Galileo – scientists have had great success in figuring out how the Universe works by breaking up big, messy problems into smaller ones that could be tackled quantitatively through physics, with the help of mathematics. But there’s always been one pesky outlier: the mind. The problem of consciousness resists the traditional approach of science.
To be clear, science has made great strides in studying the brain, and no one doubts that brains enable consciousness. Scientists such as Francis Crick (who died in 2004) and Christof Koch made great strides in pinpointing the neural correlates of consciousness – roughly, the task of figuring out what sorts of brain activity are associated with what sorts of conscious experience. What this work leaves unanswered, however, is why conscious experience occurs at all.
There is no universally agreed-upon definition of consciousness. Awareness, including self-awareness, comes close; experience perhaps comes slightly closer. When we look at a red apple, certain neural circuits in our brains fire – but something more than that also seems to happen: we experience the redness of the apple. As philosophers often put the question: why is it like something to be a being-with-a-brain? Why is it like something to see a red apple, to hear music, to touch the bark of a tree, and so on? This is what David Chalmers called the ‘hard problem’ of consciousness – the puzzle of how non-conscious matter, responding only to the laws of physics, gives rise to conscious experience (in contrast to the ‘easy problems’ of figuring out which sorts of brain activity are associated with which specific mental states). The existence of minds is the most serious affront to physicalism.
I must admit that I don’t get why the “existence of minds is the most serious affront to physicalism”. I get that we may not as yet know the exact mechanisms by which various experiences and feelings arise in our minds. But that just means that it is a difficult and as yet unsolved problem. There have always been such problems in science and there always will be. Why is this one so special that we rule out a priori any possibility of a material explanation?
The example of the ‘philosopher’s zombie’ is invoked to explain the hard problem.
This is where the zombie – that is, the thought experiment known as the ‘philosopher’s zombie’ – comes in. The experiment features an imagined creature exactly like you or me, but with a crucial ingredient – consciousness – missing. Though versions of the argument go back many decades, its current version was stated most explicitly by Chalmers. In his book The Conscious Mind (1996), he invites the reader to consider his zombie twin, a creature who is ‘molecule for molecule identical to me’ but who ‘lacks conscious experience entirely’. Chalmers imagines the case where he’s ‘gazing out the window, experiencing some nice green sensations from seeing the trees outside, having pleasant taste experiences through munching on a chocolate bar, and feeling a dull aching sensation in my right shoulder.’ Then he imagines his zombie twin in the exact same environment. The zombie will look and even act the same as the real David Chalmers; indeed:
he will be awake, able to report the contents of his internal states, able to focus attention in various places, and so on. It is just that none of this functioning will be accompanied by any real conscious experience. There will be no phenomenal feel. There is nothing it is like to be a zombie.
Imagining the zombie is step one in the thought experiment. In step two, Chalmers argues that if you can conceive of the zombie, then zombies are possible. And finally, step three: if zombies are possible, then physics, by itself, isn’t up to the job of explaining minds. This last step is worth examining more closely. Physicalists argue that bits of matter, moving about in accordance with the laws of physics, explain everything, including the workings of the brain and, with it, the mind. Proponents of the zombie argument counter that this isn’t enough: they argue that we can have all of those bits of matter in motion, and yet not have consciousness. In short, we could have a creature that looks like one of us, with a brain that’s doing exactly what our brains are doing – and still this creature would lack conscious experience. And therefore physics, by itself, isn’t enough to account for minds. And so physicalism must be false.
This argument seems circular to me. The thought experiment starts out by asserting that the zombie, while being identical in all material senses to me, does not have consciousness. It then asserts that the zombie, while functioning just like me, will not have any conscious experience. This is supposed to show that consciousness is not explainable in materialist terms. But Falk points out a big problem that also occurred to me.
As one begins to dissect the zombie argument, however, problems arise. To begin with, are zombies in fact logically possible? If the zombie is our exact physical duplicate, one might argue, then it will be conscious by necessity. To turn it around: it may be impossible for a being to have all the physical properties that a regular person has, and yet lack consciousness. [Keith] Frankish draws a comparison with a television set. He asks if we can imagine a machine with all the electronic processes that occur in a (working) television set taking place, and yet with no picture appearing on the screen. Many of us would say no: if all of those things happen, the screen lights up as a matter of course; no extra ingredient is required.
That is how I too view the question. But clearly philosophers who study the hard problem of consciousness are seeing something different and see the existence of the philosopher’s zombie as plausible. Whenever I have heard philosophers discuss this question, I have the same reaction as when sophisticated theologians discuss things like the ontological argument for the existence of God. Both groups seem to depend on the argument that the ability to conceive of something gives that something’s existence a reality that then leads them to their desired conclusion. Both groups clearly feel that they are making a very powerful argument in favor of their stance but I just don’t get what it is that is so significant, since I cannot see how the ability to conceive of something reveals anything meaningful.
I am mindful of the danger of too quickly dismissing as nonsensical arguments that I don’t understand, especially if those arguments are meant to support conclusions that I disagree with. So I have tried hard to get to grips with this question and failed. I have the sense that I may be missing something but do not know what.
The essay goes into great detail on this issue but it did not solve my own hard problem: why philosophers consider the hard problem of consciousness to be inherently beyond a material explanation involving the brain.
steve oberski says
a creature who is ‘molecule for molecule identical to me’ but who ‘lacks conscious experience entirely’.
This is a contradiction: obviously the creature is not ‘molecule for molecule identical to me’ if it has different attributes than I do.
DonDueed says
If such a zombie did in fact exist, but looked and behaved exactly like its “real human” twin, how would one determine that it did not in fact have conscious experience? It would react to any stimulus or event in the same way as its twin, and would even insist (when queried) that it did have consciousness.
So what does it even mean to say it is a zombie and not a normal human?
Rob Grigjanis says
I think terms like ‘consciousness’ and ‘free will’ have just been around for so long that many of us think they must refer to something concrete. But I’ve yet to see a satisfactory definition of either term. Mostly handwaving.
The best I’ve seen regarding ‘consciousness’ was given in Julian Jaynes’ book The Origin of Consciousness in the Breakdown of the Bicameral Mind. It may be a bunch of hooey which gets us no further in the endless blather about this stuff, but at least it was a fun read.
GerrardOfTitanServer says
On the one hand, the hard problem of consciousness is not very interesting because it has been very, very carefully constructed to be completely untested, e.g. the philosophical zombie example. On the other hand, I think that it’s the fundamental question underlying morality. I guess I take the IMHO parsimonious and/or conservative route, which is, roughly: anything which can pass the Turing test probably has first-person experience.
GerrardOfTitanServer says
Ack, completely untestable*
Brony, Social Justice Cenobite says
Read “Self comes to mind” by Antonio Damasio. He describes how “absence seizures” relate to consciousness. It’s like the part of the brain that determines current awareness of goals is off, but you still have something that will for example react to a cup by picking it up.
The same brain region may be responsible for “blind sight”, someone with no conscious vision who can still dodge objects.
I think consciousness is a director level network that binds center of attention to a larger context.
mikey says
I can’t see how you can have this “identical” zombie brain, that doesn’t have the same experiences, associations, feelings, etc. An adult brain is physically altered by experiences as it grows and develops. It’s either identical, or it’s not.
JM says
@1 steve oberski: Only if all attributes are physical. Doing the argument that way is a way of slipping in a door for consciousness to not be a physical property. The argument can also be phrased in a way that doesn’t have that problem, by taking out the absolutely-identical-down-to-the-atomic-level part. That opens up other lines of argument that the author was probably trying to avoid.
Deepak Shetty says
I think, at least for me, it’s the glib way some folks approach it, rather than a commitment on my part to something supernatural.
Something along the lines of
“All materials obey the laws of Physics/Chemistry”
“Our brain is composed of physical/chemical elements”
“Therefore our brain follows the laws of physics/chemistry”
“Therefore no free will or whatever else”
Without ever explaining which law of physics or chemistry we follow for our decisions, or why there couldn’t be a law that said physical elements that evolve to have consciousness don’t follow physical laws! I don’t a priori dismiss a material explanation, but I don’t see why I should a priori dismiss other possibilities either, as you seem to be doing. After all, we’ve not been able to replicate this happening even with all the knowledge and tools we have -- it’s not just that we don’t have an explanation, it’s that for all we can do, we can’t yet make a simple single-cell microbe from inanimate materials.
Deepak Shetty says
@Rob Grigjanis
Well, what would you call what we internally experience? I’ve found better luck with philosophers who aim to use the terms to describe that experience, rather than an abstract definition against which it is then evaluated whether we have it or not.
Marcus Ranum says
Well, what would you call what we internally experience?
I call it “the self.”
Here’s a thought: if it’s something we internally experience, how can we know if others experience the same thing or in the same way? Is it the same thing at all, or perhaps does everyone’s experience of self-hood differ? [I’m going to go out on a limb and say that our sense of self appears to be mostly learned; observing infants as Piaget did ought to convince most people of that, in my opinion.]
Marcus Ranum says
Rob Grigjanis:
I think terms like ‘consciousness’ and ‘free will’ have just been around for so long that many of us think they must refer to something concrete. But I’ve yet to see a satisfactory definition of either term.
“Love”, “Freedom” and so forth. It’s always seemed to me that humans place a great deal of stock in things they don’t understand and can’t define (and probably can’t verify that they exist) -- I’ve always felt that the absence of a theory of ensoulment speaks volumes.
On the other hand, you have no choice about whether to believe (or not) in “free will” -- it’s just how you were raised and learned to deal with the world. 😉
GerrardOfTitanServer says
First-person experience, aka consciousness, aka qualia, is not free will, IMO. Having said that, qualia is horribly underdefined. For more on that see, Dan Dennett. I can conceive of the logical possibility that this rock does not have first-person experience, aka consciousness, aka qualia, like I do. I believe that rocks don’t have first-person experience. I believe that it’s impossible to “hurt” a rock because it doesn’t have awareness, feelings, a sense of self, etc. Just like I can conceive of a rock without those things, I can conceive of a person-like thing without awareness, feelings, a sense of self, etc. Aka, I can conceive of a philosophical zombie. Having said that, I believe that philosophical zombies do not exist (via rather tenuous reasoning).
If you want to see a satisfactory definition of free will, Dan Dennett is still my favorite source, and this is still my favorite video:
>Prof. Daniel Dennett: Is Science Showing That We Don’t Have Free Will?
>A public lecture by Daniel C. Dennett, Professor of Philosophy at Tufts University, entitled “Is Science Showing That We Don’t Have Free Will?”
https://www.youtube.com/watch?v=5cSgVgrC-6Y
tl;dw
Determinism doesn’t get you the “traditional” notion of free will, aka libertarian free will. Everyone agrees on this. The kicker is that non-determinism ala quantum theory doesn’t get you libertarian free will either! Libertarian free will is simply a logical impossibility. Thus, Dennett proposes that we tweak our understanding of free will, just a little, to match something that really exists in the world. Dennett proposes that the really important part of free will that really exists is how it relates to moral and legal responsibility for wrongdoing. In that sense, we can define free will as a capability that some machines in the world have, and some machines in the world do not have, and we can define duress in a meaningful way that reduces or eliminates moral culpability because it reduces the meaningful amount of free will that a machine can express. Much of the lecture is to try to persuade you to this line of thinking and show you how it’s not really that strange at all to think about free will in this way.
PS: Libertarian free will, while often called the traditional notion of free will, is not the traditional notion of free will. Separate rant.
GerrardOfTitanServer says
Marcus,
No, you do have a choice. Almost all of us have a choice. We can choose to be ignorant, and we can choose to learn about the world around us. These are definitely real choices that really can be made.
mnb0 says
A better explanation of the problem is probably this one:
https://www.academia.edu/2332556/The_Problem_of_Qualia#:~:text=The%20Problem%20of%20Qualia%20The%20existence%20of%20qualia,sensations%20of%20experience%20only%20exist%20within%20the%20mind.
It also refers to a simple version I heard as a child. Something may emit light of 475 nm, it reaches our eyes and we call it blue. But how do I know that you experience it the same way as I do? Maybe you experience it as the colour I call red. There is no way to know.
Most dualists, including the ones who use this argument to conclude that there must be an immaterial reality, make one big mistake (a specific version is called God of the Gaps): the fact that science can’t explain something yet doesn’t mean science will never be able to explain it. Plus, accepting an immaterial reality doesn’t solve such problems by any means. As long as there is no reliable method to investigate such an immaterial reality, and assuming it adds nothing to our knowledge and understanding, we cannot talk reasonably about it and dualists should remain silent.
So the problems of consciousness and of qualia don’t bother me either. They are no more problematic than the inability of physics to explain superconductivity at relatively high temperatures (BCS theory says it’s impossible). Still, nobody jumps to the conclusion that some Invisible Pink Juggler is responsible.
mnb0 says
@3 RobG: “But I’ve yet to see a satisfactory definition of either term.”
If that’s going to happen neurobiologists reaching consensus on a model of the human brain will give them. I’m not looking down on centuries of philosophy; it has done some very good work to identify the problem. But if there is one thing we should have learned from the last few centuries is that philosophy has to leave where science enters.
Rob Grigjanis says
Deepak Shetty @10:
I don’t feel the need to call it anything. Jaynes described it as an internal narration, in which we picture ourselves making coffee, or walking to the shops, or asking the cute barista out. So, ‘internal narration’ works fine for me. IN, if you like.
Jean says
If you start with the premise that consciousness is an emergent property of the physical and chemical makeup of the human being (the whole body, not just the brain), then the “philosopher’s zombie” doesn’t make sense. And I don’t see any valid argument for the premise not being correct.
Deepak Shetty says
Sure, but your attempt to describe it will run into the same problems as any discussion of consciousness/free will. Some people seem to think the baggage inherent in these words is the problem -- rather than that we experience something that is difficult to clearly describe and universalize.
We can and do communicate and discuss with other “selves”, and we seem to report common shared experience for many things. If you rephrase the question to “exactly the same thing” -- the answer is probably not. I probably do not experience love or hatred exactly the same way as you do, but there probably is enough commonality that we are able to communicate/understand.
I’d say mostly true -- But I can’t/don’t learn what you think right -- only what you show/communicate.
GerrardOfTitanServer says
Jean in 18, wholly agreed.
GerrardOfTitanServer says
Well, actually, I spoke too fast. I’m pretty sure it’s almost entirely the brain. With the right synthetic neurological input, you couldn’t tell if you had a meat body, or a metal robot body, or were a brain in a vat.
sonofrojblake says
Came here to recommend Julian Jaynes. Beaten to it by RobG @3. Consider his recommendation seconded.
jenorafeuer says
My personal take is pretty much:
Consciousness is an emergent process. We see lots of cases where much more complicated processes come from extremely simple inputs. You can build a Turing machine out of ‘Life’ cellular automata, and Conway’s Game of Life can be defined by two very simple rules.
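The two rules really are that compact. As a minimal sketch (the helper name `life_step` is hypothetical; cells are stored as a set of `(x, y)` coordinates), one Life generation can be written as:

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Game of Life.

    `alive` is a set of (x, y) live cells. The two rules:
    1. A live cell survives with 2 or 3 live neighbours.
    2. A dead cell becomes alive with exactly 3 live neighbours.
    """
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in alive)
    }

# A "blinker" oscillates between horizontal and vertical with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(life_step(blinker)) == blinker
```

Nothing beyond those two neighbour-counting rules is needed, yet configurations built from them can emulate a full Turing machine.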
Consciousness as we perceive it is a fiction anyway, it’s a justification our brain puts together after the fact to serialize the parallel structure of what has been going on ‘under the hood’ so as to make things easier to pick apart later in memory. By the time you are conscious of making a decision the decision has already been made anyway. This is one of the big problems of how people frame the ‘free will’ argument… there is this underlying assumption that consciousness is driving the bus, when it almost certainly isn’t because lower-level processes like what is often called ‘muscle memory’ are where decisions really get made before they get rationalized by the conscious level later.
@Jean, GerrardOfTitanServer:
You can’t just deal with the brain, though, parts of our neural network are distributed. Things like the patellar/’knee-jerk’ reflex are processed and dealt with literally before the brain gets the notification that something has happened.
GerrardOfTitanServer says
Ok. I would argue / assume that it’s a very small part. I think the clearest way to say this is: You would still feel like you in a metal robot body with the right kind of sensory and motor input and output connected to the brain. If the automatic reflex was missing in the metal robot knee, then that would seem very strange to you, but it would still be you to experience that strangeness.
Jean says
I would argue that it is the whole body and even some of the living organisms that are not actually part of the human body that affect the functioning of the brain and how we feel. The gut biome certainly has an impact on how we feel and act.
I’m not certain that this can be accounted for in what can be called consciousness. But it could certainly account for how consciousness is different for each individual and how it could change over time in one individual.
Jörg says
Marcus @#12:
I would not call it convincing or scientific, but the RCC has one, ‘II. “BODY AND SOUL BUT TRULY ONE”‘ at:
https://www.vatican.va/archive/ENG0015/__P1B.HTM
😉
seachange says
I personally have experienced philosophers who won’t buy proposition 2, either with religion or without, but who nevertheless will die on the hill of proposition 3.
Because they’re philosophers, and they’d be out of a job unless they can convince someone that they still have something to talk about. It’s a matter of internalized but not expressed to you or me in any way survival.
Also they give folks who have specialized scientific information and who want to compare metaphorical dick-size something meta and not reliant on information they can’t have to mutually scream around and about like the apes that they and we are without actually letting on that that’s what they’re doing, because as Jane Goodall can tell you, apes do do that. Mensa on steroids.
Heaven forfend they admit that their branch is bankrupt.
A lot of what philosophers do, independent of this particular issue, is create new blictri gavagai colorless green words and then argue the heck out of them. It’s what they do. It is the academization of the power game and social game that people do drinking at the bar and shooting the shit.
Philosophy without the modern interpretation of Natural Philosophy that we now call science is bogus.
Philosophers (“pure” philosophers, now that’s a word for you, because the pure is implied now isn’t it?) do not create new paradigms, they are descriptive only and describe them decades after they happen. They don’t create new information. They are the original fake news.
rblackadar says
@13 GerrardOfTitanServer:
Dennett is exactly the right philosopher to bring in here, but not so much his position on free will; rather, his more direct critiques of qualia and of the so-called Hard Problem, which he (I think correctly) characterizes as incoherently posed and illusory. There are literally dozens of videos on YouTube where he elaborates this position; I won’t give a link because I can’t pick a favorite.
Regarding philosophical zombies, Dennett basically says that, although this thought experiment may seem plausible at first glance, it’s not possible to define zombies in such a way that they could exist in real life. (His critique of Searle’s Chinese Room takes a similar approach, btw.) A zombie would have to somehow keep track of all sorts of things, both internal and external, also anticipate dangers, plan out responses to questions, etc., all with exactly the same skill that a normal human has — so how is this to be accomplished without some sort of internal dialogue at the very least? You can’t just say that the zombie somehow just does it — details, in this case, actually matter. There’s no way to bring a coherent zombie into the world except by stretching the definition so far that we all become “zombies” ourselves. A coherent zombie would have to have some sort of “belief” that he is not a zombie, i.e. would have a strong “conviction” that he is having real experiences. So how does that make him different from, for example, me? A philosopher like Descartes might say that I could not be mistaken in my conviction, but even if I were to grant that, how can I say that the “zombie” is mistaken?
I really like Mano’s TV analogy — get all the hardware right, power it up properly and give it all the proper inputs, and voila, the picture always appears, exactly as it must. This puts the lie to the “same neurons etc.” form of the zombie thought experiment; which is why my remarks (following Dennett) focus on a less specific but still incoherent notion of zombie. The only danger I see in Mano’s analogy is that consciousness is not at all some kind of show that we watch — that would be the Cartesian Theater, a term that Dennett coined in order to point out that it is an illusion, a very appealing one but very wrong.
sonk pumpet says
No wonder this reminds one of the ontological argument for god. It slipped a “if you can conceive it then it must be possible” into the process, which is absurd on its face. I am also quite vexed that so many smart people -- often smarter than I -- find that to be an acceptable and powerfully convincing argument, or this one. I’m not convinced this consciousness is even a thing. It seems to me synonymous with the self, which to me seems like a construction of cognitive convenience rather than a genuine entity of some kind.
Rob Grigjanis says
seachange @27:
They don’t create new paradigms because that’s not their job. Their job is to interrogate those paradigms. During a period in which new paradigms are few and far between, it’s deceptively easy to dismiss their role, and many scientists do just that (Lawrence Krauss, Stephen Hawking and Neil deGrasse Tyson come to mind). ‘Practical’ people have been dismissing philosophy for thousands of years. But some notable figures recognize its importance;
https://blogs.scientificamerican.com/observations/physics-needs-philosophy-philosophy-needs-physics/
Read the whole thing.
file thirteen says
From a programmer’s perspective, I never thought that the hard problem of consciousness meant there had to be some non-material woo associated with consciousness. Having said that, the hard problem of consciousness is aptly named. Suppose I attempt to write an AI that behaves like a human mind (and if you believe in emergent consciousness, then it will be a person, perhaps even a human -- I leave that to philosophers to decide). I can attach a video camera to my AI that converts the visible spectrum to a signal dependent on wavelength. But now I have to try to write code to decrypt that signal into some sort of experience. How do I go about making my AI experience the colour Blue in the same way that humans experience Blue? Just putting in code that says “tag this signal range as value “Blue”” doesn’t even begin to duplicate what a human experiences when they see Blue.
That’s the hard problem.
To delve into this a little more, a simple difference in experience, not the hard problem, is colour blindness. When I see the colour Red, to me it’s very vibrant. But I have a friend who is red/green colour-blind, and I know their experience of Red is different to mine. Green traffic lights don’t seem very different to Red ones to them. They wear colour combinations that I recoil from, like apricot shorts with a pink shirt. But again, that isn’t the hard problem! If we were to swap eyes, we would each receive colour signals from the eyes as the other had. But would our brains then interpret those signals in a similar, or identical, way? Now that is the hard problem.
Now when we talk about philosophical zombies, I believe the difficulty is that this thought experiment muddles the hard problem with another problem, which I call the soul problem (and by “soul”, I don’t mean the religious concept of a disembodied consciousness floating around separate to the body, more the relation to the consciousness of the human that’s typing this as “file thirteen”; the thing that makes me me and you you).
So, the hard problem is that so far there’s no way to tell whether you experience something the same way I experience it. But if we take that to its extreme, I have no way of being able to tell that you experience anything at all. That’s because I can’t predict your experience from your behaviour; I can theorise on your experience, but there’s no known way to prove it’s even similar, let alone the same. It’s a hard problem! And if we’re taking it to its extreme, well, I could theorise that you don’t have any experience at all -- that you’re a “philosophical zombie” that reacts but doesn’t experience. But that isn’t really a useful thought experiment; it’s akin to dwelling on whether we might be in a simulation, whether there is a god, or whether I am the only one experiencing a consciousness -- whether life is merely an illusion, blah, blah.
Here is the point where I could go into detail about what I call the soul problem, but I don’t want to associate it with the hard problem in your mind, as to me it’s completely separate. So, another time.
John Morales says
I like Dennet’s coinage of “zimboes”.
I’m with Mano, basically.
Consciousness is clearly an epiphenomenon, not a phenomenon — that is, not a directly physical thing. The rules are probably a bit more complex than schooling or swarming, but that’s the sort of thing it is. More like a behaviour than like a thing-in-itself.
GerrardOfTitanServer says
Again, I think parsimoniously, the best explanation and very likely true explanation is that certain physical structures cause certain first-person experiences. I am not only the first-person experience. I am also my physical brain, and all of my decision-making power happens in the physical brain.
Having said that, it’s still logically coherent to talk about the conceptual possibility that sometimes the same physical structure leads to first-person experience, and sometimes it doesn’t. It’s not the most parsimonious explanation. It’s not a likely reality. However, it is epistemically possible. To my best understanding, this is the hard question of consciousness, and I think almost by definition it cannot be tested, and therefore there cannot be any possible answer to it beyond the simple parsimony-argument that I just gave.
file thirteen says
Gerrard #33:
It really isn’t. The hard problem is the one that I described in #31. What you are talking about is the question of whether consciousness exists in all of us. But that’s an easily answered question: the answer is yes. You can make it a point of contemplation because it can’t be proved, but just as relevantly it cannot be disproved that I’m actually an alien imitating a person antecedent to the invasion fleet. See also, Jean #18.
(Having said that, you seem to ascribe a different meaning to the word “parsimonious” than I do, so I may be missing something. It doesn’t make sense to me in the context you used it. I can only find dictionary definitions agreeing with mine, which is “frugal”)
And when you say “first-person experience”, ah, whether something is experiencing each consciousness, now that’s what I call the soul problem. There is no free will of the “I” experiencing this (“file thirteen”’s human body’s) mind; the brain will think as it thinks, the body will act as this body acts. And if my “soul”, for want of a better word for the link between time and this consciousness it’s experiencing, were experiencing you, it would experience thinking what you think, acting how you act. But why should this (my) consciousness be experienced by anything at all? Consciousness is an emergent property of the brain, but why should there be anything experiencing this consciousness, let alone over time, which in Einsteinian terms is just another dimension of space?
Sorry, now you’ve got me started, and there’s a lot more, if you care. But I did say I wasn’t going to get into it.
GerrardOfTitanServer says
I think you’re confusing terms. A soul is something that survives after brain death. You can have first-person experience without a soul. You can also have first-person experience without free will.
file thirteen says
Gerrard #35
Agreed that “soul” may not be the best term. It does have historical baggage attached. It was the best short word I could come up with; I’m open to suggestions. But you’re not necessarily wrong when you talk about persistence after death.
Suppose my body has various parts cut away, until I have cut away all of my sensory inputs, my memories, my ability to reason. I’m then not much different to a rock. I no longer have anything to experience, but is there some sort of continuation? Then memory and reasoning and sensory inputs are restored. Do “I”, what I previously called soul, being whatever experienced the first-person experience of my mind previously, have any sort of continuation, even if there is no continuation of memory? If yes, what is the common link? It’s not any particular cell in the body -- cells are constantly replaced. So if mind is an emergent property of an operating human, and the body is broken up into constituent particles, and those particles are reformed, is there continuation? If yes, what do you call it? If no, why not?
Reginald Selkirk says
The P-Zombie argument boiled down to its essence:
Imagine that consciousness is immaterial.
Since you can imagine it, it must be true.
QED.
As for qualia, how about if we discard medieval concepts and start from the ground up with building a view on the findings of neuroscience instead? I really don’t have any patience for phlogiston apologists.
New mechanism involved in learning and memory
Reginald Selkirk says
@28 mentions the Chinese Room scenario. This is another example of a philosophical argument that was manifestly bad, one that anyone with scientific training should be able to see through, and yet Searle was convinced that it proved what it did not prove.
Reginald Selkirk says
Sure. And I can conceive of the logical possibility that this rock does have first-person experience. So “conceivable” and “possible” and “actually existent” are different concepts. This is why there are different words for them.
grahamjones says
I think that the hard problem of consciousness really is hard. I’m basically on Chalmers’s side, not Dennett’s (but not really happy with Chalmers either).
My preferred definition of consciousness is ‘subjective experience’. It is that which you can observe but others cannot. It is that which humans can observe, but which natural selection cannot act on. (“Natural selection can hear you scream, but it cannot feel your pain.”)
I think the most important subjective experiences are feelings (and not for example thoughts, or self-introspection). For myself, I can boil the hard problem down to this question: “How can anything ever have any feelings at all?”
Feelings seem to me to be the core of the problem because:
(a) I don’t have too much difficulty imagining that some time in the future, we will have a theory which explains thoughts entirely in information-processing terms. We will be able to make thoughts just as sophisticated as our own, inside machines. I do not believe that we have a clue about how to explain or model or create feelings in purely information-processing terms. I do not believe that the brain has some special physical properties which enable it to create feelings which are somehow denied to other information-processing systems.
(b) Feelings matter for moral decisions. Which other animals have feelings? When does a foetus start to experience feelings? What about machines?
Jazzlet says
@grahamjones
I’m going to make a guess here that you don’t have a pet and that you don’t do much observation of, e.g., birds in the wild. I can tell you with confidence that cats I have known have appeared jealous, affectionate, demanding, hostile, and various other emotions. I can also say that dogs I have owned have been jealous, and have displayed modelling of another dog’s thoughts to get them to move from the valued place at my side; they have shown affection, hurt, disdain, contentment, irritation, and more. The birds in my garden also display a range of apparent emotions, including enjoyment of repeatedly teasing the dogs, and upping the ante when the dog gets used to the first taunting.
Holms says
I find the very concept of the philosopher’s zombie internally incoherent.
What does it mean to be awake, to have an internal state, to have attention to focus on things… in the absence of conscious experience? To my mind, this makes the entire argument a non-starter, but even if we took it to be conceptually sound, I think the argumentation that follows is even worse.
Why the fuck is a philosopher confusing ‘conceivable in thought’ with ‘physically possible’? I thought they were the experts at differentiating between such concepts. And yet the bad reasoning does not stop there.
If it is possible to have two people with identical everything, save only that one has a mind and the other does not, then yes some other element is needed to explain the difference in consciousness. But why would a real being, somehow perfectly identical to another being, not have the same mind function? The concept of the philosopher’s zombie was formulated with the assumption of non-material minds already baked in.
Reginald Selkirk says
I agree completely with Jazzlet @41 that animals clearly have emotions, feelings, consciousness, etc. Any argument about consciousness based on human exceptionalism is bound to fail.
I don’t like your definition above and I don’t think you can make it hold up. Imagine a fish and its ‘subjective experience.’ Maybe a particular fish is more suspicious of worms and flies floating by his favorite hole. Maybe he is naturally less trusting, or perhaps he has witnessed other fish going for the bait and suffering consequences. Can you explain why this fish’s failure to take the bait does not qualify as “subjective experience”? Or can you explain why it would not be subject to natural selection?
Grant Castillou says
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
xohjoh2n says
@41 Jazzlet:
I think you misread grahamjones’s intent there. I don’t think he’s saying “animals don’t have feelings”. I read it instead as saying: once we have our underlying theory of how consciousness (“feelings”) arises, then we will be able to determine scientifically the extent to which it arises in other animals, foetuses, machines etc., and *that will determine the moral calculus of how we ought to treat those others*, rather than just assuming and making things up as we do at the moment.
anat says
grahamjones @40: Of course evolution can act on pain. The ability to sense pain is an evolved trait that can protect animals from getting into harmful situations. And of course our capacity for certain subjective experiences is evolved. Different species of animals have different wirings of their vision systems, in ways that are adapted to their environments and lifestyles, such that each species has different sensitivity to different kinds of visual stimuli. Our emotions are also evolved, and have to do with interpreting our environment, especially (but not only) our social environment, and creating motivation. I’d say any animal that is capable of making any kind of choice and acting on it must have some kind of feelings, because we know that humans who have deficiencies (usually due to injuries) in their emotional circuits have serious disabilities in making choices (because if nothing feels like anything, there is no motivation to prefer one condition to another). Such people go through their lives by adhering to a strict routine and have difficulty responding to anything out of the usual. (This does not mean an AI that makes choices has the same kind of feelings, because AIs are not the product of the same evolutionary process that created us, and thus their motivation isn’t necessarily equivalent to ours -- but of course it may be.)
GerrardOfTitanServer says
To file thirteen
To your questions, I also don’t know. Further, I suspect that the questions are not even meaningful, because they rely on terms and concepts that are fundamentally ill-defined, resting as they do on dualist notions of consciousness and human agency.
Reginald Selkirk in 39
Yes, and? I clearly and explicitly stated I thought that the idea was false. I merely said that it’s coherent, aka epistemically possible.
Reginald Selkirk says
More productive than rehashing medieval concepts:
Researchers Discover How the Human Brain Separates, Stores, and Retrieves Memories
Reginald Selkirk says
I didn’t say I was disagreeing with you, I was taking it in a different direction. This notion of “If you can conceive of it, it must be possible” is the purest form of hogwash.
GerrardOfTitanServer says
Reginald Selkirk
Sounds good.
grahamjones says
anat @46 said
Natural selection cannot act on subjective experience. That’s just a matter of definitions. Of course the function of pain as a warning system or the function of emotions as motivators can be selected for. But we can make quite simple machines which have warning systems and we can give machines motivations.
AI researchers give motivations to their machines by doing things like:
* Supplying a problem which the machine is supposed to figure out how to solve
* Supplying examples of input and output from which the machine is supposed to learn how to respond to new inputs
* Providing a utility function (in the sense of statistical decision theory) which the machine is supposed to optimise
* Providing positive and negative reinforcements when the machine interacts with the environment in particular ways
None of this requires the machines to have feelings. Or perhaps it does. Perhaps AlphaFold has feelings. We don’t know.
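For what it’s worth, the mechanisms in that list can be sketched in a few lines of code. This is only an illustrative toy of my own invention (a two-armed bandit trained with positive reinforcement and epsilon-greedy value learning, nothing to do with AlphaFold itself): the agent ends up “preferring” one action purely through reward arithmetic, with nothing resembling a feeling anywhere in the loop.

```python
import random

def train_bandit(arm_rewards, episodes=2000, epsilon=0.1, lr=0.1, seed=0):
    """Train a trivial agent to prefer the arm with the higher reward.

    The 'motivation' here is nothing more than a scalar reward signal
    that nudges the value estimates; no feelings are involved anywhere.
    """
    rng = random.Random(seed)
    values = [0.0] * len(arm_rewards)  # estimated value of each arm
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the current best estimate,
        # occasionally explore at random
        if rng.random() < epsilon:
            arm = rng.randrange(len(arm_rewards))
        else:
            arm = max(range(len(arm_rewards)), key=lambda a: values[a])
        reward = arm_rewards[arm]()  # the 'reinforcement'
        values[arm] += lr * (reward - values[arm])
    return values

if __name__ == "__main__":
    # Arm 0 pays 1.0, arm 1 pays 0.2.
    arms = [lambda: 1.0, lambda: 0.2]
    print(train_bandit(arms))  # the agent 'prefers' arm 0, purely as arithmetic
```

Whether anything in that loop deserves the word “feeling” is exactly the question the thread is circling.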
Well, that’s a point of view that I cannot disprove. Note that bacteria are capable of making choices and acting on them.
Thanks to xohjoh2n @45 who interpreted my comment about animals correctly. Perhaps bacteria do have feelings. Perhaps dogs don’t. We have no way of telling.
John Morales says
grahamjones:
What? Are you blind?
Look: I’ve had the pleasure to have doggy companions over the decades, even if you haven’t.
They get sad, they get happy, they get afraid, they dream, they get jealous, and so forth.
Live with one long enough, it’s damn obvious. They sure get expectant and joyous.
Walkies. Eating time. Beddies. All that.
In short: you know how we can tell? By empirical means.
(We might not know the specifics of their ideations, but there they incontrovertibly are)
Now, take cats. Birds. Hell, take fish.
Them too.
John Morales says
BTW, should anyone be unfamiliar with what I think is an excellent bit of SF over this issue, there’s Blindsight.
(Maybe self-awareness is not that great, in the overall scheme of things)
—
PS avoid the sequel. ‘Tis dismal.
John Morales says
[PS as I write this, my dog is hinting. Time for his last walkie of the day, a chance for a sniff and a pee and who-knows-what.
Good dog that he is, he’s not in my face. He’s not whining or harassing at all.
He kinda just exudes hopefulness, expectation and, well, patience.
How, you no doubt wonder, could I perceive whether or not a dog has feelings?
Just lucky, I guess, that I’ve been able to share my life with doggy companions]
Reginald Selkirk says
How conveeeenient.
Consciousness is material. That’s just how I am defining things. Discussion over, I win.
Tabby Lavalamp says
Considering all the different ways different brain injuries can change a person, you’d think it would be clear by now that our brains are us.
grahamjones says
I really should have learned by now. Do not say things like “perhaps dogs don’t have feelings” when dog owners might be listening. It causes a substance to be released in their brains which all too often results in them barking at me.
Last summer I was talking to a friend about consciousness and AI and feelings, and so on, and I made a statement along the lines of “perhaps babies don’t have feelings”. I intended this as a statement about how ignorant we are about the nature of feelings, but I was a bit nervous that she would take it the wrong way. To my surprise she was quite happy with the idea and took it much further than I would. She thought it very plausible that babies do not have feelings, but only know how to behave in order to elicit the behavior that they need from us. She is a loving mother and grandmother.
I think that both babies and dogs have feelings, but I only think it, I do not know it. It seems possible that some other animals have more intense feelings than we do. It is not obvious why intensity of feeling should correlate with intelligence.
I don’t understand Reginald Selkirk @55. When I said “Natural selection cannot act on subjective experience. That’s just a matter of definitions” I was not intending to argue against a materialist position. All that matters to natural selection is morphology and behavior. An organism cannot increase its chances of survival by feeling things, unless feelings have causal effects. But that seems to be a dualist rather than a materialist position.
friedfish2718 says
A commentator writes: “…that philosophy has to leave where science enters.”
.
What the commentator and -- unfortunately -- several scientists quoted in a Scientific American article forget is that science is just another philosophy amongst many other philosophies. Before Isaac Newton, science was called Natural Philosophy. Current scientists are philosophizing about nature. Science is derived from a philosophy called Pragmatism which also gave birth to economics. What Newton added to Natural Philosophy is Calculus (math/logic) and Experimentation (pragmatism).
.
Mr Singham’s philosophy is called materialism, the doctrine that nothing exists except matter and its movements and modifications (Oxford Dictionary). Mr Singham emphasizes “according to laws of nature”. Do we have a most firm grasp of the “laws” of nature? Not yet. First, matter was considered as continuum, then matter was considered atomistic; now matter is considered as having both particle-like and wave-like attributes. Does matter have only particle and wave attributes? Science does not know yet. Matter may have particle, wave, spirit properties! The Holy Trinity!!! Panpsychism or panconsciousness or speculative realism endows all matter with a form of consciousness, energy and experience.
.
Mr Singham makes assertions he cannot prove: “It does not survive the death of my brain”.
.
Mr Singham seems exhausted, cynical (an exhausted cynic?): “There have always been such problems in science and there always will be.” I agree that there will never be an end to Physics, and I base my thought on the unreasonable usefulness of math in physics. No branch of math has been found to be useless for physics. Math is not physics. Physics is not math. Math is proven to be a never-ending endeavor via Gödel’s Theorem and Church’s Lemma. Given that current physics relies so much on math, the proposition that there will be no end to physics is understandable.
.
On the question of consciousness, why the fixation on the zombie paradigm by Mr Singham and commentators? Break down the problem into simpler parts. Non-humans have consciousness: dogs, fish, snails, insects, etc. The current common assumption is that the locus of consciousness is the brain. So, do brain-less organisms have no consciousness? At what point in evolution did consciousness emerge? I propose that consciousness appears alongside life. As life evolves, consciousness evolves. Can bacteria be empathetic? Can plasmodia commit suicide? What is the emotional range of insects?
.
There are many outstanding questions that traditional scientists cannot handle currently. Is it arrogance that leads some scientists to state that “philosophy has to leave where science enters”? Scientists are just philosophizing about nature, and it is profitable for said scientists to learn from the insights of other branches of philosophy.
.
Mr Singham’s materialism is more ideology than philosophy. Ideology puts a blindfold on Mr Singham on several debating points such as the Evolution Hypothesis and the Genesis of Life on Earth. I say “Evolution Hypothesis”, not “Evolution Theory”, since no experiment has been done (yet) to demonstrate a unicellular organism having a progeny that is an obligate multicellular organism. Do I think one can say “Evolution Theory” in the future? Yes. On the origination of life, Mr Singham cannot provide constructive intuition from his materialism bag and is blind to any intuition from other lines of thought such as Intelligent Design. Yes, creationists have glommed onto Intelligent Design. And yet Intelligent Design has arguments that can fit into the materialism framework.
.
In Math, many numbers are defined and very large. So large that it would take billions of years, even with quantum computers, to evaluate them exactly; the exact values of said numbers will be out of reach of humanity. That the algorithms to calculate these numbers have been proven correct is good enough (for math). However, for Physics, mathematical proof is not sufficient (math is not physics!); experimental proof is necessary.
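[As a concrete illustration of that point about defined-but-uncomputable numbers, one standard example (my choice, not the commenter’s) is Ackermann’s function: the algorithm below is provably correct and provably terminating, yet its outputs outrun any conceivable computation almost immediately.

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the recursion gets deep quickly

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Ackermann's function: totally defined and provably terminating,
    yet its values explode beyond any practical computation."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9 -- still tame
print(ackermann(3, 3))  # 61 -- growing fast
# ackermann(4, 2) equals 2**65536 - 3, a number of roughly 19,729 digits;
# ackermann(5, 5) could never be evaluated before the heat death of the
# universe, even though the algorithm computing it is provably correct.
```

The definition is complete and the proof of correctness is short; the evaluation is what is forever out of reach.]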
Holms says
“Panpsychism or panconsciousness or speculative realism endows all matter with a form of consciousness, energy and experience.”
Got any evidence for that?
“Mr Singham makes assertions he cannot prove: ‘It does not survive the death of my brain’.”
We have no evidence that anything survives the death of the brain, and plenty of evidence that the brain is the source of all thought, awareness etc., making Mano’s comment by far the most reasonable.
“Mr Singham seems exhausted, cynic (an exhausted cynic?): ‘There have always been such problems in science and there always will be.’ I agree that there will never be the end of Physics and I base my thought on… [verbosity]”
Your reply to Mano’s statement would have been better phrased as “I agree”.
And so on.
GerrardOfTitanServer says
He admitted to being a certain kind of creationist. Keep that in mind when engaging.
tuatara says
I know I am late to the party (quite normal for me) and guess that you lot have all moved on, but there was this recent report on the BBC --
https://www.bbc.com/future/article/20211126-why-insects-are-more-sensitive-than-they-seem
(There are links to relevant papers in the original article if you want to dig deeper -- I will not spam here.)
It seems that emotional response evolved very early, which means that we are not a special species, as if I ever needed convincing of that.
John Morales says
tuatara, a lot of people use ‘sentience’ and ‘sapience’ and ‘consciousness’ interchangeably.
‘sentience’ quite literally means to feel feelings, which can’t be done without experiencing feelings.
Like you, I become irritated by people who dismiss non-humans as incapable of abstract feelings, since ‘sapience’ (the relevant factor) is held to be unique to humans.
That is quite debatable, but of course the answer depends on whether one uses a functional or a philosophical definition. And, yes, humans are the best at it on the planet.
So far.
Anyway. Point being that arguing humanity is somehow necessary for sentience (feeling of feelings) is not a viable proposition given our external reality.
(Through a glass darkly and all that, but then Asimovian relativity of wrong)
tuatara says
John.
I have found anecdotally that “human exceptionalism” is strongest amongst creationists, expressed as a god given right to subjugate the earth to our free will.
As such, I feel, it is not a good position from which to start discussions on the sentience of other earthlings.
I just rescued a juvenile huntsman spider (from my kitchen to the harsh world outside -- did I rescue it or doom it?).
It was difficult to tell which of us was most frustrated -- me trying to coax it onto my hand, or it as it kept jumping off again as soon as I managed to start my spider liberation run outside.
One of my neighbours was recently bitten by a red-bellied black snake. His young dog had cornered it in the house. It must have been terrified as it lashed out, biting and envenomating the dog in the mouth, then biting his hand three times as he tried to separate it and the dog. Sadly the snake was killed in action (I wish people knew that all snakes are protected species here in Oz). The dog died too, and the neighbour was hospitalised overnight.
John Morales says
tuatara,
I’m not into karma, much as I appreciate the esthetic of it, but you gave it a chance from your higher perspective. As (I think) you too would wish to be given by an entity from a higher perspective with powers of determination.
(Not that much to feed from in the kitchen, but much in the world outside)
—
In passing, there’s a SF opus by Gregory Benford (Galactic Center Saga) where a powerful moment (for me) was when a protagonist character sees a bug and acknowledges the kinship of biological life in an otherwise unnatural world.
tuatara says
John.
I have not read those. Thanks for the pointer. Like many I am in need of a bit of escapism right now so will hunt them down.
sonofrojblake says
I’d draw a distinction between the following:
1. ability to move away from negative stimulus. Even primitive living things with even basic motility are often capable of this.
2. ability to *perceive* negative stimulus as “pain”. Needs, I think, a more complex nervous system and at least a brain to be processing the perceptions in -- so jellyfish are out, I think.
3. ability to develop behaviours akin to instinctive avoidance of pain-causing stimuli, without necessarily having any understanding of why. I’d put reptiles about this level.
4. ability to observe and analyse a novel situation and perceive the *possibility* of pain, and avoid it. I’d definitely put pretty much all mammals at this level, and probably some birds too. Note: wouldn’t stop me eating them if they’re tasty.
5. ability to have a theory of mind, i.e. an understanding that OTHER organisms experience pain, and make even minimal efforts to minimise that pain. Pretty sure great apes have got this. Pretty sure some humans haven’t. No idea about octopuses or dolphins.
6. ability to have an internal monologue, sense of self and others, and capability of complex introspection.
I’d say that level 5 is the first one that qualifies an organism as “sapient” (as in “homo sapiens”). “Sentient” I think can be met at level 4. Level 6 is where the hard problem comes, because there’s in principle no way to prove you’ve gone beyond level 5, other than saying “I have”, and hoping people believe you.
Jazzlet says
sonofrojblake @66
How would you define the spaniel in this incident?
I was sitting on a two-seater sofa with the German shepherd. The spaniel comes along and barks at the GSD to get her to move out of the favoured place; the GSD ignores her. The spaniel goes round to the side of the sofa on the side I am sitting, where the GSD cannot see what is happening through me and the arm of the sofa, and noisily solicits strokes from me; this causes the GSD to leap out of the favoured spot to come and see what is going on, whereupon the spaniel darts into the favoured spot and settles down, happy. This happened more than once, which did lead me to question the supposed stupidity of the spaniel and intelligence of the GSD.