This is a guest post by Joshua Stein, a doctoral student at the University of Calgary and @thephilosotroll on Twitter.
Peter Godfrey-Smith’s Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness is, like its subject, a strange animal. It is accessible to a broad and general audience; it also deals with a lot of technical literature in comparative psychology and philosophy of mind. I think the book can be deeply enjoyable for a broad lay audience, but it is even better with a little background on where Godfrey-Smith fits into the literature and what he’s saying about consciousness. I want to provide some of that background, to illustrate why this book is so interesting and to show some colleagues in philosophy and psychology why the book should be regarded as a philosophical success.
There are some things about mind and consciousness that Godfrey-Smith takes for granted. The first is that we can study and discuss consciousness as an empirical issue. Most folks are probably familiar with the claim that “we can’t study consciousness” for some reason or other. The claim comes up an awful lot, even in some philosophical literature. (The most noteworthy advocate is the disgraced Colin McGinn.) I won’t get into the objections to this position, but it is basically set aside by most philosophers.
There are two approaches to evaluating minds; one is to look directly at the nervous system and extrapolate about how it works from the internal mechanisms, while the other is to look at how the organism behaves in its environment. There’s a long history around these two approaches, often regarded as in tension; it is increasingly common, though, to use both methods in order to build a more satisfying theory. Godfrey-Smith uses both throughout the book: he’ll often discuss the ways he sees octopuses behave, and then shift to talking about mechanisms in the central and peripheral nervous system.
Godfrey-Smith uses the book as an opportunity to offer a rich, and technically sound, story about consciousness. There are two features that he discusses at length in the book, returning to them over and over, and these two features are pretty prominent in modern theories of consciousness. The first is that consciousness involves the integration of different sorts of sensory information (pp. 88-90); the second is that consciousness involves the temporal ordering of events (91-92), and allows those orderings to be made available in action.
Godfrey-Smith writes: “I see ‘consciousness’ as a mixed-up and over-used but useful term for forms of subjective experience that are unified and coherent in various ways.” (97) Unlike many contemporaries, Godfrey-Smith doesn’t offer a specific theory of consciousness; however, he does draw on existing theories and shows how they play a role in discussing consciousness in radically different minds; in the book, of course, he’s concerned with cephalopods.
There are some other features in Godfrey-Smith’s story of consciousness that make it so satisfying, but before I get to those, I think it is useful to note that the two prominent features are part of an old philosophical tradition. The Anglophone philosophers David Hume and John Locke each came up with stories about what consciousness is that involved rich experience and temporal ordering, respectively.
For Hume, consciousness was about the vivid and integrated character of experience; an auditory experience isn’t a two-dimensional thing. It has pitch and timbre and tone, and there’s noise that has to be filtered out. Part of what it is to have a conscious experience of a piece of music is to experience the different dimensions of that piece, all laced together into a multidimensional sensory experience.
For Locke, consciousness was about the autobiographical constitution of identity; people are continuous over time and have a unified psychological story that extends back into their pasts, and includes certain features of possible futures. This gives us something like the temporal ordering feature.
Godfrey-Smith isn’t committed to any one such view being decisive. Rather, he’s open to the possibility that both of these things are true of and involved in facilitating consciousness. His story instead illustrates that many of the inherited theories (now far more technical and closely aligned with certain findings in neuroscience and cognitive psychology) are mutually reinforcing in valuable ways.
Because Godfrey-Smith isn’t committed to a particular theory of consciousness, he’s open to pointing out how different theories illustrate different features of consciousness. One instance, present from the very beginning of the book, is the role of attention; an organism that attends to a feature of its environment for a period of time illustrates both features of consciousness (because it perceives the feature over time and integrates information about changes in that feature). He notes that this is common with octopuses, who see and attend to him when he is diving to watch them; it comes up regularly in his anecdotes.
Initially, I wondered whether Godfrey-Smith had considered that attention is instrumental in a popular theory of consciousness (actually, the one I more or less subscribe to). He invokes things that look curiously like classic joint-attention tasks (57-58), only performed by octopuses instead of children or chimpanzees; the giveaway that he has taken this into consideration is his invocation of Jesse Prinz (91), whose 2012 book The Conscious Brain articulates and explores the attentional theory of consciousness.
Another feature is embodiment. While Godfrey-Smith expresses skepticism of a certain view of embodiment (74-75), he also gives a lot of the stock arguments for why embodied cognition is so important. For example, cuttlefish can’t process color visually (due to a lack of individuation in light receptors used for color vision) but still respond to differences in color in their environment through features in the skin; a version of this approach (though for object-vision and not color) has been used to develop vision substitutes for the blind. (80-81) Even as a skeptic about certain strong views of embodiment, Godfrey-Smith shows how many theories that focus on embodiment as something that shapes conscious experience get certain bits right.
I could go on with the various different features that Godfrey-Smith picks up and illustrates, but at that point I would risk summarizing a huge portion of the observations he makes in the book; it’s worth reading for yourself to see how these different elements fit together and provide a broad and interesting theory of consciousness.
The last point I want to make, which is of special interest to readers here familiar with PZ’s various criticisms of evolutionary psychology, is something that I think Godfrey-Smith does particularly well.
One way of criticizing a lot of the literature in evolutionary psychology is that it puts together specious “just-so stories” about how certain features of the brain (and therefore the mind) evolved, in ways that are insufficiently responsive to things like the environment, interaction with conspecifics, and other features that we know (from developmental psychology) play a huge role in how any particular member of a species develops.
Claims about the evolutionary history of a particular behavior, for example how selection pressures influenced the development of tools and their prospective role in reproductive success (just google “sexy handaxe theory” if you’re wondering what I’m talking about), are difficult to evaluate, for both philosophical and scientific reasons. Godfrey-Smith’s constant focus on the contemporary role of the various functions for octopuses (for example, the way that attention helps them interact with their environment, or the way that peculiarities in the peripheral nervous system help in hunting) makes the story much more plausible, and much easier to evaluate, than one focused on buried secrets.
Owlmirror says
I thought there was a hypothesis that cuttlefish were using chromatic aberration to see (some) color.
Stubbs AL, Stubbs CW (2016) Spectral discrimination in color blind animals via chromatic aberration and pupil shape. Proc Natl Acad Sci U S A, Jul 5.
Although there doesn’t seem to be much positive followup on that, going by cited works.
willj says
I see many potential correlates of consciousness, all objective, but nothing explaining subjective experience – unless perhaps you agree with Dennett et al that it’s an illusion. It’s not easy to explain subjective via objective means. Scientists and philosophers have been trying for hundreds of years.
philosotroll says
@owlmirror, it seems totally plausible that there are alternative ways of explaining cuttlefish responses to color; my point there was that it was a bit weird for Godfrey-Smith to appeal to embodiment to explain that feature immediately after pooh-poohing embodied cognition explanations, generally.
@willj, I suppose that some of this turns on what you think will count as an explanation of subjective experience or phenomenal consciousness. On my view, if you can explain the various constitutive features that go into understanding subjective consciousness (in the train of Hume, and later the French phenomenologists like Merleau-Ponty), then you wind up with a pretty satisfying theory; this is what I think Godfrey-Smith (insofar as he follows Prinz, Tononi, and Baars) does pretty well. If you have a really high bar, like Chalmers or McGinn, then it’s not likely to be satisfying.
madtom1999 says
Developments in AI make me think that consciousness is a product of, well, animation. In order for an animal to move, it needs a self-model to practice with, in order to work out what options it has and which ones will benefit it most.
You’d have to read up on AI to see what I mean, but it’s nothing clever; it’s a necessary emergence of any autonomous control system: you need to know what you have, how it works in different environments, and you need the ability to run simulations of yourself. Evolution has led us to have a consciousness that actually includes models of societies that we can educate and experiment with.
It’s quite amazingly simple when you look at it from the right angle.
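The self-model idea above can be sketched in a few lines of Python. This is only a toy for illustration; every name in it (simulate, choose_action, the one-dimensional "position") is invented here, not drawn from any AI library or from the book:

```python
# Toy illustration (all names invented): an agent that runs each candidate
# action through an internal model of itself before committing to one.

def simulate(position, action):
    """Self-model: predict where an action would leave us, without moving."""
    return position + action

def choose_action(position, goal, actions):
    """'Practice' each option on the self-model; pick the most beneficial."""
    return min(actions, key=lambda a: abs(goal - simulate(position, a)))

best = choose_action(position=0, goal=3, actions=[-1, 0, 1, 2])
print(best)  # prints 2: the option whose predicted outcome lands nearest the goal
```

The point of the sketch is just the structure: the options are evaluated against an internal copy of the agent's situation, not by acting in the world and seeing what happens.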
Brian Pansky says
A fellow Calgarian! (Ya I saw your other post too, but I never got around to saying anything)
@4, madtom1999
Yup.
I remember noticing in my mechatronics class that the simple diagram of a feedback loop controller is a lot like our nervous system.
philosotroll says
@madtom1999, this isn’t an implausible theory, and functionalists (in the tradition of William James) have been saying things very much like this for a long time. Godfrey-Smith makes this point, and I think quite well. It certainly explains what the evolutionary impetus (selection in favor of these developments) for consciousness and the various interrelated features was and is. However, it really only picks up on one dimension of the question.
That is: It explains what consciousness does, its behavioral role, but not the various technical reasons of how it happens in individual cases and how it has the various odd properties that it does. I think that view is basically right, but it is also an incomplete story, and one of the things I like about Godfrey-Smith is that he does give a more complete story, including this functionalist bit.
KG says
willj@2,
I don’t read Dennett as saying consciousness (or subjective experience) is an illusion, but that some of the properties we tend to assign to it (summed up in what he calls in Consciousness Explained the “Cartesian Theatre” picture of consciousness) are illusory, and that it’s not an all-or-nothing phenomenon – see here for a recent interview with him. I’m inclined to ask you whether you believe in the possibility of what he calls “zimboes” – entities which show all the same behaviours as real people – including the signs of pain, pleasure, joy, fear, anger, pity, etc. in appropriate contexts, claiming to have conscious experience and to be conscious, expressing puzzlement about how consciousness can arise, etc. – but in fact having no consciousness or subjective experience?
paulparnell says
I dunno, but it seems he is making the same mistake that all the others are making. He describes the operation of an algorithm and then confuses the fact that he experiences the operation with an explanation of consciousness. The algorithm does not need consciousness in order to work, nor does assuming consciousness help to understand the function of the algorithm.
willj says
@philosotroll
Hmm, Tononi’s information structures seem completely objective to me, although both he and Koch seem to be headed toward some form of panpsychism for phenomenal consciousness. Koch surprised me, since he used to be such an arch-physicalist. McGinn I don’t like. Chalmers is good at defining the consciousness problem, but he’s a dualist. Lately he’s been trying to eliminate the interaction problem by resurrecting a kind of von Neumann QM (with McQueen?) where consciousness causes collapse. According to them, consciousness abhors a superposition and is never in one itself. Not too sure about all that.
@KG
I can believe in the possibility of zimboes or zombies (especially computer programs or machines), but I don’t think either one would fool anyone for long. Zimboes might fare a little better, but I don’t see how they could reliably read any fiction, or have a theory of mind that could truly make sense of the inner life of the characters. Nor could they write fiction, even bad fiction.
consciousness razor says
willj:
I don’t understand. What exactly makes you think you have it and they don’t? What if you can’t truly make sense of the inner life of fictional characters, etc., and you merely think you can and comfort yourself with that thought (just as they would)? Sometimes I can’t even make sense of myself … is that a bad sign?
As KG said, they (I prefer the term “p-zombies”) display all of the physical evidence of consciousness as we do. The hypothesis is that they do write fiction (of whatever caliber), and they do everything else like us, including being confused/puzzled about how they are (apparently?) conscious. However, because mere physics supposedly isn’t sufficient for that (so the dualist story goes), it’s stipulated, based on no evidence whatsoever, that in fact they aren’t conscious. Maybe it wasn’t clear what you were signing up for…. But if you were clear about it, then you wouldn’t be saying they’re not capable of the same things as we are. The real worry here is that, by this reasoning, you don’t have anything to make the suggestion that in fact we are conscious, despite all of the evidence to the contrary, because any of the things you could consider are exactly the kinds of phenomena that the natural sciences can attempt to study and explain.
And besides, I really don’t know how you could derive the conclusion that these hypothetical entities can’t do this or that…. Wouldn’t that be an empirical question? You couldn’t have had evidence to support it, and if you just pulled it out of a convenient orifice, that doesn’t give me any confidence that we ought to take such claims seriously.
John Morales says
consciousness razor makes a good point — the claim of hidden essence that’s undetectable is essentially the same argument Catholics make about the Transubstantiation.
philosotroll says
@paulparnell, I don’t have time to delve into the various Cartesian issues implicit in the comment, but I think one should worry less about this in the case of Godfrey-Smith, especially relative to contemporaries like Dennett and Fodor. His theory isn’t nominally computational, and if you want something that gives the “thinking about operations” feature, he does acknowledge higher-order thought theories in the discussion.
@willj, I agree that Tononi’s view does wind up looking very objective, on the surface. However, I think that when one starts to look at the ways that the information is integrated, you get something that looks a lot more Humean than one might initially expect. I think Prinz teases this out really well in The Conscious Brain (though he doesn’t think that information integration provides an adequate theory), albeit in much more technical terms.
I will say that granting that p-zombies are possible while denying that they would fool people for very long cuts against the spirit of the thought experiment. The p-zombies are supposed to be behaviorally indistinguishable; why not just deny that “behaviorally indistinguishable” p-zombies are possible, because phenomenal consciousness is behaviorally salient? (@consciousness razor points this out well.)
I think Chalmers’ views on consciousness make for some really interesting metaphysics, but wind up with some really weird and empirically indefensible philosophy of mind. But that’s a separate set of issues I can address if he writes another book on mind (which I don’t think he will).
Brian Pansky says
willj, you’re mixed up. The question isn’t “how long could they fool people?”. The thought experiment proposes that they would be able to convince everyone forever, because they would be behaviorally indistinguishable from regular people. But somehow the zombies wouldn’t have any subjective experience. The question is: is that possible?
I think it depends on what is inside them, making it work (it appears you do too, so hopefully you’ll agree with me). Obviously you think if something has the insides you do, it is like you.
If inside, there was just a recording that took no inputs from the outside world and just outputted muscle movements etc, this would not be conscious or experience anything. It would be no different from any tape recording (you could also compare it to cartoons on a screen, which do fool our social perceptions). In order for this recording to fool anyone interacting with it, the creator of it would need precise knowledge of the future, so this is incredibly difficult to accomplish!!! Probably even literally impossible for any significant length of time. By the way, this would be an example of a mechanism using “feed-forward control”.
But what about feedback control? As I said before: the human nervous system fits the model of a feedback control. There’s inputs, memory and learning, pattern recognition, connection between related things, comparisons, and selection of outputs based on goals. Something like that. And it all happens in real-time, it’s not merely a recording set up beforehand.
I think intuition can still fail here. Because everything physical is ultimately more like unintelligent feed-forward control when you look at the small parts. So the difficulty for the untrained imagination is figuring out how one becomes the other when you build it (and indeed, how it could become a seemingly whole and undivided experience like ours). Imagination and intuition can also fail to understand that an artificial intelligence wouldn’t be like a recording, it would have to use feedback (and thus goals and comparisons) in an extremely detailed fashion. Pattern recognition, semantic relations/connections, lots of stuff. Everything would have to come together to make that coordinated wholeness of intelligent behavior.
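The feed-forward/feedback contrast above can be made concrete with a minimal sketch. This is purely a toy (a proportional controller nudging one number toward a goal); the World class and the gain value are invented for illustration, not a model of any real nervous system:

```python
class World:
    """Toy one-dimensional environment: a single state the agent can nudge."""
    def __init__(self, state=0.0):
        self.state = state
    def sense(self):
        return self.state
    def apply(self, action):
        self.state += action

def feed_forward(world, recorded_actions):
    """Replay a fixed recording; takes no input from the world at all."""
    for action in recorded_actions:
        world.apply(action)

def feedback(world, goal, steps=20, gain=0.5):
    """Sense, compare against a goal, correct: a proportional controller."""
    for _ in range(steps):
        error = goal - world.sense()   # real-time comparison with the goal
        world.apply(gain * error)      # output selected by the comparison

w = World()
feedback(w, goal=10.0)
print(round(w.state, 3))  # prints 10.0: the loop converges on the goal
```

The recording only works if whoever made it foresaw every disturbance in advance; the feedback loop needs no such foresight, which is the point made above about why a pure recording is such an implausible way to build a zombie.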
Some good further reading:
http://holtz.org/Library/Philosophy/Epistemology/Zombie%20conceivability%20-%20Cottrell.htm
willj says
philosotroll@
Chalmers claims that aspects of his theory are testable. He does think it’s a long shot.
conscious razor@
I assert that I am conscious, and I don’t require that you take me seriously. Consciousness is the single most important fact of my life, and the only reality I’ve ever known. I experience all kinds of things; qualia are everywhere. And yet you appear in my consciousness and doubt its existence.
I don’t think a zombie or a p-zombie could read a novel and act like it understood it because without subjective experience they would have no basis or subjective knowledge for making sense out of the qualia or inner life in the characters, which most writers quite liberally include.
Brian Pansky@
You’re right. I should have said that I don’t believe zombies are actually possible. They can have no deep knowledge about qualia or subjective experience, and that’s bound to show up in their behaviour sooner or later.
consciousness razor says
willj:
It’s certainly true that you’re conscious. Nobody would have any good reason to deny that. What’s being denied is instead that you can characterize it independently of physical events happening in the world, which affect what kind of empirical evidence you have. The trouble is that you can’t coherently characterize p-zombies as lacking consciousness, by using some criteria independent of the empirical evidence which (by hypothesis) suggests they are conscious.
In other words, unless dualism is true, there isn’t any special non-physical sauce that you can slather on people,* which distinguishes them as conscious entities. That implies, although of course we don’t understand how to do it now, that it does all reduce to physics. So, it could in principle be explained as such, meaning that your concern expressed in #2 is not a principled one but a practical one that remains open.
*On people, as opposed to, for example, “computer programs or machines,” as your parenthetical remark suggested in #9. But for that matter, it could be any other type of entity. You could dream up p-zombies, aliens, or much more mundane examples like non-human animals. Whether they’re conscious or not, they don’t lack our special sauce, because we don’t have a special sauce to begin with.
That’s precisely what I’m not doing. Read through the comments again, more carefully this time. You are conscious — I have no doubts about that, none whatsoever. Why do I think you are conscious? Because I can easily infer it from the physical evidence. Why do I have that evidence and not any other sort? Because that’s all it consists of in reality.
You’re presupposing that they could lack subjective experience, but you’re talking about an entity which does display all of the physical evidence of having subjective experience, just like you and I do. What could there possibly be about them which implies they lack something that we have? The only way to make sense of that is with a dualistic theory, so as soon as that’s rejected, all of this bullshit goes out the window.
John Morales says
willj above:
Ahem. #12 clearly rebuts this contention, and you have not addressed the rebuttal.
(Are you thinking of the Voight-Kampff test in Blade Runner?)
John Morales says
PS https://www.youtube.com/watch?v=Umc9ezAyJv0 for the snippet
willj says
(sorry for not responding, I’ve been sick)
consciousness_razor@
Ok, so maybe you’re an identity theorist: mind and brain are identical. The standard argument against it is that things are only identical if they have exactly the same properties, but physical and mental things have completely different properties, even though they’re correlated. When scientists show exactly how the physical can give rise to the mental, then maybe I’ll take that point of view. But they’ve been trying for hundreds of years. And if you go to some kind of epiphenomenalism or emergence, well, that’s kinda like someone claiming that invisible fairies emerge from their cell phone. They can see the fairies that no one else can. But from an objective standpoint, they would be saying that something unobservable emerges from something observable, a claim impossible to prove and not true of any other emergent properties.
“In other words, unless dualism is true, there isn’t any special non-physical sauce that you can slather on people”
Well, a significant number of people are also moving toward panpsychism, a kind of special sauce. I don’t really like it. The combination problem may even be harder than the hard problem. I’m actually undecided. There’s not enough evidence for anything yet.
“You’re presupposing that they could lack subjective experience”
Then you’re violating the definition of a zombie, as pointed out by #12 and #13. Zombies are just like us, but lack subjective experience. Computer programs and machines would also not qualify as zombies. Even with your conscious zombies, a zombie would be impossible, and just like us.
My subjective-knowledge arguments are similar to a more complex version of the Mary’s Room argument, but actually better, since Mary was conscious and zombies are not.
John Morales@
Well yes, that’s what I’ve done. I believe that zombies are not possible, and I’ve given the reasons why. If Chalmers believes they’re possible, then I think he’s wrong. Yes, I think the Voight-Kampff test is a good test for subjective knowledge in androids and perfect fakery in zombies.
KG says
willj@18,
“When scientists show exactly how the physical can give rise to the mental, then maybe I’ll take that point of view. But they’ve been trying for hundreds of years.”

But until very recently, they’ve effectively been doing so blind – without the tools to either look at what goes on in the brain in any detail, or to model it computationally (rapid progress in both areas continues).
“But from an objective standpoint, they would be saying that something unobservable emerges from something observable”

Not at all. What consciousness razor (and I) are saying is that mentality and consciousness are observable.
KG says
Further to #19,
Indeed, you actually agree with us that consciousness is observable, when you’re not confusing yourself:

“I don’t think a zombie or a p-zombie could read a novel and act like it understood it because without subjective experience they would have no basis or subjective knowledge for making sense out of the qualia or inner life in the characters, which most writers quite liberally include.”
willj says
KG@
“When scientists show exactly how the physical can give rise to the mental, then maybe I’ll take that point of view. But they’ve been trying for hundreds of years.”
“But until very recently, they’ve effectively been doing so blind – without the tools to either look at what goes on in the brain in any detail, or to model it computationally (rapid progress in both areas continues)”
Yes, it’s called “promissory materialism”. I doubt the hard problem is solvable via purely physical means. Everything in science is objective. Physical processes, no matter how complex, have objective properties, and that’s all. You can’t throw a bunch of objective matter, forces, and fields together and declare that the invisible subjective magically emerges. Well… you can, it just won’t do any good. There’s no way to get there from here. Subjective properties are just not in the language of our basic physics, and I don’t see that changing any time soon.
There are also more subtle problems, like time perception. In physics there is no such thing as the present. A real physical Now would be a serious problem in many areas of physics, so physicists have deemed it subjective. But a moving Now is intimately associated with consciousness. We use the word Now every day. The problem then becomes: how can the brain generate the perception of a preferred moving Now from a static timeline? Wouldn’t it be static too? Even putting the hard problem aside, to my knowledge no one has even begun to explain this problem.
“But from an objective standpoint, they would be saying that something unobservable emerges from something observable”
“Not at all. What consciousness razor (and I) are saying is that mentality and consciousness are observable”
“Indeed, you actually agree with us that consciousness is observable, when you’re not confusing yourself: ‘I don’t think a zombie or a p-zombie could read a novel and act like it understood it because without subjective experience they would have no basis or subjective knowledge for making sense out of the qualia or inner life in the characters, which most writers quite liberally include’”
Sorry you’re having trouble understanding it, but that’s not true observation. You’d be making an inference about someone else’s subjective experience that you can’t (even in theory) back up by cutting open their head and looking for it. Note that you can do it with electronics and software: with the right equipment you can trace the electrons. And yet some say that consciousness is like software running on the brain’s hardware. Not quite.
John Morales says
willj, you are now contradicting yourself in your attempt to rebut KG.
This after you just quoted yourself being quoted thus (my emphasis): “I don’t think a zombie or a p-zombie could read a novel and act like it understood it [because blah]”.
Do you not see how you’ve made the claim that you could infer the presence or absence of someone else’s subjective experience by observing their actions?
John Morales says
[crickets chirping]