The story of evolution-1: The power of natural selection

We are rapidly approaching 2009, a year that marks two major scientific milestones that are going to be commemorated worldwide: the 150th anniversary of the publication of the landmark book On the Origin of Species, which outlined the theory of evolution by natural selection, and the 200th anniversary of the birth of its author, Charles Darwin.

Darwin’s theory represents arguably one of the most profound scientific advances of all time, if not the most profound, ranking alongside the scientific revolutions associated with the names of Copernicus, Newton, and Einstein. And yet it is widely misunderstood, or more accurately under-understood, because most discussions of it remain at too high a level of generality, enabling critics to make statements about the theory that are not valid yet seem plausible.

In order to create a better awareness of what the theory involves, today I will begin an occasional series of posts that looks at the details of the theory, including the mathematics that underlies it, which was developed later by people like J. B. S. Haldane, Sewall Wright, and R. A. Fisher.
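As a small taste of the kind of mathematics the series will get into, here is a minimal sketch of my own (an illustration, not something taken from the post itself): the simplest deterministic one-locus selection model, in which an allele with a small fitness advantage s changes frequency each generation as p' = p(1+s)/(1+ps). The starting frequency and the value of s below are assumed round numbers chosen only to show the effect.

```python
# A minimal sketch (illustrative only) of the simplest population-genetics
# selection model: an allele with fitness advantage s rises in frequency as
#     p' = p(1+s) / (1 + p*s)
# The values s = 0.01 and p0 = 0.001 are assumptions chosen for illustration.

def allele_frequency_over_time(p0=0.001, s=0.01, generations=2000):
    """Return a list of allele frequencies, one entry per generation."""
    freqs = [p0]
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)   # deterministic selection update
        freqs.append(p)
    return freqs

if __name__ == "__main__":
    freqs = allele_frequency_over_time()
    for gen in (0, 500, 1000, 1500, 2000):
        print(f"generation {gen:5d}: frequency = {freqs[gen]:.3f}")
    # Even a 1% advantage carries a rare allele (0.1%) most of the way to
    # fixation within a couple of thousand generations.
```

The point of the toy calculation is simply that tiny selective advantages, compounded over many generations, produce large changes, which is the sense in which natural selection is "powerful."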
[Read more…]

Highway merging and the theory of evolution

Some time ago, I wrote about the best way for traffic to merge on a highway, say when a lane is closed up ahead. There are those drivers who begin to merge as soon as the signs warning of the impending closure appear, thus leaving the closing lane clear. Others take advantage of this open lane to drive fast right up to the merge point and then try to squeeze into the other lane.

I said that although people who followed the latter strategy were looked upon disapprovingly as queue jumpers, it seemed to me that the most efficient way to optimize traffic flow was to follow the lead of these seemingly anti-social drivers and stay in the closing lane until the last moment, since that minimizes the length of road that is effectively restricted. To merge earlier is, in effect, to make the restricted portion longer.
[Read more…]

Asking the wrong questions about science history

In his influential book The Structure of Scientific Revolutions, Thomas Kuhn points out that the kinds of questions we often ask about the history of science, questions we think are simple and have been adequately answered (such as “Who discovered oxygen, and when?” or “Who discovered X-rays, and when?”), turn out on close examination to be extremely difficult, if not impossible, to answer.

It is not that there are no answers given in authoritative sources. It is that when we actually examine the historical record, the situation turns out to be very murky, giving rise to the strong suspicion that such questions are the wrong ones to ask about the scientific enterprise. The simple answers given to such questions represent a rewriting of history that provides readers with a tidy narrative, but at the cost of a distorted sense of how science is done, as if scientific discoveries were clear and decisive events. I remember being very impressed by the examples Kuhn gives to support his thesis when I first read his book, and subsequent readings of science history have convinced me that he is right.

For example, the latest issue of the American Physical Society’s newsletter APS News (vol. 16, no. 5, May 2007, p. 2) has an account of the discovery of the neutron. (The article is here, but the current issue is password protected and non-APS members will have to wait a month before it is archived and given open access.) The title says “May 1932: Chadwick reports the discovery of the neutron” and recounts the story, familiar to physicists anyway, of how James Chadwick made the famous discovery 75 years ago this month, a discovery for which he received the Nobel Prize in 1935.

As the article proceeds to describe the history of the process, it becomes clear that its own story contradicts the impression given in the title.

As early as the 1920s, people had suspected that there was something in the atom’s nucleus other than protons. Some thought these additional particles were made up of an electrically neutral combination of the already known proton and electron, but no one could confirm this. Experiments nevertheless went on trying to isolate and identify the particle, and around 1930 two scientists, Bothe and Becker, found radiation coming from a beryllium target that had been bombarded with alpha particles. They thought that this radiation consisted of high-energy photons. Other experiments done by Frédéric and Irène Joliot-Curie also found similar radiation, which they too attributed to high-energy photons.

Chadwick thought that this explanation didn’t quite fit, did his own experiments, and concluded that the radiation was caused by a new neutral particle slightly heavier than the proton. He called it the neutron. He published a paper in February 1932 in which he suggested this possibility, and then in May 1932 submitted another paper in which he was more definite. It is this second paper that gives him the claim to be the discoverer.

But as with all major scientific discoveries, acceptance of the new idea within the community was not immediate, and it took until around 1934 for a consensus to emerge that the neutron was indeed a new fundamental particle.

So who “discovered” the neutron, and when? Was it the people who concluded, well before 1932, that there was something else in the nucleus other than protons? They were right, after all. Was it Bothe and Becker, or the Joliot-Curies, who first succeeded in isolating this particle by knocking neutrons out of materials? They had, after all, “seen” isolated neutrons even if they had not identified them as such. Or do we give the honor to Chadwick for first providing a plausible claim that the particle was a neutron?

As to when the neutron was discovered, that too is hard to say. Was it when its existence was first suspected in the early 1920s? Or when it was first isolated experimentally around 1930? If we say that since the title of discoverer was awarded to Chadwick, the date of discovery has to be assigned to something he specifically did, when exactly did he realize that he had discovered the neutron? In his first preliminary paper in February 1932? Or in his more definite paper in May? Clearly he must have known what he knew before he submitted (or wrote) the papers.

All we know for sure is that sometime between 1930 and 1934 the neutron was “discovered” and that certain scientists played key roles in that process. For historical conciseness, we give the honor to Chadwick and fix the date as May 1932, and the judgment is not an unreasonable one, as long as we insist on demanding that such events have a definite date and author. But it is good to be reminded that all such assignments of time and place and people for scientific discoveries mask a much more complex process, in which “discoveries” unfold over extended periods of time and involve large numbers of people, with understanding increasing incrementally. There is often no clear before-and-after split.

The detailed stories are almost always more fascinating than the truncated histories we are taught.

The nature of consciousness

In the model of Cartesian dualism, we think of the mind as a non-material entity that interacts in some way with the material brain and body. Descartes thought that the locus of this interaction lay in the pineal gland in the brain, but that specific idea has long since been discarded.

But that still leaves the more fundamental idea, now referred to as Cartesian dualism, that I do have a mind that represents the essential ‘me’, one that uses my material body to receive experiences via my senses, stores them in my memory, and orders actions that get executed by my body. This idea that there is an inner me is very powerful because it seems to correspond so intuitively with our everyday experience and the awareness that we have of our own bodies and the way we interact with our environment. Even the way we use language is intricately bound up with the idea that there exists some essence of ourselves, as can be seen by the way the words ‘we’ and ‘our’ were used in this and the previous sentences. The power of this intuitive idea of something or someone inside us controlling things has resulted in phrases like ‘the ghost in the machine’ or a ‘homunculus’ (from the Latin for ‘little man’) to describe the phenomenon.

For religious people, the mind is further mixed up with ideas of the soul and thus gains additional properties. The soul is considered to be non-material and able to exist independently of the body, allowing for the possibility of an afterlife even after the body has ceased to exist. This soul model causes some problems that resist easy answers. For example, life begins with the creation of a single fertilized egg. This fertilized cell (called a zygote) then starts to multiply to 2, 4, 8, 16, 32, … cells and so on. All these cells are material things. At what stage along this progression did a non-material entity like the soul appear and attach itself to the collection of cells?

I think it is safe to say that almost all cognitive scientists reject the idea of a non-material mind, some kind of homunculus inside the brain somewhere that ‘runs’ us. This immediately rules out the religious idea of a non-material soul, at least in any traditional sense in which the word is used.

But even though the existence of a non-material mind or soul has been ruled out, the Cartesian dualistic model is still a seductive idea that can tempt even those who reject any religious ideas and accept a framework in which the material body (and brain) is all there is. The reason it is so seductive is that even if we discard the mind/body distinction as being based on a nonmaterial/material splitting, the idea of a central processing agent still seems intuitively obvious.

Consider a situation where I am responding to something in my environment. We know that we experience the external world through our five senses (sight, sound, smell, touch, taste) and that these senses are triggered by material objects coming into contact with the appropriate sense organs (eyes, ears, nose, skin, tongue) and exciting the nerve endings located in those organs. These excitations are then transmitted along the nervous system to the part of our brains called the sensory cortex, after which they… what?

At this point, things get a bit murky. Clearly these signals enter and proceed through our brain and excite the neural networks so that our brain becomes ‘aware’ of the phenomena we experienced, but the problematic issue is what exactly constitutes ‘awareness.’

Suppose for the moment we stop trying to understand the incoming process and switch to the outgoing process. It seems that we have the ability to make conscious and unconscious decisions (pick up a cup, or shake our head) and that the brain’s neural networks send these signals to the part of the brain known as the motor cortex, which transmits them to the appropriate part of the nervous system, which in turn sends the signal to the body part that executes the action by contracting muscles.

It seems reasonable to assume that, in between the end of the incoming pathway and the start of the outgoing pathway that I have described, there is some central part of the brain, a sort of command unit, that acts as a kind of clearing house where the incoming signals get registered, processed, and stored in memory for later recall, older memories and responses get activated, theories are created, plans are made, and finally decisions for action are initiated.

As a metaphor for this command unit, we can imagine a highly sophisticated kind of home theater inside our brain, where a screen displays what we see, speakers provide the sound, other devices supply the sensations of smell, touch, and taste, and banks of powerful computers store and retrieve memories and transmit orders for action. ‘Conscious events’ are those that are projected onto this screen along with the accessory phenomena.

Daniel Dennett in his book Consciousness Explained (1991) calls this model the Cartesian Theater and warns against falling prey to its seductive plausibility. Accepting it, he points out, means that we are implicitly accepting the idea of a homunculus, or ghost in the machine, who is the occupant of this theater in the brain, the inner person, the ‘real me’; what that inner person experiences is sometimes referred to as the ‘mind’s eye.’ One problem is that this approach leads to an infinite regress as we try to understand how the Cartesian Theater itself works.

But if this simple and attractive model of consciousness is not true, then what is? This is where things get a little (actually a whole lot) complicated. It is clear that it is easier to describe what cognitive scientists think consciousness is not than what they think it is.

More to come…

Does science destroy life’s mysteries?

One of the reasons that elite science and elite religion are now coming into conflict is that science is now addressing questions that once were considered purely philosophical. By ‘purely philosophical’ I mean questions that are serious and deep but for which answers are sought in terms of logic and reason and thought experiments, with the only data used being those that lie easily at hand or appeals to common everyday experience.

The difference with science is that it does not stop there but instead uses those things as just starting points for more esoteric investigations. It takes those initial ideas and converts them into research programs in which the consequences of the ideas are deduced for well-defined situations that can be examined experimentally, and tentative hypotheses can be tested.

Daniel Dennett in his book Consciousness Explained (1991) talks (p. 21) about how science tackles what he calls ‘mysteries’:

A mystery is a phenomenon that people don’t know how to think about – yet. There have been other great mysteries: the mystery of the origin of the universe, the mystery of life and reproduction, the mystery of the design to be found in nature, the mysteries of time, space, and gravity. These were not just areas of scientific ignorance but of utter bafflement and wonder. We do not yet have the final answers to any of the questions of cosmology and particle physics, molecular genetics and evolutionary theory, but we do know how to think about them. The mysteries haven’t vanished, but they have been tamed. They no longer overwhelm our efforts to think about the phenomena, because now we know how to tell the misbegotten questions from the right questions, and even if we turn out to be dead wrong about some of the currently accepted answers, we know how to go about looking for better answers.

That passage, I think, captures well what happens when something enters the world of science. The mystery gets tamed and becomes a problem to be solved.

The charge that people sometimes make against science is that it seems to take away all the awe and mystery of life’s wonders by ‘explaining’ them. I have never quite understood that criticism. If anything, my sense of awe is enhanced by having a better understanding of phenomena. For example, I have always enjoyed seeing rainbows. Has my enjoyment become less now because I happen to know how the refraction and reflection of light in individual droplets of water produce the effect?

As another example, I recently listened to a magnificent concert of the Cleveland Orchestra playing Tchaikovsky’s Piano Concerto No. 1. It was a truly moving experience. Was my sense of awe at the brilliance of the composition and its execution diminished by my knowledge that the orchestra players were using their instruments to make the air around them vibrate, that those vibrations then entered my ear and were converted into nerve signals that entered my brain, and that my brain was then able to Fourier analyze those signals and reconstruct the rich orchestral ‘sounds’ that triggered the chemical reactions resulting in my sense of emotional satisfaction? I don’t think so. I kind of like the fact that I can enjoy the experience on so many levels, from the purely experiential to the emotional and the cerebral. In fact, for me the truly awe-inspiring thing is that we have reached such depths of understanding of something that would have seemed so mysterious just a few hundred years ago.

The taming of mysteries and their conversion into planned research programs is now progressing rapidly in the areas of cognition and consciousness. The reason this causes conflict is that such close examination can result in the philosophical justifications for religion being undermined.

For example, the existence of god is predicated on a belief in a Cartesian dualism. God is ‘out there’ somewhere separate from my body while ‘I’ am here encapsulated by my body, and there is some gateway that enables that boundary to be crossed so that ‘I’ can sense god. For many religious people, this contact between the ‘I’ and god is a deep mystery.

In some sense, Descartes started taming this mystery by postulating that the contact gateway lay in the pineal gland in the brain, but he could not explain how the interaction between the non-material god and the material brain occurred. Of course, no one takes the special role of the pineal gland seriously anymore. But the basic problem of Cartesian dualism remains for both religious and non-religious people, in the form of understanding the mind-brain split. What is the ‘I’ of the mind that makes decisions and initiates actions and seems to control my life? Does it exist as a non-material entity apart from the material brain? If so, how does it interact with the brain, since the brain, being the place where our sensory system stores its information, is the source of our experiences and the generator of our actions?

Religious people extend this idea further and tend to think of the mind as somehow synonymous with the ‘soul’ and as a non-material entity that is separate from the body though occupying a space somewhere in the brain, or at least the body. It is the mind/soul that is the ‘I’ that interacts with a non-material god. So the mind/soul is the ‘real’ me that passes on to the next life after death and the body is just the temporary vehicle that ‘I’ use to interact with the material world.

Religious people tend to leave things there and suggest that the nature of the mind/soul and how it interacts with both the material world (including the body that encapsulates it) and god is a mystery, maybe even the most fundamental mystery of all, never to be understood. And for a long time, even scientists would have conceded that we had no idea how to even begin to address these questions.

But no longer. The cognitive scientists have tamed even this mystery and converted it into a problem. This does not mean that the problem of understanding the mind and consciousness has been solved. Far from it. But it does mean that scientists are now able to pose questions about the brain and consciousness in very concrete ways and suggest experiments to further advance knowledge. Although they do not have answers yet, one should be prepared for major advances in knowledge in this area.

And as these results start to come in, the prospects for maintaining beliefs in god and religion are not good. Because if history is any guide, the transition is always one way, from mystery to problem, and not the other way around. And once scientists see something as a problem to be solved, they tend to be tenacious in developing better and better theories and tools for solving it until only some details remain obscure. And the way the community of scientists builds this knowledge structure is truly awe-inspiring.

So the answer to this post’s title is yes, science does destroy the mysteries but it increases the awe.

More to come…

Philosophy and science

An interesting example of the different ways that scientists and ‘pure’ philosophers view things arose in an exchange I had in the comments of a previous post.

Commenter Kenneth brought up an interesting argument for the existence of the afterlife that I had not heard before, an argument that he said had originally been proposed by the philosopher Spinoza (1632-1677). Basically the argument boils down to the assumption that each one of us is simply a collection of atoms arranged in a particular way. When a person (A) dies, those atoms are dispersed and join the universe of atoms that percolate through space and time. But there is always the possibility that, purely by chance as a result of random motion, a set of atoms will arrange themselves in exactly the same arrangement that made up A when A was still alive. A will thus have been ‘reborn.’ Kenneth argues that the existence of life after death has thereby been established, at least in principle.

The nature of the argument can perhaps be understood better with a simpler example: thoroughly mixing ink and water in a glass and then leaving it to sit undisturbed. We would think that this mixing is an irreversible process and that separation into water and ink again would not be possible except as a result of extraordinary efforts by external agents. But in fact, if you simply wait long enough, there is a very remote possibility that the random motion of the individual ink and water molecules will result in a momentary spontaneous separation of the mixture into two regions, one of pure water and the other of pure ink molecules (whatever ink molecules are).
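To get a feel for just how remote that possibility is, here is a back-of-the-envelope sketch of my own (not part of Kenneth’s or Spinoza’s argument). If each of N ink molecules is treated as independently equally likely to be found in either half of the glass, the chance of all of them sitting in one designated half at a given instant is (1/2)^N; the molecule counts used below are assumed round numbers.

```python
# A back-of-the-envelope sketch (illustrative assumptions only): treating each
# ink molecule as independently equally likely to be in either half of the
# glass, the chance of finding all N of them in one designated half at a given
# instant is (1/2)**N.  Even a barely visible drop of ink contains something
# like 10**18 molecules (an assumed round number), which makes the probability
# absurdly small - "possible in principle" and nothing more.

from math import log10

def log10_prob_all_in_one_half(n_molecules: float) -> float:
    """log10 of (1/2)**n, i.e. -n * log10(2)."""
    return -n_molecules * log10(2)

if __name__ == "__main__":
    for n in (10, 100, 1e18):
        print(f"N = {n:g}: probability ~ 10^{log10_prob_all_in_one_half(n):.3g}")
    # N = 10 gives about 10^-3, N = 100 about 10^-30, and N = 10^18 gives
    # roughly 10^(-3e17): effectively zero on any physical timescale.
```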

Since all that this argument requires is the ability to wait a very long time for these unlikely events to occur, Kenneth has satisfied himself, from a philosophical point of view, that Spinoza’s argument is valid, and that once we concede the possibility that someone’s atoms can be reconstituted in their original form, the existence of life after death has been established, at least in principle.

But science does not limit itself to these ‘in principle’ arguments. Such arguments are just the first steps. Science is always looking at the detailed consequences of such ideas in order to translate them into research programs. And this is where Spinoza’s argument for the possibility of an afterlife breaks down.

For one thing, the human body is not just an arrangement of atoms, like the molecules in a mixture of ink and water, or the oxygen and nitrogen molecules in a container of air. The atoms in the human body are bound together in complex organic molecules, which are in turn held together by other forces to form cells and tissues and so on. It is not enough just to bring the atoms together; you also have to drive the chemical reactions that bind them into these molecules, and this requires energy from the outside, applied in a very directed way.

It is like frying an egg in a pan. Just breaking an egg into a skillet and leaving it there will not result in a fried egg, however long you wait, unless there is a source of energy to drive the reaction forward. A fried egg is not just a rearrangement of the atoms in a raw egg. It is one in which new compounds have been created and the creation of these compounds is a non-random process.

In addition, the probability of all the atoms that make up your body randomly arriving at the same locations that they occupied when you were alive is microscopically small. This is not a source of concern to Kenneth because all he needs is that this probability not be zero in order to satisfy his ‘in principle’ condition. But there is an inverse relationship between the probability of an event and the likely time that you would have to wait for the event to occur. For example, if you repeatedly throw a die, you would have to wait longer to get a six than to get just any even number because the probability of the former is less than that of the latter.
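The die example can be made concrete with a small simulation (a sketch of my own, using the standard result rather than anything from the post): for independent trials with success probability p, the expected number of trials until the first success is 1/p, so ‘any even number’ (p = 1/2) takes about 2 throws on average while ‘a six’ (p = 1/6) takes about 6.

```python
# A tiny illustration of the inverse relationship between probability and
# expected waiting time: for independent trials with success probability p,
# the expected number of trials until the first success is 1/p (the geometric
# distribution).

import random

def average_wait(success, trials=100_000):
    """Average number of die throws until success(face) is True."""
    total = 0
    for _ in range(trials):
        throws = 0
        while True:
            throws += 1
            if success(random.randint(1, 6)):
                break
        total += throws
    return total / trials

if __name__ == "__main__":
    print("average throws to get any even number:", average_wait(lambda f: f % 2 == 0))
    print("average throws to get a six:          ", average_wait(lambda f: f == 6))
    # Expect roughly 2.0 and 6.0: the smaller the probability, the longer the wait.
```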

In the case of the body’s atoms coming together again, the probability is so small that the expected time for it to occur would be incredibly long. Again, this would not matter for a philosopher’s ‘in principle’ argument. But such arguments tacitly assume that nothing else is changing in the environment and that we have an infinite amount of time to wait for things to occur.

But in reality events never happen in isolation, and science is always concerned about the interconnectedness of things. And this is where the ‘in principle’ argument breaks down. We know that the lifetime of the Sun is about ten billion years and that it will then become a huge ‘red giant’ that will grow enormously and perhaps even envelop the Earth. Later still, all the energy-producing nuclear reactions in the stars will end, resulting in the heat death of the universe. So there will not be any surplus energy around, even in principle, to drive the chemical reactions needed to reconstitute the body’s molecules, even if the atoms did manage to arrive randomly in exactly the right positions.

I think this is where scientific research and philosophical speculation diverge. A scientist is not interested in just ‘in principle’ arguments for the afterlife of the kind that Kenneth says Spinoza makes. To become interesting to scientists, Kenneth will have to provide at least numerical estimates of the probability of the body’s atoms reconstituting themselves, and then use that probability to estimate the expected time for such an event to occur.

If that time is more than the expected heat death of the universe, then the question becomes moot. If it is less, then the scientist will ask if there is enough free energy at that time to drive the reaction forward and what is the probability that this energy will spontaneously be directed at the atoms in just the right amounts and directions to recreate the human body.
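For illustration only, here is a toy version of that comparison. Every number in it is a hypothetical placeholder of my own, not an estimate anyone has actually made: assume some probability p per random ‘reshuffle’ of the atoms, assume some rate of reshuffles per year, and compare the resulting expected waiting time, roughly 1/(rate × p), with a rough order-of-magnitude figure for the heat death of the universe.

```python
# A toy comparison with entirely hypothetical numbers, showing only the shape
# of the estimate a scientist would ask for.

from math import log10

def log10_expected_wait_years(log10_p, reshuffles_per_year=1e30):
    """log10 of the expected waiting time in years, ~1/(rate * p), for independent trials."""
    return -log10_p - log10(reshuffles_per_year)

if __name__ == "__main__":
    # Hypothetical inputs: a body of roughly 10**27 atoms, each needing to land
    # in one of (say) 10**6 distinguishable positions, gives log10(p) ~ -6e27.
    log10_p = -6e27
    log10_heat_death_years = 100     # ~10**100 years, a commonly quoted rough figure
    wait = log10_expected_wait_years(log10_p)
    print(f"expected wait ~ 10^{wait:.3g} years")
    print("longer than any plausible heat-death timescale?", wait > log10_heat_death_years)
```

However generous one is with the placeholder inputs, the expected waiting time dwarfs any cosmological timescale, which is the point the post goes on to make.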

All these considerations, when brought together, suggest that Spinoza’s argument fails and that life after death as he proposed it is never going to happen.

That is the kind of difference between the approaches of pure philosophy and science.

Alternative realities

One of the things that I have noticed in recent years is the proliferation of what I call ‘alternative realities’.

In classical learning theory, it is believed that when someone confronts evidence that runs counter to that person’s prior knowledge, a state of cognitive dissonance arises in the mind of the learner, one that goes away only when the learner’s knowledge structures have been adjusted to accommodate the new information.

This model of learning underlies what are known as ‘inquiry’ methods of teaching science, in which the teacher, having an understanding of what her students are likely to erroneously believe about some phenomenon (such as electricity), deliberately sets up experiments for them to do whose results will directly confront their misconceptions, thus forcing the students into the difficult process of re-evaluating what they already believe. By repeatedly going through this process at different levels of sophistication and context, the hoped-for transformation is that the students develop an experiential understanding of the ‘true’ theory that the teacher is trying to teach.
[Read more…]

The science-religion debate

The ABC News ‘Face Off’, the ‘great’ debate between religion and atheism, was broadcast on Nightline last week. You can see the video of the program here. (You may be able to find the video of the full debate here.)

The side arguing for God’s existence was evangelist Ray “Banana Man” Comfort and his trusty sidekick Boy Wonder Kirk Cameron. The side arguing against was Brian “Sapient” (not his real last name) and Kelly, the creators of the Blasphemy Challenge and the people behind the Rational Response Squad.

The debate was initiated by Comfort who had contacted ABC News and requested it, saying that he could prove god’s existence. He set the bar for himself quite high. He promised ABC News that he would “prove God’s existence, absolutely, scientifically, without mentioning the Bible or faith” and added that “I am amazed at how many people think that God’s existence is a matter of faith. It’s not, and I will prove it at the debate – once and for all. This is not a joke. I will present undeniable scientific proof that God exists.”

The video of the program shows that the ‘debate’ was at a disappointingly low level, although to be fair the debate lasted for about 90 minutes and only edited portions were shown. From the outset, Comfort broke his promise, invoking both the Bible and faith. But even when it came to the ‘science’ part of his argument, he resorted once again to the tired Paley’s watch/Mount Rushmore arguments.

The shorter version of this old argument is this: “We can immediately tell when something is designed. If something is designed, it must have a designer. Nature looks designed to us and therefore must have been designed. That designer can only be god.”

The operational and philosophical weaknesses of this argument have been exposed by many people, including me, so anyone who advances it cannot really be taken seriously unless they address those challenges. As far as I can see, Comfort did not do this. Although Comfort had previously alleged that the banana was the “atheist’s nightmare” (because it fits so perfectly in the human hand and mouth, the banana and the human hand and mouth had to have been designed that way), he did not bring bananas along as props. Perhaps he had been warned that his video making that claim had become the source of widespread merriment.

Kirk Cameron’s role seemed to be to undermine evolutionary theory, but the clips of him doing so showed an embarrassing ignorance and shallowness. He invoked the old argument about the paucity of transitional forms, but even here he brought it up in a form that would have made even those sympathetic to his point of view wince. He seemed to have the bizarre notion that evolution by natural selection predicts the existence of every possible intermediate state between all existing life forms. He showed artist’s sketches of things he called a “croc-o-duck” (a duck with the head of a crocodile) and a “bull-frog” (an animal that was half bull and half frog) and argued that the fact that we do not see such things means that evolution is wrong. Really. It was painful to watch him make a fool of himself on national TV.

Cameron seems to be suffering from an extreme form of a common misunderstanding about transitional forms. The fact that humans and other existing animals share common ancestors does not imply that there should be forms that are transitional between them as they exist now. What evolutionary theory states is that if you take any existing organism and follow its ancestors back in time, you will see a gradual evolution in the way the organisms look. So when we talk about transitional forms, we first have to fix the two times that set the boundaries. If we take one boundary as the present time and the other boundary as (say) two billion years ago, when the first eukaryotic cells appeared, then there are a large number of transitional forms between those two boundaries. Richard Dawkins’s book The Ancestor’s Tale gives an excellent account of the type and sequence of the transitional forms that have been found. Of course, these ancestral forms have themselves evolved along their many descendant lines, so we would not expect to see them now in the same form they had when they were our ancestors. They can only be found in that form as fossils.

DNA sequencing shows the connections between species as well, and provides further evidence of the way species branched off at various points in time. So when evolutionary biologists speak of ‘transitional forms’, they are referring to finding fossils of those ancestors that preceded various branch points. The recent discovery of Tiktaalik, the 375-million-year-old fossil that has the characteristics of what a common ancestor of fish and land vertebrates such as amphibians and mammals would look like, is one such example. Archaeopteryx is another such transitional form.

The ‘missing link’ argument against evolution, although lacking content, is one that will never die. One reason is the existence of people like Cameron who use it incorrectly. Another is that it is infinitely adaptable. For example, suppose you have a species now and a species that existed (say) two billion years ago and demand proof of the existence of a missing link. Suppose a fossil is found that is one billion years old that fits the bill. Will this satisfy those who demand proof of the missing link? No, because opponents of evolution can now shift their argument and demand proofs of the existence of two ‘missing’ links, one between the fossils of two and one billion years ago, and the other between one billion years ago and the present. In fact, the more transitional fossils that are found, the more ‘missing links’ that can be postulated!

This is what has happened with past discoveries of fossils. The fossil record of evolution has been getting steadily greater but the calls for ‘proof’ of the existence of missing links have not diminished.

POST SCRIPT: Antiwar.com fundraising drive

The website Antiwar.com is having a fundraiser. If you can, please support it. It is an invaluable source of news and commentary that is far broader and deeper than you can find almost anywhere else.

The new atheism-6: The biological origins of religion and morality

(See part 1, part 2, part 3, part 4, and part 5.)

You would think that natural selection would work against religion because those individuals who spent their time in prayer and other rituals, and used precious energy and resources in building temples and offering sacrifices, would be at a survival disadvantage when compared to those who used their time more productively. In the previous post, I outlined the basic framework of natural selection and summarized the arguments of those who explain the survival value of religion by saying that religious ideas are passed on and evolve as a byproduct of the survival advantage that accrues from young children being predisposed to believe their parents and other adult authority figures.

But while that may explain how religions propagate once they come into being, it is harder to understand how religious ideas arose in the first place. If the outbreak of religion were an occasional event occurring here or there at random, then we could just dismiss it as an anomaly, in the way that random genetic mutations cause rare diseases. But religion is not like that. As David P. Barash says in The Chronicle of Higher Education (Volume 53, Issue 33, Page B6, April 20, 2007): “On the one hand, religious belief of one sort or another seems ubiquitous, suggesting that it might well have emerged, somehow, from universal human nature, the common evolutionary background shared by all humans. On the other hand, it often appears that religious practice is fitness-reducing rather than enhancing — and, if so, that genetically mediated tendencies toward religion should have been selected against.”
[Read more…]

The new atheism-5: The scientific approach to philosophical questions

(See part 1, part 2, part 3, and part 4.)

The biological sciences’ approach to the questions of the origins of religious belief and morality is not to ask what proximate causes led to belief in god and the afterlife (for which the answers may be to satisfy curiosity and provide comfort) but to see what evolutionary advantage accrues to those individuals who hold such beliefs, because natural selection works on individual organisms, not groups.

To better understand how evolutionary biology addresses these questions, it is useful to review the basic tenets of evolution by natural selection. Following Philip Kitcher’s The Advancement of Science (p. 19), Darwin’s four fundamental evidentiary claims can be stated as follows:
[Read more…]