Whoa. This was a data-rich talk, and my ability to transcribe it was overwhelmed by all the stuff Hauser was tossing out. Unfortunately, I think the talk also suffered from excess and lacked a good overview of the material. But it was thought-provoking anyway.
One of the themes was how people resolve moral dilemmas. He began with a real-world example: the story of an overweight woman in South Africa who insisted on joining a tour exploring a cave and got stuck in the exit tunnel, trapping 22 people behind her. Do you sacrifice one to save many? One of the trapped people was a diabetic who needed to get out; should they have blown up the woman so the others could escape? This was presented as a kind of philosophical trolley problem, and the audience was asked what was best to do. I don’t think it works, though, because unlike those philosophical dilemmas, in the real world we pursue different strategies, and it’s rarely a black-and-white situation where one has to choose between precisely two possibilities. In this case, the dilemma was resolved by greasing her up with paraffin and pulling her out.
Hauser gave an overview of the philosophical explanations for making moral decisions.
- Hume: morality intuitive, unconscious, emotional
- Kant: rational, conscious, justified principles
- Theist: divine inspiration, explicit within scripture
- Rawls: intuitive, unconscious, grammar of action: not emotional, built on principles
He’s going to side with Rawls. The key difference between a Rawlsian morality and the others is that a moral decision is made unconsciously, and THEN emotional and rational justifications are made for it. This is testable if you have a way to remove the emotional component of a decision; a Rawlsian moral agent will still make the same moral judgments. Studies of brain-damaged patients with loss of emotional affect support the idea so far.
He analogized this to linguistics, in which we make abstract, content-free computations to determine, for instance, whether a particular sentence is grammatical. This computation is obligatory and impenetrable; we can’t explain the process of making the decision as we’re doing it, although we can construct rules after the fact.
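To make the analogy concrete, here’s a minimal sketch of a judgment procedure that, like a grammaticality check, returns a bare verdict without exposing its reasoning. The lexicon, the single sentence pattern, and the function name are entirely made up for this sketch; none of it comes from the talk.

```python
# Toy illustration only: a trivially small "grammar" whose verdict,
# like a native speaker's grammaticality judgment, comes back as a
# bare yes/no with no trace of the computation behind it.
LEXICON = {
    "the": "Det", "a": "Det",
    "dog": "N", "cat": "N",
    "chased": "V", "saw": "V",
}

def grammatical(sentence: str) -> bool:
    """Accept only word strings matching the pattern Det N V Det N."""
    tags = [LEXICON.get(word) for word in sentence.lower().split()]
    return tags == ["Det", "N", "V", "Det", "N"]

print(grammatical("the dog chased a cat"))  # True
print(grammatical("dog the a chased cat"))  # False: same words, wrong order
```

The point of the analogy is just that the rule (the Det N V Det N pattern here) can be stated after the fact, while the judgment itself feels immediate and unexplained to the person making it.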
For instance, he summarized three principles that seem to be general rules in moral judgments (a toy sketch of how they might be encoded follows the list).
- Harm intended as the means to a goal is worse than harm seen as a side-effect.
- Harm caused by action is morally worse than harm caused by omission.
- Harm caused by contact is morally worse than equivalent harm caused by non-contact.
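Here’s a minimal sketch of how those three rules might be encoded as a toy scoring function. The scenario attributes, the weights, and the names are my own invention for illustration, under the assumption that each aggravating feature simply adds a penalty; nothing here is Hauser’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """Hypothetical features of a harmful act."""
    harm_as_means: bool  # harm intended as the means to a goal?
    by_action: bool      # harm caused by action rather than omission?
    by_contact: bool     # harm caused by physical contact?

def badness(s: Scenario) -> int:
    """Toy moral-badness score: each of the three rules contributes a
    penalty when its aggravating feature is present. The weights are
    arbitrary placeholders, not empirical values."""
    score = 0
    if s.harm_as_means:
        score += 3  # means worse than side-effect
    if s.by_action:
        score += 2  # action worse than omission
    if s.by_contact:
        score += 1  # contact worse than non-contact
    return score

# Classic contrast: pushing a man off a footbridge to stop a trolley
# (means, action, contact) vs. diverting it with a switch
# (side-effect, action, no contact).
footbridge = Scenario(harm_as_means=True, by_action=True, by_contact=True)
switch = Scenario(harm_as_means=False, by_action=True, by_contact=False)
assert badness(footbridge) > badness(switch)
```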
We don’t judge morality purely on the basis of outcomes, but also on intent. He suggested that judging only on the basis of whether an outcome is bad or good is a primitive and simplistic strategy, and that as people mature they add nuance by considering intentionality: someone who poisons a person accidentally is less morally culpable than someone who does it intentionally.
One example he gave that I found a bit dubious is the use of Transcranial Magnetic Stimulation to shut down regions of the brain, in particular the right temporal/parietal junction (which seems to be a locus of intent judgment). In subjects who have that region zapped (a temporary effect!), all that matters is outcome. These studies bother me a bit; I don’t know if I really trust the methodology of TMS, since it may be affecting much more of the brain in complex and undefined ways.
Does knowledge of the law affect moral judgments? Holland no longer makes a legal distinction between active and passive euthanasia, and many Dutch people are able to articulate a belief that passive euthanasia is less humane than active euthanasia. Do the Dutch no longer perceive the action/omission distinction in Hauser’s three rules? In a dilemma test, they still make the same distinctions on active and passive stories as others do: actively killing someone to save others is judged morally worse than simply allowing someone to die by inaction with the same effect. This again suggests that the underlying mechanisms of making moral decisions are unchanged.
In these same dilemma tests, they’ve correlated outcomes with demographic data. The effects of religion, sex, etc. are negligible on how people make moral decisions.
He makes an important distinction: These are effects on judgment, not behavior. How does behavior connect with judgment?
Hauser described Mischel’s longitudinal studies of kids given a simple test: they were given a cookie and told they’d get more if they could hold off on eating it for some unspecified length of time. Kids varied; some had to have that cookie right away, others held off for longer. The interesting thing about this experiment is that the investigator looked at these same kids as adults 40 years later and found that restraint in a 3-year-old correlated with, for instance, greater marital stability later in life. The idea is that these kinds of personal/moral capacities are fixed fairly early in people and don’t seem to be affected much by experience or education.
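For a concrete picture of what that correlation claim amounts to, here’s a minimal sketch with invented numbers: childhood delay times paired with a made-up adult outcome score, and a Pearson correlation computed between the two. None of these values are Mischel’s data; they’re placeholders for illustration.

```python
from math import sqrt

# Hypothetical data: seconds each child waited before eating the
# cookie, and an invented adult "stability" score for the same people
# decades later. Numbers are made up for illustration only.
delay_seconds = [10, 30, 45, 60, 120, 200, 300, 600]
adult_stability = [2.1, 2.5, 3.0, 2.8, 3.6, 4.0, 4.2, 4.8]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(delay_seconds, adult_stability), 2))  # strongly positive
```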
There were some interesting ideas here, and I would have liked to see more in-depth discussion of individual points. The end of the talk, in particular, was a flurry of data and completely different experiments that weren’t tied in well with the thesis of the talk, and there weren’t opportunities for questions in these evening talks, so it was a bit difficult to sort everything out.
summumbonumlitteraexli says
There is no such thing as “morality”, “right” or “wrong”.
All determinations of “right” and “wrong” can be nothing more than common social consensus, generated by hormonal and other random biological evolutions that were most successful at assuring the continued reproduction and evolution of the original protocellular organism. So, if any definition were given at all, “right” becomes whatever best assures the organism’s continued permanent survival and forward evolution, and “wrong” becomes anything that in any way works against those needs.
Without God…