On writing-2: Why do we cite other people’s work?

In the previous post on this topic, I discussed the plagiarism case of Ben Domenech, who had lifted entire chunks of other people’s writings and had passed them off as his own.

How could he have done such a thing? After all, high school and college students get the standard lecture on plagiarism and why it is bad. And even though Domenech was home-schooled, it seems unlikely that he thought this was acceptable practice. When he was confronted with his plagiarism, his defense was not one of surprise that it was considered wrong, but merely that he had been ‘young’ when he did it, that he had got permission from the authors to use their words, or that the offending words had been inserted by his editors.

The cautionary lectures that students receive about plagiarism are usually cast in a moralistic way, that plagiarism is a form of stealing, that taking someone else’s words or ideas without proper attribution is as morally reprehensible as taking their money.

What is often overlooked in this kind of approach is that there are many other reasons why writers and academics cite other people’s works when appropriate. By focusing too much on the stealing aspect, we fail to give students an important insight into how scholarship and research work.

Russ Hunt at St. Thomas University argues that writers cite others for a whole complex of reasons that have little to do with avoiding charges of plagiarism:

[P]ublished scholarly literature is full of examples of writers using the texts, words and ideas of others to serve their own immediate purposes. Here’s an example of the way two researchers opened their discussion of the context of their work in 1984:

To say that listeners attempt to construct points is not, however, to make clear just what sort of thing a ‘point’ actually is. Despite recent interest in the pragmatics of oral stories (Polanyi 1979, 1982; Robinson 1981), conversations (Schank et al. 1982), and narrative discourse generally (Prince 1983), definitions of point are hard to come by. Those that do exist are usually couched in negative terms: apparently it is easier to indicate what a point is not than to be clear about what it is. Perhaps the most memorable (negative) definition of point was that of Labov (1972: 366), who observed that a narrative without one is met with the “withering” rejoinder, “So what?” (Vipond & Hunt, 1984)

It is clear here that the motives of the writers do not include prevention of charges of plagiarism; moreover, it’s equally clear that they are not. . .attempting to “cite every piece of information that is not a) the result of your own research, or b) common knowledge.” What they are doing is more complex. The bouquet of citations offered in this paragraph is informing the reader that the writers know, and are comfortable with, the literature their article is addressing; they are moving to place their argument in an already existing written conversation about the pragmatics of stories; they are advertising to the readers of their article, likely to be interested in psychology or literature, that there is an area of inquiry — the sociology of discourse — that is relevant to studies in the psychology of literature; and they are establishing a tone of comfortable authority in that conversation by the acknowledgement of Labov’s contribution and by using his language –“withering” is picked out of Labov’s article because it is often cited as conveying the power of pointlessness to humiliate (I believe I speak with some authority for the authors’ motives, since I was one of them).

Scholars — writers generally — use citations for many things: they establish their own bona fides and currency, they advertise their alliances, they bring work to the attention of their reader, they assert ties of collegiality, they exemplify contending positions or define nuances of difference among competing theories or ideas. They do not use them to defend themselves against potential allegations of plagiarism.

The clearest difference between the way undergraduate students, writing essays, cite and quote and the way scholars do it in public is this: typically, the scholars are achieving something positive; the students are avoiding something negative. (my italics)

I think that Hunt has hit exactly the right note.

When you cite the works of others, you are strengthening your own argument because you are making them (and their allies) into your allies, and people who challenge what you say have to take on this entire army. When you cite reputable sources or credible authorities for facts or ideas, you become more credible because you are no longer alone and thus not easily dismissed, even if you personally are not famous or a recognized authority.

To be continued. . .

POST SCRIPT: It’s now Daylight Saving Time. Do you know where your spiritual plane is?

It seems like idiotic statements attributing natural events to supernatural causes are not restricted to Christian radical clerics like Pat Robertson. Some Sri Lankan Buddhist clergy are challenging him for the title of Religious Doofus.

Since Sri Lanka sits very close to the equator, the length of the day is the same all year round, so it does not require the ‘spring-forward-fall-back’ biannual adjustment used in the US. Sri Lankan time used to be 5.5 hours ahead of Universal Time (UT), but in 1996 the government made a one-time shift to 6.5 hours in order to have sunset arrive later and save energy. The influential Buddhist clergy were not happy with the change, however, and as a compromise the clocks were later adjusted to just 6.0 hours ahead of UT. Now the government is thinking of going back to the original 5.5 hours.

Some of the country’s Buddhist clergy are rejoicing at the prospect of a change because they say Sri Lanka’s “old” time fitted better with their rituals.

They believe a decade living in the “wrong” time has upset the country’s natural order with terrible effect.

The Venerable Gnanawimala says the change moved the country to a spiritual plane 500 miles east of where it should be.

“After this change I feel that many troubles have been caused to Sri Lanka. Tsunamis and other natural disasters have been taking place,” he says.

This is what happens when you mix religion and the state. You now have to worry about what your actions are doing to the longitudinal coordinates of your nation’s spiritual plane.

No more daft women!

Evan Hunter, who was the screenwriter on Alfred Hitchcock’s 1963 film The Birds, recalled an incident that occurred when he was discussing the screenplay with the director.

I don’t know if you recall the movie. There’s a scene where after this massive bird attack on the house Mitch, the male character, is asleep in a chair and Melanie hears something. She takes a flashlight and she goes up to investigate, and this leads to the big scene in the attic where all the birds attack her. I was telling [Hitchcock] about this scene and he was listening very intently, and then he said, “Let me see if I understand this correctly. There has been a massive attack on the house and they have boarded it up and Mitch is asleep and she hears a sound and she goes to investigate?” I said, “Well, yes,” and he said, “Is she daft? Why doesn’t she wake him up?”

I remembered this story when I was watching the film The Interpreter with Nicole Kidman and Sean Penn. The Kidman character accidentally overhears something at the UN that puts her life at risk. After she complains to government agent Penn that no one seems to be bothered about protecting her from harm, Penn puts her on round-the-clock surveillance. So then what does Kidman do? She sneaks around, giving the slip to the very people assigned to protect her, and refuses to tell Penn where she went, to whom she spoke, and about what, putting herself and other people at risk and even causing deaths. Hitchcock would have said, “Is she daft?”

This is one of my pet peeves about films, where the female character insists on doing something incredibly stupid that puts her and other people at peril. Surely in this day and age we have gone beyond the stale plot device of otherwise smart women behaving stupidly in order to create drama? Surely writers have more imagination than that? Do directors really think that viewers won’t notice how absurd that is?

According to Hunter, Hitchcock was always exploring the motivations of characters, trying to make their actions plausible. Hunter says:

[Hitchcock] would ask surprising questions. I would be in the middle of telling the story so far and he would say, “Has she called her father yet?” I’d say, “What?” “The girl, has she called her father?” And I’d say, “No.” “Well, she’s been away from San Francisco overnight. Does he know where she is? Has she called to tell him she’s staying in this town?” I said, “No.” And he said, “Don’t you think she should call him?” I said, “Yes.” “You know it’s not a difficult thing to have a person pick up the phone.” Questions like that.

(Incidentally, the above link features three screenwriters reminiscing about working with Hitchcock: Arthur Laurents, who wrote Rope (1948); Joseph Stefano, who wrote Psycho (1960); and Evan Hunter. It is a fascinating glimpse behind the scenes of how a great director envisages and sets about creating films. The last quote actually reads in the original: “Yes, you know it’s not a difficult thing to have a person pick up the phone.” I changed it because my version makes more sense, and the original is a verbatim transcript of a panel discussion, in which such punctuation errors can easily occur.)

More generally, I hate it when characters in films and books behave in ways that are unbelievable. The problem is not with an implausible premise, which is often necessary to create a central core for the story. I can even accept the violation of a few laws of physics. For example, I can accept the premise of Superman that a baby with super powers (but susceptible to kryptonite) arrives on Earth from another planet and is adopted by a family and needs to keep his identity secret. I can accept of Batman that a millionaire like Bruce Wayne adopts a secret identity in order to fight crime.

What I cannot stand is when they and the other people act implausibly, when the stories built on this premise have logical holes that you can drive a Batmobile through. The Batmobile, for example, is a flashy vehicle, to say the least, easily picked out in traffic. And yet nobody in Gotham thinks of following it back to the Batcave to see who this mysterious hero is. Is the entire population of that city daft?

And how exactly is the Bat-Signal that the Police Commissioner lights up the sky with supposed to work? You don’t need a physics degree to realize that shining a light, however bright, into the sky is not going to create a sharp image there. And what if it’s daytime? And what if there are no clouds? (It’s been a long time since I read these comics. Maybe later editions fixed these problems. But even as a child these things annoyed me.)

And don’t get me started on Spiderman going in and out of his apartment window in a building in the middle of a big city in broad daylight without anyone noticing.

As a fan of films, it really bugs me when filmmakers don’t take the trouble to write plots that make sense and to create characters who behave the way you would expect normal people to behave. How hard can it be to ensure this, especially when you have the budget to hire writers to create believable characters and a plausible storyline?

If any directors are reading this, I am willing to offer my services to identify and fix plot holes.

So please, no more daft women! No more ditzy damsels in distress! No more Perils of Pauline!

POST SCRIPT: CSA: Confederate States of America

I saw this film last week (see the post script to an earlier posting), just before it ended its very short run in Cleveland. It looks at what history would have been like if the South had won the Civil War. Imagine, if you will, an America very much like what we have now except that owning black slaves is as commonplace as owning a dishwasher.

What was troubling is that although this is an imagined alternate history presented in a faux documentary format, much of it is plausible given what we have now. What was most disturbing for me was seeing in the film racist images and acts that I assumed were the screenwriter’s over-the-top imaginings of what might have happened in this alternate history, and then finding out that they actually happened in the real history.

Although the film is a clever satire in the style of This is Spinal Tap, I could not really laugh because the topic itself is so appalling. It is easy to laugh at the preening and pretensions of a rock band. It is hard to laugh at people in shackles.

But the film was well worth seeing, disturbing though it was.

On writing-1: Plagiarism at the Washington Post

If you blinked a couple of weeks ago, you might have missed the meteor that was the rise and fall of the career of Ben Domenech as a blogger for WashingtonPost.com.

This online version of the newspaper is apparently managed independently of the print edition and has its own executive editor, Jim Brady. For reasons that are not wholly clear, Brady decided that he needed to hire a “conservative” blogger for the website.

The problem with this rationale for the hiring was that no “liberal” counterpart blogger existed at the paper. They did have a popular blogger in Dan Froomkin, someone with a journalistic background, who wrote about politics for the Post and who had on occasion been critical of the Bush White House. As I have written earlier, Glenn Greenwald has pointed out that anything but unswerving loyalty to Bush has become the basis for identifying someone as liberal, and maybe Brady had internalized this critique, prompting him to hire someone who could be counted upon to support Bush in all his actions.

For reasons that are even more obscure, rather than choose someone with serious journalistic credentials for this new column, Brady selected the untested 24-year-old Ben Domenech. It is true that Domenech was something of a boy wonder, at least on paper. He had been home-schooled by his affluent and well-connected Republican family. He then went to William and Mary and wrote for their student newspaper The Flat Hat. He dropped out of college before graduating and co-founded a conservative website called Redstate, where he wrote under the pseudonym Augustine.

His father was a Bush political appointee and his new online column for the Washington Post (called Red America) said in its inaugural posting on March 21 that young Ben “was sworn in as the youngest political appointee of President George W. Bush. Following a year as a speechwriter for HHS Secretary Tommy Thompson and two as the chief speechwriter for Texas Senator John Cornyn, Ben is now a book editor for Regnery Publishing, where he has edited multiple bestsellers and books by Michelle Malkin, Ramesh Ponnuru, and Hugh Hewitt.”

Not bad for a 24-year-old without a college degree. And his bio lists even more accomplishments. But getting his own column at WashingtonPost.com was the peak. Soon after that, things started going downhill very rapidly.

His decline began when bloggers looked into his writings and found that, as Augustine, he had written a column on the day of Coretta Scott King’s funeral calling her a Communist. This annoyed a lot of people, who then started looking more closely at his other writings. It was then that someone discovered that he had plagiarized. And the plagiarism was not subtle. Take, for example, this excerpt from his review of the film Bringing Out the Dead.

Instead of allowing for the incredible nuances that Cage always brings to his performances, the character of Frank sews it all up for him.

But there are those moments that allow Cage to do what he does best. When he’s trying to revive Mary’s father, the man’s family fanned out around him in the living room in frozen semi-circle, he blurts out, “Do you have any music?”

Now compare it with an earlier review posted on Salon.com:

Instead of allowing for the incredible nuance that Cage always brings to his performances, the character of Frank sews it all up for him. . . But there are those moments that allow Cage to do what he does best. When he’s trying to revive Mary’s father, the man’s family fanned out around him in the living room in frozen semi-circle, he blurts out, “Do you have any music?”

Or this sampling from P. J. O’Rourke’s book Modern Manners, which also found its way into Domenech’s columns:

O’Rourke, p.176: Office Christmas parties • Wine-tasting parties • Book-publishing parties • Parties with themes, such as “Las Vegas Nite” or “Waikiki Whoopee” • Parties at which anyone is wearing a blue velvet tuxedo jacket

BenDom: Christmas parties. Wine tasting parties. Book publishing parties. Parties with themes, such as “Las Vegas Nite” or “Waikiki Whoopee.” Parties at which anyone is wearing a blue velvet tuxedo jacket.

O’Rourke: It’s not a real party if it doesn’t end in an orgy or a food fight. • All your friends should still be there when you come to in the morning.

BenDom: It’s not a real party if it doesn’t end in an orgy or a food fight. All your friends should still be there when you come to in the morning.

These are not the kinds of accidental plagiarisms that anyone can fall prey to, where a turn of phrase that appealed to you when you read it a long time ago comes out of you when you are writing and you do not remember that you got it from someone else. These examples are undoubtedly deliberate cut-and-paste jobs.

Once the charges of plagiarism were seen to have some credibility, many people went to Google and the floodgates were opened, Kaloogian-style, with bloggers all over poring over his writings. Within the space of three days a torrent of further examples of plagiarism poured out. These new allegations dated back to his writings at his college newspaper and then later for National Review Online, and Domenech was found to have lifted material from Salon and even from National Review Online, the latter being the same publication for which he was writing, which adds the sin of ingratitude to the dishonesty.

On March 24, just three days after starting his Washington Post column, Ben Domenech resigned under pressure. Soon after, he also resigned as book editor at Regnery.

What can we learn from this? One lesson, seemingly, is that people can get away with plagiarism for a short while, especially if they are writing in obscurity for little-known publications. While he was writing for his college newspaper and even for his own website, no one cared to look closely into his work. Even his future employers at WashingtonPost.com did not seem to have checked him out carefully. Apparently his well-connected family and sterling Bush loyalty were enough to satisfy them that he was a good addition to their masthead.

But as soon as a writer becomes high profile, the chances are very high these days that any plagiarism will come to light.

At one level, this is a familiar cautionary tale to everyone to cite other people’s work when using it. For us in the academic world, where plagiarism is a big no-no, the reasons for citing are not just that there are high penalties if you get caught not doing it. The more important reasons arise from the very nature of scholarly academic activity, which I shall look at in a future posting.

To be continued. . .

Changing notions of death-4: Implications for animals

(See part 1, part 2 and part 3 of this series.)

If asked, any one of us would say that we value life, that we consider it precious and not to be taken lightly. While the specific phrase “valuing the culture of life” seems to have been co-opted by those who are specifically opposed to abortion, the general idea that it encapsulates, that life should not be taken casually or at all, is one that all of us would subscribe to.

But of course there are contradictions. People who say they value life often see no problem with supporting the death penalty. Another hypocrisy is with those who support killing in wars, even of civilians, and even in large numbers. We try to rationalize this by saying that civilians are killed inadvertently, but that is a false argument. Civilians are inevitably killed in wars, often deliberately, and we often do nothing to condemn it when it is done by ‘our side.’ To support wars is to support killing and absolve killers, however much we try to sugar coat this unpleasant fact. As Voltaire said, “It is forbidden to kill; therefore all murderers are punished unless they kill in large numbers and to the sound of trumpets.”

In his lecture, Peter Singer pointed out that killing and eating animals, while opposing the withdrawal of life support of those in a persistent vegetative state, poses an ethical problem for people who say that they value a “culture of life.”

He gave as an example the fact that while the 3,000 or so victims of September 11, 2001 were deeply mourned, no one mourned the millions of chickens that were killed on that same day, and on every day before and since. Why do we not mourn them the same way?

If we define death as heart dead or brain dead, then the chickens are as alive as any of us. Even when we lower the bar to thinking of someone in a persistent vegetative state as being ‘effectively dead’, that still does not get us off the hook since, as Singer argued, chickens and other animals have higher levels of consciousness than people in a persistent vegetative state. Free range chickens seem to show signs of happiness, curiosity, anxiety, fear, and the sense of self-awareness that, if present in humans, would definitely bar us from killing them. If that is the case, then if we oppose the withdrawal of life support systems even from those in a persistent vegetative state, then how can we justify killing chickens, or any other animal for that matter?

He posed the question of why the killing of human beings is deplored but that of chickens is not. He said that appealing to species chauvinism (“We are human, and so are justified in valuing human life over non-human animal life.”) was not really an ethically justifiable defense, though many people used it.

After all, if we allowed that particular chauvinist line of defense, where do we draw the line? What if I say that because I am male, I am justified in thinking that the lives of women are worth less than those of men? We would reject that line of argument as rank sexism. What if I say that because I am brown-skinned, I am justified in treating non-brown people as inferior? We would reject that argument as rank racism. So why should we think that the argument “I am human, so I am justified in valuing human life over animal life” is acceptable?

Singer’s point was that as soon as we shift our definition of death away from the complete lack of heart or brain function and toward judgments about the nature or level of the consciousness involved, we have gone into ethically tricky territory for those non-vegetarians who argue that, because of a belief in a “culture of life,” human beings must be kept alive at all costs. You cannot argue that people in a persistent vegetative state should be kept on life support while also arguing that perfectly healthy animals can be killed.

People of certain religious traditions (like Christians, Jews, and Muslims) perhaps can find justification for this discrepant behavior by appealing to their religious beliefs that include species chauvinism as part of their doctrines. In the view of these religions, humans are specially favored by god and thus fundamentally different from, and superior to, other animals so valuing human life and disregarding non-human animal life is allowable. It is noteworthy that Buddhism and Hinduism do not assert such a species chauvinistic attitude. They seem to treat human and non-human animals on an equal footing and vegetarianism is advocated by both religions.

But if we leave out religious sanction and argue on strictly ethical grounds, it becomes hard to justify opposing the withdrawal of life support systems from people who are in a persistent vegetative state, on the grounds that such people are still ‘alive’, and to square that with the killing of healthy animals for food, as we routinely do.

Singer made a cogent argument that none of us can really ethically justify the killing of animals for food, when it is not necessary for survival. Singer himself is a vegetarian.

I am not sure whether Singer was able to resolve some of the ethical issues of what constitutes death by the end of his talk, since I had to leave early. But his ideas were very thought-provoking.

POST SCRIPT: Juggling

Good jugglers are amazing. For a fine example of this art, go here and then click on “Watch Chris Bliss.”

Internet sleuthing

I am a firm believer in cooperative learning. The combined efforts of many people can produce results that would be impossible for a single person. And the internet is a wonderful mechanism for enabling collective action.

What follows is a modern-day detective story that illustrates what is possible when the collective strength of people working together, sharing information, and building on each other’s ideas is combined with the speed of communication and the resources available on the internet.

Howard Kaloogian is a Republican candidate in San Diego’s special congressional election to replace disgraced Republican Duke Cunningham, who pleaded guilty to bribery, resigned his seat, and is now in jail. In his campaign, Kaloogian tried to propagate the White House meme that things are just peachy in Iraq and that the media is deliberately sabotaging the war effort, painting a dark picture by reporting only the daily bombings, beheadings, kidnappings, extortions, etc.

Changing notions of death-3: Doctors versus guardians

In part 1 and part 2 of this series of posts, we saw how the idea of when someone has died had shifted to the point where people in a persistent vegetative state could have their life support systems removed because they are considered to be ‘effectively’ dead. But even if the family is agreed on what action should be taken with a family member in a persistent vegetative state, there are already moves under way to shift the bar even lower. The question now being raised is what should be done if the doctors determine that further treatment is futile but the family does not want to remove life support.

In a recent case in England, doctors had recommended that an 18-month old infant (identified as just MB) who suffers from the severest form of spinal muscular atrophy – an incurable and progressively worsening condition leading to complete paralysis – be allowed to die. The parents objected and the matter went to trial.

On March 15, 2006, the judge ruled in the parents’ favor, refusing to declare that it would be lawful to withdraw life-sustaining ventilation.

A momentary look of wistfulness passed over the face of MB’s mother as the judge listed five possible options, one of which was to allow the child to die peacefully in his parents’ arms – the one favoured by the paediatricians. The parents have fought long and hard against the received medical wisdom of the case, even though, as the judge said, they may be deluding themselves that their son has a future.

At long last, Mr Justice Holman gave his ruling that the boy shall live, if not perhaps for long.

In that case, the judge in England did not shift the goal posts on what constitutes death or the conditions under which people are ‘allowed to die.’

But people might be surprised to know that a similar situation had occurred in the US and that there doctors and hospitals were allowed to override the family’s will. Remarkably, this little-noticed event took place on March 15, 2005, during the high point of the events surrounding Terri Schiavo.

While Americans were riveted by dramatic events unfolding in Pinellas Park, Fla., a five-month-old Houston baby took his last breath after a hospital let him die despite his mother’s objections.

Sun Hudson was born Sept. 25 with thanatophoric dysplasia, an incurable and fatal form of dwarfism. Doctors said his tiny lungs would never fully grow and that he would never breathe on his own.

Hudson’s mother, Wanda, put up a fight when doctors advised removing Sun from a respirator. She said she did not believe in sickness or death. (my italics)

This was the first time that life support was removed over the objections of the legal guardian and without any advance directives from the patient, such as a living will. Perhaps the ultimate irony, if not outright hypocrisy, is that this Texas law was signed in 1999 by then-Governor George W. Bush. Baby Sun Hudson was allowed to die in Texas against his mother’s wishes because of that state law, on the very same day that now-President Bush dramatically cut short his vacation and flew back to Washington to sign the federal law that supported the parents’ right to keep life support continuing for Terri Schiavo.

The doctors were able to override the mother’s wishes on March 15, 2005 because the case took place in Texas, a state whose law authorizes doctors and hospitals to override the wishes of a patient’s family. The hospital took this action under the Texas Advance Directives Act (1999), also known as the Texas Futile Care Law, which according to Wikipedia “describes certain provisions that are now Chapter 166 of the Texas Health & Safety Code. Controversy over these provisions mainly centers on Section 166.046, Subsection (e), which allows a health care facility to discontinue life-sustaining treatment against the wishes of the patient or guardian ten days after giving written notice.”

As with the case of shifting the definition of death from heart dead to brain dead, serious ethical issues are raised by this act. There are concerns that this law was passed because hospitals did not want to shoulder the cost of maintaining life support for patients who cannot pay for it. Although the law (as I read it) does not explicitly say that the inability to pay for life support can be a reason for termination of services, it is easy to see that financial considerations are going to come into play.

It is unlikely that patients who have rich families who can pay the bills are going to have their wishes overridden and life support removed. But one can see why hospitals, which have become businesses, would not like the prospect of indefinitely providing expensive life support care if they have no hope of being reimbursed. What adds further suspicion to the view that commercial concerns are significant is that if another hospital is willing to accept the patient, then the patient can be shifted there. But it is unlikely that another hospital is going to accept a new patient who requires extensive life support when that patient is unable to pay.

This blatant hypocrisy and contradiction between Bush’s behavior as governor of Texas and as President later did not go completely unnoticed, though it did not get the attention it warranted. In an editorial on March 22, 2005, the Concord Monitor voiced concern over the implications of the Texas law:

On the same day President Bush interrupted his vacation and rushed to Washington to sign the Schiavo bill, a Texas hospital removed the breathing tube keeping 6-month-old Sun Hudson alive. According to The Houston Chronicle, the hospital’s action, the first of its kind, was made possible by a 1999 bill signed into law by Bush, then Texas’s governor.

That law allows hospitals to discontinue life-sustaining care even when doing so runs counter to the wishes of the patient’s guardians. Before ending the patient’s life under the law Bush signed, however, two conditions must be met. Doctors must deem that there is no chance for recovery and the patient must be unable to pay the hospital bill for continuing care. (my italics)

John Paris, a medical ethicist at Boston College, told Newsday: “The Texas statute that Bush signed authorized the ending of the life, even over the parents’ protest. And what he’s doing here is saying, ‘The parents are protesting. You shouldn’t stop [treatment].’”

Apart from this being another example of Bush subordinating principle to political expediency, it also clearly shows that society is steadily lowering the bar on death, first making it a judgment of whether someone is ‘effectively dead’ and who gets to make that decision, and now coming down to the question of whether someone is worth keeping alive and putting that decision (at least in Texas) in the hands of doctors and hospitals and not parents and guardians. While the judgment that further treatment is futile may be a medical and scientific judgment, the decision to withdraw life support will undoubtedly be also driven by financial considerations as to whether the patients and their families can pay the cost of continued treatment.

To be continued. . .

POST SCRIPT: Canned bird hunts

When not shooting old friends in the face, ‘Deadeye Dick’ Cheney kills birds for fun, and has killed up to 70 pheasants in just one shooting session. What is more, the birds he shot were bred in captivity to make them easy targets and one wonders what kind of fascination he finds in personally slaughtering such a large number of tame birds.

The comic strip Doonesbury suggests one reason, and Nate Corddry from The Daily Show tries to find out what the thrill is by going on one such canned quail hunt and bringing back a report.

Changing notions of death-2: Persistent vegetative state

The next stage in the evolution of when death occurs (see part 1 on this topic) came with the tragic case of Nancy Cruzan.

In 1983, 25-year old Nancy Cruzan careened off the road, flipped over and was thrown from her car into a ditch. Nancy hadn’t breathed for at least 15 minutes before paramedics found and revived her – a triumph of modern medicine launching her family’s seven-year crusade to free Nancy from a persistent vegetative state.

Nancy Cruzan’s sad fate launched a fresh examination of death, centering around whether a person in a particular kind of coma, known as a persistent vegetative state, could be considered to be ‘effectively dead’ even if they did not meet the legal conditions of being heart dead or brain dead.

Changing notions of death-1: Brain death

There is nothing more bracing than starting a new week with the cheery topic of death. I have been thinking about it since listening to noted ethicist Peter Singer’s excellent talk on The ethics of life and death on March 21. He pointed out that the answer to the question “When is someone dead?” is not simple.

Most of us know, by listening to the abortion debate in the US, how hard it is to get agreement on when life begins. Singer’s talk highlighted the other problem, one that does not get nearly as much attention, and that is the question of how we decide that someone is dead.

(Caveat: I could only stay for the first 45 minutes of his talk and did not take notes, so my use of the ideas in his talk is based on my memory. Peter Singer is not to be blamed for any views that I may inadvertently ascribe to him. But his ideas were so provocative that I had to share and build on them. I can see why he is regarded as one of the premier ethical thinkers.)

It used to be that the definition of death was when the heart stopped beating and blood stopped flowing. But that definition was changed so that people whose hearts were still beating but whose brains had no activity were also deemed to be dead.

This change was implemented in 1980 by the Uniform Determination of Death Act, which was supported by the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. This act asserts that: “An individual, who has sustained either (1) irreversible cessation of circulatory and respiratory functions, or (2) irreversible cessation of all functions of the entire brain, including the brainstem, is dead. A determination of death must be made in accordance with accepted medical standards.”

Why did this change come about? Singer says that the background to this change raises some serious ethical questions. Thinking about changes in the definition of death was triggered by the first heart transplant operation, done in 1967 by Dr. Christiaan Barnard in South Africa. Suddenly, the possibility of harvesting the hearts and other organs of dead people for use by others became much more realistic and feasible. But if you waited for the heart to stop beating to determine death, that left you very little time to get a useful organ (because organs decay rapidly once blood stops flowing), whereas if people were merely ‘brain dead’ then you could get organs while they were still fresh and warm, since the circulatory system was still functioning at the time of removal.

Thus the first heart transplant in 1967 was the main impetus for the formation, in 1968, of an ad hoc committee on brain death at Harvard Medical School. That committee laid the foundation for the shift in the definition of death that occurred in 1980, which provided criteria for determining a condition known as “irreversible coma,” “cerebral death,” or brain death.

Note that the change in the definition of death was not due to purely better scientific knowledge of when people died. All that science could say was that from past experience, a person who was ‘brain dead’ had never ever come back to a functioning state. It seems like the decision to change the definition of death was (at least partly) inspired by somewhat more practical considerations involving the need of organs for transplants.

But while the circumstances behind the change in the definition of death raise serious ethical questions, the idea that someone who was ‘brain dead’ was truly dead was a defensible proposition, whatever the reasons for its adoption.

To be continued. . .

POST SCRIPT: Quick! Get back in the closet!

Some time ago, I expressed surprise that some atheists felt uneasy about ‘coming out of the closet.’ But a new University of Minnesota study suggests that there may be good reason for their hesitancy.

From a telephone sampling of more than 2,000 households, university researchers found that Americans rate atheists below Muslims, recent immigrants, gays and lesbians and other minority groups in “sharing their vision of American society.” Atheists are also the minority group most Americans are least willing to allow their children to marry
. . .
Many of the study’s respondents associated atheism with an array of moral indiscretions ranging from criminal behavior to rampant materialism and cultural elitism.

These results are quite amazing. Of course, such negative stereotypes usually arise from ignorance, so maybe if people encountered more atheists and saw how ordinary they are, this view could be dispelled. But it is interesting how so many people feel that god is integral to their “vision of American society.” America seems to be a theocracy in fact, if not in law.

Grade inflation-3: How do we independently measure learning?

Recall (see here and here for previous postings) that to argue that grade inflation has occurred, it is not sufficient simply to show that grades have risen. It must be shown that grades have risen without a corresponding increase in learning and student achievement. And that is difficult to do because there are really no good independent measures of student learning, apart from grades.

Some have argued that the SAT scores of matriculating classes could be used as a measure of student ‘ability’ and could thus be used to see if universities are getting ‘better’ students, thus justifying the rise in grades.

But the use of SAT scores as a measure of student quality or ability has always been deeply problematic, so it is not even clear that any rise in SAT scores of incoming students means anything. One reason is that the students who take the SAT are a self-selected group and not a random sample, so one cannot infer much from changes in SAT scores. Second, SAT scores have not been shown to be predictive of anything really useful. There is a mild correlation of SAT scores with first-year college grades, but that is about it.

Even at Case, not all matriculating students have taken the SAT. Also, the average total SAT score from 1985-1992 was 1271, while the average from 1993-2005 was 1321. This rise in SAT scores of incoming students at Case would be affected by two factors, the first being the re-centering of SAT scores that occurred in 1995. It is not known whether the pre-1995 scores we have at Case are the original ones or have been raised to adjust for re-centering. This lack of knowledge makes it hard to draw conclusions about how much, if at all, SAT scores have risen at Case.

Alfie Kohn cites “Trends in College Admissions” reports that say that the average verbal SAT score of students enrolled in all private colleges rose from 543 in 1985 to 558 in 1999. It is also the case that it was around 1991 that Case instituted merit scholarships based on SAT scores and started aggressively marketing them as a recruiting tool. So it is tempting to argue that there has been a genuine rise in SAT scores for students at Case.

Another local factor at Case that would influence GPAs is the practice of “freshman forgiveness” that began in 1987. Under this program, students in their first year would be “forgiven” any F grades they received, and these Fs would not be counted towards their GPAs. This is bound to have the effect of increasing the overall GPA, although a very rough estimate suggests only a 1-2% increase. This practice was terminated in 2005.
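To see roughly where an estimate of that size comes from, here is a back-of-the-envelope sketch. The 2% share of F grades and the 3.0 average for the remaining grades are assumed figures chosen for illustration, not Case data:

```python
# Back-of-the-envelope estimate of the effect of "freshman forgiveness".
# Assumed (hypothetical) inputs: 2% of first-year grade points are Fs (0.0)
# and the remaining grades average 3.0 on a 4.0 scale.
frac_f = 0.02
avg_non_f = 3.0

gpa_with_f = (1 - frac_f) * avg_non_f + frac_f * 0.0  # Fs counted in the GPA
gpa_forgiven = avg_non_f                              # Fs dropped from the GPA

increase = (gpa_forgiven - gpa_with_f) / gpa_with_f
print(f"{gpa_with_f:.2f} -> {gpa_forgiven:.2f}, a rise of {increase:.1%}")
# 2.94 -> 3.00, a rise of about 2%
```

Under those assumed numbers, forgiving the Fs raises the average GPA from 2.94 to 3.00, which is the order of magnitude of the 1-2% effect mentioned above.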

The Rosovsky-Hartley monograph points to the fact that many more college students are now enrolled in remedial courses than was the case in the past, arguing that this implies that students are actually worse now. But again, that inference is not sound. Over the recent past there has been a definite shift in emphasis in colleges toward retaining the students they recruit. The old model, in which colleges recruited more students than they needed and then ‘weeded’ them out using certain courses in their first year, is no longer in vogue, assuming that there was ever substance to that belief and it is not just folklore.

Now universities go to great lengths to provide assistance to their students, beefing up their advising, tutoring, and other programs to help students stay in school. So the increased enrollment of students in remedial courses may simply be the consequence of universities taking a much more proactive attitude toward helping students, rather than a sign of declining student quality. All these measures are aimed at improving student performance and are another possible benign explanation for any rise in grades. In fact, all these remedial and assistance programs could be used to argue that a rise in grades is due to actual improved student performance.

Alfie Kohn argues that taking all these things into account, there is no evidence for grade inflation, that this is an issue that has been blown way out of proportion by those who have a very narrow concept of the role of grades in learning. Kohn says there are many reasons why grades could rise:

Maybe students are turning in better assignments. Maybe instructors used to be too stingy with their marks and have become more reasonable. Maybe the concept of assessment itself has evolved, so that today it is more a means for allowing students to demonstrate what they know rather than for sorting them or “catching them out.” (The real question, then, is why we spent so many years trying to make good students look bad.) Maybe students aren’t forced to take as many courses outside their primary areas of interest in which they didn’t fare as well. Maybe struggling students are now able to withdraw from a course before a poor grade appears on their transcripts. (Say what you will about that practice, it challenges the hypothesis that the grades students receive in the courses they complete are inflated.)

The bottom line: No one has ever demonstrated that students today get A’s for the same work that used to receive B’s or C’s. We simply do not have the data to support such a claim.

In addition to the factors listed by Kohn, psychologist Steve Falkenberg points out a number of other reasons why average grades could rise. His essay is a particularly thoughtful one that is worth reading.

Part of the problem in judging whether grade inflation exists is that we don’t know what the actual grade distribution in colleges should be. Those who argue that it should be a bell curve (or ‘normal’ distribution) with an average around C are mixing up a normative approach to assessment (as is used for IQ tests and SATs) with an achievement approach.

IQ tests and SATs are designed so that the results are spread out over a bell curve. They seek to measure a characteristic (called “intelligence”) that is supposedly distributed randomly in the population according to a normal distribution. (This assumption, and the whole issue of what constitutes intelligence, is the source of a huge controversy that I don’t want to get into here.) So the goal of such tests is to sort students into a hierarchy, and they are designed to spread out the scores so that one can tell who is in the top 10% and so on.

But when you teach a class of students, you are no longer dealing with a random sample of the population. First of all, you are not giving your assessments to people off the street. The students have been selected based on their prior achievements and are no longer a random sampling of the population. Secondly, by teaching them, you are deliberately intervening and skewing the distribution. Thirdly, your tests should not be measuring the same random variable that things like the SATs measure. If they were, you might as well give your students their grades based on those tests.

Tests should not be measures of some intrinsic ability, even assuming that such a thing exists and can be measured and a number assigned to it. Tests are (or at least should be) measures of achievement: of how much and how well a selected group of students has learned as a result of your instruction. Hence there is no reason at all to expect a normal distribution. In fact, you would expect a distribution that is skewed towards the high end. The problem, if it can be considered a problem, is that we don’t know a priori what that skewed distribution should look like, or whether there is a preferred distribution at all. After all, there is nothing intrinsically wrong with everyone in a class getting As, if they have all learned the material at a suitably high level.
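As a toy illustration of this point, the following sketch (all numbers invented for illustration) applies just two of the factors above, selection and instruction, to a bell-shaped population and shows that the resulting class scores pile up near the top rather than spreading symmetrically around a middling grade:

```python
import random

random.seed(1)

# Invented toy model: 'preparedness' in the population at large is roughly
# bell-shaped around 50 on a 0-100 scale.
population = [random.gauss(50, 15) for _ in range(100_000)]

# 1. Selection: admissions keeps only the better-prepared half.
admitted = [x for x in population if x > 50]

# 2. Intervention: instruction raises everyone's achievement, capped at 100.
scores = [min(100.0, x + 25) for x in admitted]

mean = sum(scores) / len(scores)
share_top = sum(1 for s in scores if s >= 90) / len(scores)
print(f"class mean {mean:.1f}, {share_top:.0%} of the class scoring 90 or above")
# The class distribution bunches near the top instead of forming a bell curve
# centered on a C.
```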

In fact, as Ohmer Milton, Howard Pollio, and James Eison write in Making Sense of College Grades (Jossey-Bass, 1986): “It is not a symbol of rigor to have grades fall into a ‘normal’ distribution; rather, it is a symbol of failure — failure to teach well, failure to test well, and failure to have any influence at all on the intellectual lives of students.”

There is nothing intrinsically noble about trying to keep average grades unchanged over the years, which is what those who complain about grade inflation usually want to do.

On the other hand, one could make the reasonable case that as we get better at teaching and in creating the conditions that make students learn better, and as a consequence we get students who are able to learn more, then perhaps we should raise our expectations of students and provide more challenging assignments, so that they can rise to greater heights. This is a completely different discussion. If we do so, this might result in a drop in grades. But this drop is a byproduct of a thoughtful decision to make learning better, not caused by an arbitrary decision to keep average grades fixed.

This approach would be like car manufacturers and consumers raising their standards over the years so that we now expect a lot more from our cars than we did fifty years ago. Even the best cars of fifty years ago would not be able to meet the current standards of fuel efficiency, safety, and emissions. But the important thing to keep in mind is that standards have been raised along with the ability to make better cars able to meet the higher standards.

But in order to take this approach in education, teachers need to think carefully about what and how we assess, what we can reasonably expect of our students, and how we should teach so that students can learn more and learn better. Unfortunately, much of the discussion of grade inflation short-circuits this worthwhile aspect of the issue, choosing instead to go for a quick fix like putting limits on the number of grades awarded in each category.

It is perhaps worthwhile to remember that fears about grade inflation, that high grades are being given for poor quality work, have been around for a long time, especially at elite institutions. The Report of the Committee on Raising the Standard at Harvard University said: “Grades A and B are sometimes given too readily — Grade A for work of no very high merit, and Grade B for work not far above mediocrity. … One of the chief obstacles to raising the standards of the degree is the readiness with which insincere students gain passable grades by sham work.”

That statement was made in 1894.

POST SCRIPT: Cindy Sheehan in Cleveland tomorrow

Cindy Sheehan will speak at a Cleveland Town Hall Meeting Saturday, March 25, 1-3 pm

Progressive Democrats of Ohio present Gold Star Mother and PDA Board Member Cindy Sheehan at a Town Hall Meeting on Saturday, March 25, 2006 from 1 – 3 p.m. at the Beachland Ballroom, 15711 Waterloo Road in Cleveland’s North Collinwood neighborhood. (directions.)

Topic: Examining The Cost of Iraq: Lives, Jobs, Security, Community

Panelists include:

US Congressman Dennis Kucinich, OH-10
Cindy Sheehan – Gold Star mother & activist
Tim Carpenter, National Director, Progressive Democrats of America
Francis Chiappa, President, Cleveland Peace Action
Paul Schroeder, NE Ohio Gold Star Father and co-founder of Families of the Fallen For Change
Farhad Sethna, Immigration attorney and concerned citizen

Grade inflation-2: Possible benign causes for grade increases

Before jumping to the conclusion that a rise in average grades must imply inflation (see my previous posting on this topic), we should be aware of the dangers that exist when we are dealing with averages. For example, suppose we consider a hypothetical institution that has just two departments A and B. Historically, students taking courses in A have had average grades of 2.5 while those in B have had 3.0. Even if there is no change at all in the abilities or effort of the students and no change in what the faculty teach or the way that faculty assess and grade, so that the average grades in each department remain unchanged, it is still possible for the average grades of the institution to rise, simply because the fraction of students taking courses in B has become larger.
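A quick numerical sketch (using the hypothetical figures above) makes the arithmetic concrete: the institution-wide average is just a weighted mean of the two departmental averages, so it rises whenever the enrollment mix shifts toward department B, even though neither department changes its grading.

```python
def institution_gpa(frac_in_b, avg_a=2.5, avg_b=3.0):
    """Institution-wide average grade as a weighted mean of two departments.

    frac_in_b: fraction of all grades earned in department B.
    avg_a, avg_b: average grades within departments A and B (held fixed).
    """
    return (1 - frac_in_b) * avg_a + frac_in_b * avg_b

# Only the enrollment mix changes; the department averages stay put.
print(institution_gpa(0.30))  # 30% of grades earned in B -> 2.65
print(institution_gpa(0.60))  # 60% of grades earned in B -> 2.80
```

With 30% of grades earned in B the overall average is 2.65; with 60% it is 2.80, a rise with no change in grading standards anywhere.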

There is evidence that this shifting around in the courses taken by students is just what is happening. Those who are convinced that grade inflation exists and that it is evil, tend to interpret this phenomenon as game playing by students, that they are manipulating the system, choosing courses on the basis of how easy it is to get high grades rather than by interest or challenge.

For example, the ERIC report says “In Grade Inflation: A Crisis in College Education (2003), professor Valen E. Johnson concludes that disparities in grading affect the way students complete course evaluation forms and result in inequitable faculty evaluations. . . Students are currently able to manipulate their grade point averages through the judicious choice of their classes rather than through moderating their efforts. Academic standards have been diminished and this diminution can be halted, he argues, only if more principled student grading practices are adopted and if faculty evaluations become more closely linked to student achievement.”

This looks bad and the author obviously wants to make it look bad, as can be seen from his choice of the word ‘manipulate’ to describe the students’ actions and the way he implies that faculty are becoming more unprincipled in their grading practices. But there is no evidence for the evil motivations attributed to such students and faculty. In fact, one can look at the phenomenon in a different way. It is undoubtedly true that students now have many more choices than they did in the past. There are more majors and more electives. When you offer more choices, students are more likely to choose courses they are interested in and thus are more likely to do better in them.

Furthermore, even if students are choosing courses partly based on their expectation of the grade they will receive, we should not be too harsh in our judgments. After all, we have created a system in which grades seem to be the basis for almost everything: admission to colleges and graduate schools, honors, scholarships, and financial aid. As I said, grades have become the currency of higher education. Is it any wonder that students factor in grades when making their choices? If a student tries to balance courses they really want to take against those they know they can get a high grade in, in order to maintain the GPA they need to retain their scholarships, why is this to be condemned? It seems to me a sensible strategy. After all, faculty do that kind of thing all the time. When faculty learn that the NIH or NSF is shifting its grant funding emphasis to some new research area, many will shift their research programs accordingly. We do not pour scorn on them for this, telling them that they should choose research topics purely based on their interests. Instead, we commend them for being forward thinking.

It certainly would be wonderful if students chose courses purely on the basis of their interest or usefulness or challenge and not on grade expectations, but to put students in the current grade-focused environment and expect them to ignore grades altogether when making their course selection is to be hypocritical and send mixed messages.

What about the idea that faculty grading standards have declined and that part of the reason is that they are giving easy grades in order to get good evaluations? This is a very popular piece of folklore on college campuses. But this question has also been studied, and the data simply do not support it. It does seem to be true that students tend to get higher grades in the courses they rate higher. But to infer a causal relationship, that if a faculty member gives higher grades they will get better evaluations, is wrong.

People who have studied this find that if a student likes a course and a professor (and thus gives good evaluations), then they will tend to work harder at that course and do better (and thus get higher grades), bringing about the grades-evaluations correlation that we see. But how much a student likes a course and professor seems to depend on whether the student feels that she or he is actually learning interesting and useful things. Students, like anybody else, don’t like to feel they are wasting their time and money, and do not enjoy being with a professor who does not care for them or respect them.

Remember that these studies report on general trends. It is quite likely that there exist individual professors who give high grades in a misguided attempt to bribe students into giving good evaluations, and that there exist students willing to be so bribed. But such people are not the norm.

To be continued. . .

POST SCRIPT: And the winner is. . .

Meanwhile, there are Americans who have already decided which country the US should invade next in the global war on terror, even if they haven’t the faintest idea where that country is on the globe or why it should be invaded. Even Sri Lanka gets a shot at this particularly dubious honor.

Here’s an idea for a reality show, along the lines of American Idol. It will be called Who’s Next? The contestants will be the heads of state of each country, and this time their goal will be to get voted off, because the last remaining country gets bombed and invaded by the US. The judges could be Dick Cheney (providing the sarcastic put-downs a la Simon Cowell; sample: “You think we’re going to waste our smart bombs on your dumb country?”), Donald Rumsfeld, and Condoleezza Rice.

Fox television, my contract is ready and I’m waiting for your call.