There is nothing that gets the juices flowing in higher education academic circles quite like the topic of grade inflation. Part of the reason for this passion may be that grades and test scores, and not learning, seem to have become the currency of education, dominating the thinking of both students and faculty. Hence some people monitor grades as an important symptom of the health of universities.
But what is curious is that much of the discussion takes place in the absence of any hard data. It seems as if perceptions or personal anecdotes are taken as a sufficient basis for drawing quite sweeping conclusions and prescribing action.
One of the interesting things about the discussion is how dismayed some faculty get simply at the prospect that average grades have risen over time. I do not quite understand this. Is education the only profession where evidence, at least on the surface, of a rise in quality is seen as a bad thing? I would hope that, as in every other profession, we teachers are getting better at what we do. I would hope that we now understand better the conditions under which students learn best and have incorporated that knowledge into our classrooms, resulting in higher achievement by students. Any other profession or industry would welcome the news that fewer people are doing poorly or that fewer products are rejected for not meeting quality standards. But in higher education, rising grades are simply assumed to be bad.
Of course, if grades are rising because our assessment practices are becoming lax, then that is a cause for concern, just as it would be if a manufacturer reduced the rejection rate of its products by lowering its quality standards. This is why an independent measure of student learning and achievement, against which grades can be compared, has to be an important part of the discussion.
Grade inflation is a concept with a deliberate analogy to monetary inflation: to infer that grades have inflated (as opposed to merely risen) is to claim that grades have risen without a corresponding increase in learning and student achievement. But in much of the discussion, this important conditional clause is dropped, and a rise in grades is taken by itself as sufficient evidence that inflation has occurred.
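One crude way to make the analogy precise (this formalization is mine, offered only for illustration, not something drawn from the literature) is to mirror the distinction between nominal and real monetary values:

```latex
% Let \bar{G} be the average grade and \bar{A} the average score on some
% independent measure of achievement, re-expressed on the grade scale.
% By analogy with nominal versus real monetary values:
\[
  \text{grade inflation} \;\approx\; \Delta\bar{G} - \Delta\bar{A}
\]
% A rise in \bar{G} fully matched by a rise in \bar{A} is a real gain in
% learning, not inflation; only the unmatched portion counts as inflation.
```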
Let’s take first the question of whether average grades have actually risen. At Case, as some of you may know, beginning in January 2003, the GPA cutoffs to achieve honors were raised to 3.56 (cum laude), 3.75 (magna cum laude), and 3.88 (summa cum laude) so that only 35% of students would be eligible for honors. (The earlier values were 3.20, 3.50, and 3.80 respectively.) This measure was taken because the percentage of students graduating with honors had risen steadily over the years, to well above the 35% originally anticipated when the earlier bars were set.
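The arithmetic behind such a cutoff is straightforward: if only 35% of students are to qualify for honors, the cum laude bar must sit at the 65th percentile of graduating GPAs, and a bar fixed once and for all will let through more than 35% if average grades later drift upward. Here is a minimal sketch of that calculation, using simulated GPAs rather than actual Case registrar data:

```python
# A minimal sketch of setting an honors cutoff from a percentile target.
# The GPAs here are simulated, not actual Case data; the mean of 3.0 and
# standard deviation of 0.7 echo the figures quoted in the next paragraph.
import numpy as np

rng = np.random.default_rng(0)
gpas = np.clip(rng.normal(loc=3.0, scale=0.7, size=1000), 0.0, 4.0)

# If honors (cum laude and above) should cover only 35% of students,
# the lowest honors cutoff belongs at the 65th percentile.
cum_laude_cutoff = np.percentile(gpas, 65)
print(f"cum laude cutoff for 35% eligibility: {cum_laude_cutoff:.2f}")

# A cutoff frozen at this value will admit more than 35% of students
# in any later year whose grade distribution has shifted upward.
```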
A look at grade point averages at Case shows that the average was 2.99 in 1975 (the first year for which we have this data), dropped slowly and steadily to 2.70 in 1982, rose to 3.02 in 1987, stayed around that value until 1997, and has since oscillated around 3.20 (through 2005), peaking at 3.27 in 2001. The overall average for the entire period was 3.01 and the standard deviation was about 0.70. (I am grateful to Dr. Julie Petek, Director of Degree Audit and Data Services, for this and other Case data.)
It is hard to know what to make of this. On the one hand, we could start at the lowest point in the grade curve and say that grades have risen by half a letter grade from 1982 to 2005. On the other, we could start at 1975 and say that grades have merely fluctuated in the range 2.70-3.30, or within about half a standard deviation of the mean of 3.0.
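That second reading is easy to check against the figures just quoted; a quick back-of-the-envelope computation, using the overall mean of 3.01 and standard deviation of 0.70 reported above:

```python
# How far are the extremes of the Case GPA series from the long-run mean,
# measured in standard deviations? (Figures quoted above.)
mean, sd = 3.01, 0.70
for year, gpa in [(1982, 2.70), (2001, 3.27)]:
    z = (gpa - mean) / sd
    print(f"{year}: GPA {gpa:.2f} sits {z:+.2f} standard deviations from the mean")
# Prints roughly -0.44 for 1982 and +0.37 for 2001: both extremes lie
# within about half a standard deviation of the mean.
```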
What does the data say nationwide? Henry Rosovsky and Matthew Hartley, writing in a monograph for the American Academy of Arts and Sciences, are convinced that inflation has occurred. For evidence of grades increasing, they point to various nationwide surveys showing that average grades rose by 0.43 from 1960 to 1974; that 26% of grades were A- or higher in 1993, compared to 7% in 1969, while the share of C’s dropped from 25% to 9% over the same period; and that averages rose from 3.07 in the mid-1980s to 3.34 in the mid-1990s.
Regarding this last result, it is interesting to note that another study found that grades rose on average only at selective liberal arts colleges and research universities, while they declined at general liberal arts colleges, at comprehensive colleges and universities, and in the humanities and social sciences.
The Rosovsky-Hartley monograph has been criticized for depending on research that was itself based on surveys, and such studies can be questioned because it is not clear how reliable survey responses are, depending as they do on self-reporting.
A 2003 ERIC Digest of the literature on this topic finds results that cast doubt on the basic question of whether average grades have even risen. For example, “Clifford Adelman, a senior research analyst with the U.S. Department of Education, reviewed student transcripts from more than 3,000 colleges and universities and reported in 1995 that student grades have actually declined slightly over the last 20 years.” (my emphasis). His study of 16.5 million graduates from 1999-2000 also found that 14.5% of these students received As while 33.5% received grades of C or lower.
What is significant about the Adelman study is that he used actual student transcripts, not surveys, and so his results seem to me to be more reliable.
It seems from this data and other studies that average grades have not increased across the board, but it is plausible that they have increased at selective liberal arts colleges and research universities. The Rosovsky-Hartley monograph says that “In 1966, 22 percent of all grades given to Harvard undergraduates were in the A range. By 1996 that percentage had risen to 46 percent and in that same year 82 percent of Harvard seniors graduated with academic honors. In 1973, 30.7 percent of all grades at Princeton were in the A range and by 1997 that percentage had risen to 42.5 percent.”
Even though it has not been conclusively established, suppose that, for the sake of argument, we concede that at least selective liberal arts colleges and research universities (such as Case) have seen a rise in average grades. Is this automatically evidence of grade inflation? Or are there more benign causes, such as that we are getting better-prepared and more able students now, or that our teaching methods have improved? Another important issue is whether Case’s experience of rising grades is part of a national trend or an exception.
To be continued...
POST SCRIPT: Where’s the balance?
Over at Hullabaloo, Tristero catches the Washington Post in a blatant act of bias in favor of atheistic science. The Post article says:
Scientists said yesterday they have found the best evidence yet supporting the theory that about 13.7 billion years ago, the universe suddenly expanded from the size of a marble to the size of the cosmos in less than a trillionth of a second.
Tristero points out that the article completely fails to mention the controversy around this question, that there is another side to this story, that the big bang is “only a theory” since no one was there to actually observe this event and prove that it happened.
And not a word of balance from the other side, as if the sincere faith of millions of Americans in a Christian God didn’t matter at all to the Post’s editors.
I just hate it when the media reports carefully vetted scientific data as fact and not as just one of many valid points of view. I’m not asking for them to ignore the opinions of these so-called scientists, but they really should report the fact that there’s a lot of controversy about whether this kind of evidence is valid. Like, were you there, huh, Mr. Hotshot Washington Post?