John Bohannon of Science magazine has developed a fake science paper generator. He wrote a simple little program: push a button, and out come hundreds of phony papers, each unique, with different authors, different molecules, and different cancers, in a format that’s painfully familiar to anyone who has read any cancer journals recently.
The goal was to create a credible but mundane scientific paper, one with such grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable. Submitting identical papers to hundreds of journals would be asking for trouble. But the papers had to be similar enough that the outcomes between journals could be comparable. So I created a scientific version of Mad Libs.
The paper took this form: Molecule X from lichen species Y inhibits the growth of cancer cell Z. To substitute for those variables, I created a database of molecules, lichens, and cancer cell lines and wrote a computer program to generate hundreds of unique papers. Other than those differences, the scientific content of each paper is identical.
The fictitious authors are affiliated with fictitious African institutions. I generated the authors, such as Ocorrafoo M. L. Cobange, by randomly permuting African first and last names harvested from online databases, and then randomly adding middle initials. For the affiliations, such as the Wassee Institute of Medicine, I randomly combined Swahili words and African names with generic institutional words and African capital cities. My hope was that using developing world authors and institutions would arouse less suspicion if a curious editor were to find nothing about them on the Internet.
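Bohannon hasn’t released his code or word lists, so what follows is only a guess at the shape of the thing: a minimal Python sketch of the Mad Libs procedure described above. Every list entry is invented for illustration, except that “Ocorrafoo”, “Cobange”, and “Wassee” come from the quote itself.

```python
import random

# Hypothetical stand-ins for the harvested databases described above.
# Only "Ocorrafoo", "Cobange", and "Wassee" appear in Bohannon's account;
# everything else here is invented for illustration.
MOLECULES = ["usnic acid", "vulpinic acid", "atranorin"]
LICHENS = ["Usnea longissima", "Letharia vulpina", "Cladonia rangiferina"]
CELL_LINES = ["HeLa", "MCF-7", "A549"]
FIRST_NAMES = ["Ocorrafoo", "Nkemelu", "Abebe"]
LAST_NAMES = ["Cobange", "Okonkwo", "Tesfaye"]
SWAHILI_WORDS = ["Wassee", "Kimbala", "Zuwena"]
INSTITUTE_WORDS = ["Institute of Medicine", "Academy of Natural Sciences"]
CAPITALS = ["Asmara", "Kampala", "Dodoma"]

def fake_author():
    """Random first/last name pairing plus one or two random middle initials."""
    initials = " ".join(random.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ") + "."
                        for _ in range(random.randint(1, 2)))
    return f"{random.choice(FIRST_NAMES)} {initials} {random.choice(LAST_NAMES)}"

def fake_affiliation():
    """A Swahili word, generic institutional words, and an African capital."""
    return (f"{random.choice(SWAHILI_WORDS)} {random.choice(INSTITUTE_WORDS)}, "
            f"{random.choice(CAPITALS)}")

def fake_paper():
    """Fill the single template: molecule X from lichen Y inhibits cancer cell Z."""
    return {
        "title": (f"{random.choice(MOLECULES).capitalize()} from "
                  f"{random.choice(LICHENS)} inhibits the growth of "
                  f"{random.choice(CELL_LINES)} cells"),
        "author": fake_author(),
        "affiliation": fake_affiliation(),
    }

print(fake_paper())
```

Feed it big enough word lists and the combinatorics hand you hundreds of superficially distinct papers wrapped around one identical, fatally flawed experiment.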
The data are totally fake, and the fakery is easy to spot — all you have to do is read the paper and think a teeny-tiny bit. The only way they’d get through a review process is if there were negligible review and the papers were basically rubber-stamped.
The papers describe a simple test of whether cancer cells grow more slowly in a test tube when treated with increasing concentrations of a molecule. In a second experiment, the cells were also treated with increasing doses of radiation to simulate cancer radiotherapy. The data are the same across papers, and so are the conclusions: The molecule is a powerful inhibitor of cancer cell growth, and it increases the sensitivity of cancer cells to radiotherapy.
There are numerous red flags in the papers, with the most obvious in the first data plot. The graph’s caption claims that it shows a "dose-dependent" effect on cell growth—the paper’s linchpin result—but the data clearly show the opposite. The molecule is tested across a staggering five orders of magnitude of concentrations, all the way down to picomolar levels. And yet, the effect on the cells is modest and identical at every concentration.
One glance at the paper’s Materials & Methods section reveals the obvious explanation for this outlandish result. The molecule was dissolved in a buffer containing an unusually large amount of ethanol. The control group of cells should have been treated with the same buffer, but they were not. Thus, the molecule’s observed “effect” on cell growth is nothing more than the well-known cytotoxic effect of alcohol.
The second experiment is more outrageous. The control cells were not exposed to any radiation at all. So the observed “interactive effect” is nothing more than the standard inhibition of cell growth by radiation. Indeed, it would be impossible to conclude anything from this experiment.
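To make the most obvious red flag concrete: “dose-dependent” means the response has to change with the dose. A toy sketch, with numbers invented to match the description of the figure rather than taken from the actual papers:

```python
# Invented numbers mimicking the described figure: concentrations spanning
# five orders of magnitude, down to picomolar, with a modest effect that
# never changes.
concentrations_molar = [1e-12, 1e-11, 1e-10, 1e-9, 1e-8]
growth_inhibition = [0.15, 0.15, 0.15, 0.15, 0.15]

# A dose-dependent effect requires, at minimum, a response that is not
# constant across doses. Here it is constant, so the caption's linchpin
# claim fails on the paper's own data.
dose_dependent = len(set(growth_inhibition)) > 1
print(dose_dependent)  # False
```

Any reviewer who checked the plotted points against the caption would run straight into that contradiction.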
This procedure should all sound familiar: remember Alan Sokal? He carefully hand-crafted a fake paper full of po-mo gobbledy-gook and buzzwords, and got it published in Social Text — a fact that has been used to ridicule post-modernist theory ever since. This is exactly the same thing, enhanced by a little computer work and mass produced. And then Bohannon sent out these subtly different papers to not one, but 304 journals.
And not literary theory journals, either. 304 science journals.
It was accepted by 157 journals, and rejected by 98.
So when do we start sneering at science, as skeptics do at literary theory?
Most of the publishers were Indian — that country is developing a bit of an unfortunate reputation for hosting fly-by-night journals. Some were flaky personal obsessive “journals” that were little more than a few guys with a computer and a website (think Journal of Cosmology, as an example). But some were journals run by well-known science publishers.
Journals published by Elsevier, Wolters Kluwer, and Sage all accepted my bogus paper. Wolters Kluwer Health, the division responsible for the Medknow journals, "is committed to rigorous adherence to the peer-review processes and policies that comply with the latest recommendations of the International Committee of Medical Journal Editors and the World Association of Medical Editors," a Wolters Kluwer representative states in an e-mail. "We have taken immediate action and closed down the Journal of Natural Pharmaceuticals."
Unfortunately, this sting had a major flaw. It was cited as a test of open-access publishing, and it’s true, there are a great many exploitive open-access journals. These are journals where the author pays a fee — sometimes a rather large fee of thousands of dollars — to publish papers that readers can view for free. You can see where the potential problems arise: the journal editors profit by accepting any papers, the more the better, so there’s pressure to reduce quality control. It’s also a situation in which con artists can easily set up a fake journal with an authoritative title, rake in submissions, and then, perfectly legally, publish them. It’s a nice scam. You can also see where Elsevier would love it.
But it’s unfair to blame open-access journals for this problem. The Science article even notes that one open-access journal was exemplary in its treatment of the paper.
Some open-access journals that have been criticized for poor quality control provided the most rigorous peer review of all. For example, the flagship journal of the Public Library of Science, PLOS ONE, was the only journal that called attention to the paper’s potential ethical problems, such as its lack of documentation about the treatment of animals used to generate cells for the experiment. The journal meticulously checked with the fictional authors that this and other prerequisites of a proper scientific study were met before sending it out for review. PLOS ONE rejected the paper 2 weeks later on the basis of its scientific quality.
The other problem: NO CONTROLS. The fake papers were sent off to 304 open-access journals (or, more properly, pay-to-publish journals), but not to any traditional journals. What a curious omission — that’s such an obvious aspect of the experiment. The results would be a comparison of the proportion of traditional journals that accepted it vs. the proportion of open-access journals that accepted it… but as it stands, I have no idea if the proportion of bad acceptances within the pay-to-publish community is unusual or not. How can you publish something without a control group in a reputable science journal? Who reviewed this thing? Was it reviewed at all?
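For the record, the missing analysis would have been trivial. Here is a hypothetical sketch of the comparison a control arm would have enabled, via a standard two-proportion z-test. The open-access numbers come from the article (157 acceptances among the 157 + 98 = 255 journals that rendered a decision); the traditional-journal numbers are pure invention, because that arm was never run, which is exactly the point:

```python
from math import sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Pooled two-proportion z-test for a difference between acceptance rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_a - p_b) / se

# Open-access arm: real numbers from the sting (157 of 255 decisions).
# Traditional arm: INVENTED numbers; no such journals were tested.
p_oa, p_trad, z = two_proportion_z(157, 255, 30, 255)
print(f"open access: {p_oa:.0%} vs. traditional: {p_trad:.0%} (z = {z:.1f})")
```

Without the second arm there is nothing to plug into the test, and the 157 acceptances float free of any baseline.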
Oh. It’s a news article, so it gets a pass on that. It’s also published in a prestigious science journal, the same journal that printed this:
This week, 30 research papers, including six in Nature and additional papers published online by Science, sound the death knell for the idea that our DNA is mostly littered with useless bases. A decade-long project, the Encyclopedia of DNA Elements (ENCODE), has found that 80% of the human genome serves some purpose, biochemically speaking. Beyond defining proteins, the DNA bases highlighted by ENCODE specify landing spots for proteins that influence gene activity, strands of RNA with myriad roles, or simply places where chemical modifications serve to silence stretches of our chromosomes.
And this:
Life is mostly composed of the elements carbon, hydrogen, nitrogen, oxygen, sulfur, and phosphorus. Although these six elements make up nucleic acids, proteins, and lipids and thus the bulk of living matter, it is theoretically possible that some other elements in the periodic table could serve the same functions. Here, we describe a bacterium, strain GFAJ-1 of the Halomonadaceae, isolated from Mono Lake, California, that is able to substitute arsenic for phosphorus to sustain its growth. Our data show evidence for arsenate in macromolecules that normally contain phosphate, most notably nucleic acids and proteins. Exchange of one of the major bio-elements may have profound evolutionary and geochemical importance.
I agree that there is a serious problem in science publishing. But the problem isn’t open-access: it’s an overproliferation of science journals, a too-frequent lack of rigor in review, and a science community that generates least-publishable-units by the machine-like application of routine protocols in boring experiments.
Sastra says
If this was really an experiment, Bohannon would have discreetly submitted one of the studies to Science magazine. And let the chips fall where they may.
Woosters, pseudoscientists, and the spiritual/religious too often use the results of this sort of study to play a tu quoque, smugly pointing out that hey, science isn’t very rigorous either so it’s not like they’re doing anything wrong (“Okay, maybe I broke the lamp but look what brother Billy did so don’t punish me.”)
But of course a lack of rigor is a flaw in science. The open-access journals should be ashamed. They need to improve.
I’m curious as to their reactions, though. Will they admit fault and resolve to become more stringent? Or will they do a lot of woo-style whining about how the test wasn’t fair?
PZ Myers says
Some did whine that the test wasn’t fair — that they rely on trust with their scientific submitters. (NEVER TRUST ANYONE, I say).
The problem is that the sample was pre-loaded to bias it towards many unethical publishers, and it’s true — 90% of everything is crap. I really would want to see an unbiased sample that also investigated the traditional journals.
One problem with doing controls like that, though, is this statement: “My hope was that using developing world authors and institutions would arouse less suspicion if a curious editor were to find nothing about them on the Internet.”
Right there, he’s loaded his papers with a cue that will get them rejected by the big name journals, which tend to favor known names and prestigious institutions. If you’re White Man With Famous Science Prize publishing from Lab at Ivy League Institution, it’s easier to get your paper accepted. Even if your paper is crap.
Reginald Selkirk says
It sounds like they are building on the Bollywood model. Do all of the papers have to have musical dance numbers?
Sastra says
PZ #2 wrote:
But the same problem arises for both the Sokal hoax and the Bohannon one: the real issue isn’t the lie about the source or even the experiment, but the general badness of the paper itself. It was seriously flawed. There are plenty of sincere, honest people who fool themselves into thinking they are doing good science (or good literary theory) when they are not. They’re perfectly trustworthy in the sense that they’re not deliberately and knowingly lying by the normal standards. But they’re incompetent.
Journals ought to expect incompetence. That’s why there is peer review.
Hmm. Sounds like it’s time for White Man With Famous Science Prize to borrow the generator.
He just better document up front that this is what he’s doing…
Brandon says
So, my takeaway is that you can publish a crappy paper in an Indian open-access journal. That doesn’t really sound like an interesting finding.
F [is for failure to emerge] says
And peer review then is what, proofreading?
I guess they just trust their reviewers, too.
PZ Myers says
And a crappy Elsevier journal, and a crappy Kluwer journal. The finding is that bad journals exist, which isn’t too surprising.
Alex says
I find the differences between fields fascinating. In my neck of the woods (high energy physics theory) there are roughly 5 journals that get 95% of publications. The idea of having hundreds of journals that would be suitable for a particular paper is completely alien to me. I wonder whether that is simply due to the sheer size of the field. In any case, HEP is a rather closely knit community I suppose, which has its advantages, but maybe also disadvantages.
Brandon says
Yeah, fair enough.
I’d rather this not be the case, but I don’t think I can bring myself to get too wound up about it. I don’t think anyone’s building a real impressive career on the backs of no-impact-factor open-access journals that are barely heard of. I’m not saying they need to be published in Science before I care, but at least something like Infection and Immunity level.
David Marjanović says
I regularly get spam from alleged journals that want me to submit manuscripts.
Wow. That’s hardcore.
Exactly.
Oh, I was going to whine that reviewers, myself included, simply don’t expect fraud* – the purpose of peer review is to check whether the conclusions in a manuscript follow from the data, not whether the data are made up.
But then I read on. The conclusions didn’t follow from the data, and this wasn’t even hidden in a huge Excel file in the Supplementary Information or something – it was plain from just looking at the figures. Boooo.
* Actually, those in fields where real money is involved might. I’m just far from any of those.
A professor of engineering told me that this is really bad in his field. In fact, he said, he had students of his (whom he then promptly kicked out) submit manuscripts with his name on them – without his knowledge!
Not sure about the papers, but everything else does…
Exactly.
I like this idea :-)
Who reviews the reviewers?
kantalope says
The Nature podcast interviewed the author: http://www.nature.com/nature/podcast/
It is interesting. The experiment didn’t include submission to more traditional journals because the original investigation was about how some journals start adding on extra fees as you get closer to publication: cha-ching.
katiemarshall says
To be fair, I’ve also had troubles with peer review when it’s clear the reviewer didn’t spend a lot of time reading the paper or examining the data. My feeling is it’s because people end up spending an awful lot of time reviewing papers…which they do for free.
kantalope says
Aak, I listen to too many podcasts and maybe I should do some fact-checking before hitting submit — it was the Science mag podcast: http://www.sciencemag.org/site/multimedia/podcast/
but you should listen to the nature podcasts too….and don’t let kitteh jump on your lap if you are trying to make intelligent posts.
Antiochus Epiphanes says
*nods*
Fraudulent MSs created by someone with expertise wouldn’t be easily detectable.
David Marjanović says
Cancer research is probably pretty much unique in how much of it is done, likely more than the existing journals can keep up with. It’s the field that’s being funded, after all.
How important the impact factor is differs a lot between countries and to a lesser extent between disciplines. I couldn’t afford to publish in a journal that was just founded and doesn’t have any impact factor yet.
David Marjanović says
Mass spectrographs? Yeah.
Anyway, look at this spam I got two days ago! :-)
Boldface and italics not included because I’m too lazy; red and blue not included because I can’t; e-mail addresses kept in because I want them to get spam. They deserve it.
Nope, I’ve never worked or published in any chemical or pharmaceutical sciences. Some bot found a scientific paper with me as the corresponding author, that’s all.
chris61 says
Back in the day when one had to physically haul oneself to the library to browse the journals or search through Index Medicus to find articles that one wanted to read, I’d have resented both the experiment and the Science article that resulted as a waste of my time. But now that journals can be both browsed and searched online, that 90% of everything published is crap doesn’t bother me so much. In fact I think some of that crap is useful for teaching students (a) not to believe everything they read just because it’s published and (b) that you can save yourself a lot of time by looking at the figures and the methodology and drawing your own conclusions before bothering to read what the authors say about their data.
unclefrogy says
17
I thought the point of peer review was to check out the paper to determine if the figures and methodology were any good in the first place, not just the conclusions.
uncle frogy
Theron Corse says
I heard about this and immediately wondered why he thought it necessary to make these papers seem to have come from Africans working at African institutions? Huh?
busterggi says
Melba Ketchum should sue him for plagiarism.
maxdevlin says
Hence, the omission is not simply curious but significant. In fact, it reflects the very nature of experiments about human behavior, because they are all themselves simply examples of human behavior rather than the empirical quantitative analysis they pretend to be. No controls and they think it is informative? It would be pathetic if it weren’t so typical.
garydargan says
You also have to wonder how much the production of lowest common denominator papers and the proliferation of journals is related to performance measures which include publication of research papers at the expense of other activities such as teaching and public outreach.
demonhauntedworld says
The Sokal affair was less about whether pomo lit crit had a rigorous peer review process and more about the fact that the field was not only filled with meaningless gobbledygook, but virtually depended on deliberate obscurantism as a signifier of credibility.
The point was not that the Sokal paper got through, but that the Sokal paper was indistinguishable from the dreck that others were passing off as serious scholarship.
Ichthyic says
Indeed. Given that idea though… isn’t it about time that David Sloan Wilson fessed up?
;)
Ichthyic says
An early example of Poe’s Law?
Ichthyic says
harder for most to track and verify the reality of the institutions and authors listed in the papers.
just that simple.
don’t read too much into it.
Ichthyic says
but, the fact that it is EASY to vet people and institutions and look at citations on the internet now should make you come to the exact OPPOSITE conclusion:
since it is sooo much easier now to actually go and check out information, there should be LESS crap being published, not more.
PZ Myers says
#23, demonhaunted world:
The point was not that the Bohannon paper got through, but that the Bohannon paper was indistinguishable from the dreck that others were passing off as serious scholarship.
demonhauntedworld says
Apples and oranges, PZ. Is it possible (in either theory or practice) to readily distinguish a legitimate oncology paper from Bohannon’s? Yes. Was it possible to distinguish Sokal’s paper from a “legitimate” litcrit paper? No.
Kevin Henderson says
Published science (open or not) is getting much easier to ignore and parse. There is much less harm than many perceive with irreproducible results. Specialized communities quickly discern novelty from deception, innovation from monotony. In some fields of science there is almost no room for mistakes, e.g., space or weapons.
Pragmatism and money always win in the end.
____
An engineering firm that builds a faulty bridge based on an overfitted model will be sued or fined out of existence; to date, we know of no ecological theorist whose similarly overfitted model has evoked comparable penalties. Because society demands little from theoretical ecology, one can have a successful lifetime career in the field without any of one’s theories being put to the practical test of actual prediction.
Ginzburg, L.R. and Jensen, C.X.J. (2004) Rules of thumb for judging ecological theories. Trends Ecol. Evol. 19, 121–126
Ada says
@29
IMO, the answer to your second question is also yes. So I’m curious: on what evidence do you base your belief that it wasn’t possible?
demonhauntedworld says
Because countless other people who were (and perhaps still are) regarded as luminaries in the field were writing stuff that literally amounted to nonsense? I’d refer you to both Gross & Levitt’s Higher Superstition and Gross, Levitt, and Lewis’ Flight from Science and Reason for examples.
consciousness razor says
demonhauntedworld, #29:
Bullshit. What the hell do you mean by “serious scholarship” in #23, if it isn’t a legitimate litcrit paper?
Are they supposed to be science in order to count as “legitimate”? What exactly do you think they’re trying to do … give scientific explanations of physical phenomena?
demonhauntedworld says
Answered in #32. To give an example from Higher Superstition, Steven Best, who co-authored Postmodern Theory: Critical Interrogations asserted that: “The dialectic between order and disorder also suggests a reevaluation of the Law of Entropy, no longer viewed simply as system decay and breakdown but as creations of new forms of order.”
As Gross and Levitt point out “Unfortunately for the gravamen of his argument, this realization represents no breakthrough inspired by chaos theory. The formation of the orderly arrangement of a snowflake, for instance, from an unordered collection of water molecules is, in fact, an entropy-increasing process, a fact that is quite well understood in classical thermodynamics and is, again, taught in elementary courses.”
Best also attempts to explain the difference between linear and nonlinear mathematics by saying: “Unlike the linear equations used in Newtonian and even quantum mechanics…”
But Gross and Levitt point out that “Newtonian laws of celestial mechanics are expressed by a decidedly nonlinear system of ordinary differential equations.”
consciousness razor says
No, it isn’t. A relevant answer would be something like, “okay, my bad, it’s possible to distinguish bullshit from legitimate scholarship,” or “here’s why it’s not possible to do that.” You just give more examples of bullshit and incorrect claims. That would be a strange way of conceding the point that you were wrong.
tyroneslothrop says
Sokal didn’t have the courage of his convictions to submit his paper for peer review. Social Text did not conduct peer review. There was an editorial review (and here Sokal and the editors disagree about the details).
I find Bohannon’s sting just as distasteful as Sokal’s fraud. Both submitted work in bad faith. As academics we submit our work in good faith. When we violate that good faith, we forfeit our ability to be taken seriously as scholars. And what did we learn here? Nothing. Who did not know that we’ve turned academic publishing into a for-profit scam? I didn’t need Bohannon wasting the time of peer-reviewers (already over-worked) to show me that. Nor did I need Sokal showing that if you submit something in bad faith to a non-peer-reviewed journal, it can get published.
demonhauntedworld says
So pointing out examples of people who were/are considered serious postmodern scholars writing abject nonsense is insufficient proof that it’s not possible to distinguish bullshit from what is/was claimed to be legitimate scholarship?
Ok, then.
demonhauntedworld says
One of these quotes is from Sokal’s paper. One is from a legitimate pomo scholar. One is randomly generated from a computer program. Without Googling, can you tell which is which?
1:
2:
3:
firstapproximation says
When asked about the Bogdanov affair (a controversy about some horrible physics papers that somehow got published) Sokal used a different idiom:
Apparently, he was “almost disappointed” that the authors were not pulling a hoax like the one he did.
Whatever the merits of postmodernism, it’s definitely true that too much junk science gets published. I would like to see this experiment carried out with traditional journals.
consciousness razor says
I’m not a literary scholar, but a musician. Would you tell me if you know whether or not there’s legitimate music scholarship? Whatever your answer, how do you think you know that? Or if you claim ignorance, what kind of ignorance do you think you have about literary scholarship as a whole?
But I’ll give it a go anyway.
#1 is generated.
#2 is from Sokal.
#3 is from an actual scholar. (But no claim from me about “legitimacy” for this weird little excerpt.)
If I’m right, it proves nothing. If wrong, it proves nothing.
demonhauntedworld says
#3 is indeed from Lacan, but you still lose, because Lacan is (objectively) talking nonsense:
This lack of distinction between parody and “real” writing in terms of intelligibility and objective wrongness is exactly my point. Lacan is as out of his depth when discussing mathematics as Deepak Chopra is when discussing quantum theory.
So what would you accept as a demonstration that what many pomo scholars were writing around the time of the Sokal affair is indistinguishable from nonsense? Is it a question of the number of examples I can provide, the alleged legitimacy or status of the people being quoted, whether or not they are objectively wrong (or even intelligible), or something else?
As Sokal and Bricmont said in Fashionable Nonsense:
demonhauntedworld says
If we’re going to use an analogy, what I’m claiming is that a person playing notes at random is indistinguishable from what trained musicians would identify (or write) as music – but even that is a poor analogy, because one cannot objectively say what is and is not music – whereas one can objectively determine when what someone writes does not correspond to reality (and please, let’s not get sidetracked into a postmodern argument over what “reality” means in this context).
So what would you accept as a demonstration that what many pomo scholars were writing around the time of the Sokal affair is indistinguishable from nonsense? Is it a question of the number of examples I can provide, the alleged legitimacy or status of the people being quoted, whether or not they are objectively wrong (or even intelligible), or something else? You seem to be hung up on the word “legitimate”.
Rutee Katreya says
So you think throwing this out is useful after an article that specifies how some of a particular class of science journal couldn’t do this for science? Are you on drugs?
(FYI: #1 is pretty clearly generated. A reasonable command of English and the most cursory knowledge of any one of the names dropped make it abundantly clear)
demonhauntedworld says
Bohannon’s experiment was qualitatively different from Sokal’s. At least Bohannon’s paper made syntactic sense even if the methods were wrong. Sokal’s paper was gibberish through and through. As Sokal and Bricmont said in Fashionable Nonsense:
I find nothing reinforces a good argument more than a gratuitous insult, don’t you?
Nick Gotts says
PZ has given a clear explanation of the ways in which the fake cancer research papers could and should have been distinguished from genuine science. Could one of the defenders of postmodern literary theory tell us how the editors of Social Text could and should have been able to tell that Sokal’s submission was not genuine postmodern literary theory?
demonhauntedworld@32,
I agree with consciousness razor’s attributions@40: (1) is indeed clearly generated, but I’m only assigning (2) to Sokal because I know his paper’s title included “quantum gravity”; at least without additional context, it makes a lot more sense than (3).
Nick Gotts says
More likely, it demonstrates that they didn’t want to do it, because they had a direct pecuniary interest in not doing so.
Nick Gotts says
Actually, maybe I answered my own question@42:
:-p
consciousness razor says
Funny, Nick.
Of course, the question they should have asked is not whether it counts as “genuine,” in the sense of the author being an actual postmodernist instead of a fake (or obviously a text-generating program). They ought to ask (among other things) whether it makes sense, says something meaningful, or is useful: whether it is legitimate as scholarship, not simply whether the author is legitimately who they claim to be. You’re not suggesting Sokal’s paper is actually sensible or meaningful or useful, are you?
Rutee Katreya says
I’m pretty uncomfortable assuming every single rejection was by a principled journal, and every single acceptance was by people motivated by money.
Kagehi says
Going to play devil’s advocate here and say that this is a bit like complaining that someone who studied “The effects of toxin X on frogs in the lower basins of Froganistan” wasn’t doing proper science because they failed to include in their study the effects of the same thing on frogs in Florida, Louisiana, Alaska and, oh… also Greenland. Call it a preliminary study, i.e., “This confirms that complete crap can be successfully published through questionable journals, but a few surprised the experimenter.” We all want to see the followup, where the result is replicated in conditions which are, in theory, less prone to already being excessively impacted by the introduction of linguistic effluent into the ecosystem. But that doesn’t make studying the impact of such sewage in already-poisoned systems “unscientific” for having simply “not been broad enough”.
David Marjanović says
Explained in the quote in the OP:
“The fictitious authors are affiliated with fictitious African institutions. I generated the authors, such as Ocorrafoo M. L. Cobange, by randomly permuting African first and last names harvested from online databases, and then randomly adding middle initials. For the affiliations, such as the Wassee Institute of Medicine, I randomly combined Swahili words and African names with generic institutional words and African capital cities. My hope was that using developing world authors and institutions would arouse less suspicion if a curious editor were to find nothing about them on the Internet.”
I guess another option would have been to use real Chinese institutions (except the most widely known ones) and three random real Chinese syllables per name. A search would have turned up websites, but they tend not to come with English versions.
(I once tried to find one particular geologist so I could suggest him as a reviewer for a manuscript of mine. Couldn’t find him even after I figured out how his name is written in Chinese characters.)
…How common was peer review in that field back then? Were there any peer-reviewed journals in it? Indeed, are there now?
2 has to be from Sokal’s paper because it’s about physics.
At first I thought 1 was real and 3 was from the pomo generator, because 1 looks coherent and 3 doesn’t. But I don’t think the generator knows (even) that much French, and 1 is actually wholly disjointed – just between the sentences and not within them!
So would I!
Peer review isn’t done by journals.
Reviewers don’t belong to journals. They’re chosen by editors for each manuscript, usually picking from a list of suggestions the authors provided as well as from the journal’s “editorial board” – a peculiar name for a list of people the editors know are generally able to review manuscripts in the field the journal is about but who aren’t paid by or otherwise affiliated with the journal.
Yep.
Quite the opposite: the claim here is that the genuine papers, one of which is quoted in comment 34, aren’t sensible or meaningful or useful either!
The least principled journals probably appoint in-house editors, who share the financial interest and therefore don’t select qualified reviewers or ignore reviews that recommend rejection…
Otherwise, editors are selected much like reviewers are, and they’re usually not paid either.
I have no idea what you mean. There’s no reason to think that the same species (!!!) would be differently affected in different places. What’s your point here?
No control group. No way to compare the results for open-access journals, the only results that exist, with the hypothetical results from traditional paywalled ones.
random says
Retraction Watch also discussed this, and did give a bit of explanation from the author about why he did not submit this to an equal selection of traditional journals as a control:
“I did consider it. That was part of my original (very over-ambitious) plan. But the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample? Instead, this just focused on open access journals and makes no claim about the comparative quality of open access vs. traditional subscription journals.”
http://retractionwatch.wordpress.com/2013/10/03/science-reporter-spoofs-hundreds-of-journals-with-a-fake-paper/
Nick Gotts says
consciousness razor@45,
No, I’m certainly not suggesting Sokal’s paper is sensible, meaningful or useful, although the extract given does make a simplistic but not absurd contrast between 19th century and contemporary science. What I meant by “genuine” was actually neither “by a recognized postmodernist scholar”, nor “sensible, meaningful and useful”, but “submitted in good faith”. Of course a lot of useless rubbish is submitted in good faith, but if a paper is designed to be useless rubbish, that should surely be detectable in any genuine field of scholarship. So i’ll repeat my genuine q
Nick Gotts says
consciousness razor@45,
No, I’m certainly not suggesting Sokal’s paper is sensible, meaningful or useful, although the extract given does make a simplistic but not absurd contrast between 19th century and contemporary science. What I meant by “genuine” was actually neither “by a recognized postmodernist scholar”, nor “sensible, meaningful and useful”, but “submitted in good faith”. Of course a lot of worthless junk is submitted in good faith, but if a paper is designed to be worthless junk, that should surely be detectable in any genuine field of scholarship. So I’ll repeat but elaborate on my question:
1) By what criteria should the editors of Social Text have been able to detect that Sokal’s paper was deliberately designed to be worthless junk?
2) By what criteria should the editors and reviewers of journals in postmodernist literary theory and allied fields be able to distinguish sensible, meaningful and useful contributions from worthless junk submitted in good faith?
Nick Gotts says
Sorry, #50 resulted from premature comment submission!
Kagehi says
Well, you are presuming “same species”, or that the toxin isn’t interacting with something else not present in the ecosystem you “are” examining. I would say it would be bad science, at least in “some” cases, to assume that the same effect would be seen someplace else without understanding the mechanism. In the case of the bad journals, it’s the prevalence, in a sense, of the “mechanism” that is being tested, not the specific toxin, right?
reverie says
@51 Nick Gotts
Let me attempt to answer your two questions. Of course, these are just my answers; the issue of what evaluative criteria should be used and how they should be applied is a matter of extensive debate in the humanities.
The first thing they should do is actually conduct peer review. Include a scientist among the reviewers to double-check that the science itself is reasonably accurate. (You can’t expect a literary critic to have PhD-level knowledge of physics, so get someone who does. This step is probably not necessary for papers which examine the social ramifications of science or question the scientific method, since the questions involved in such work are not typically questions of scientific fact.)
There is no way to conclusively determine authorial intent, especially without information on the author’s life experiences and comparison with their other work. There is no way for any journal, in any subject, to distinguish between incompetence and deliberately designing something to be worthless junk. The only question a journal can be reasonably expected to answer is whether the work is of high quality.
Some criteria a journal might use for determining whether a work is of high quality in the humanities include: logical validity of arguments; persuasiveness of arguments; clarity of writing; beauty of metaphors; thought-provoking ideas; originality; treatment of a normatively important subject; engagement with current debates in the field, re-opening of previously settled debates, or otherwise furthering a conversation worth having; interest to readers in the field; and contribution to diversity.
These criteria are subjective, as they are necessarily going to be if the article in question is about art and culture rather than about a chemical reaction. You can see, perhaps, how the editors might have thought that Sokal’s work fulfilled some of these criteria. It is a clearly written paper, on a hot topic, from an interesting point of view, from which we might learn something if the piece is put in conversation with others (a.k.a. refuted; it is absolutely possible to be “interestingly wrong” in the humanities, if you cause people to think up good counter-arguments or to reconstruct your argument). Of course, on further examination, it does appear to be mostly name-dropping and not much substance. But their main failure was in misunderstanding the science itself, which would be solved by actually including a scientist as a reviewer in an interdisciplinary journal or issuing a desk rejection for a disciplinary journal.
My answer to this is essentially the same. You can’t determine the intent of an anonymous author, so you can’t know whether it is submitted in good faith. You can assess the quality of the work based on some of the criteria I listed above.
To add to this: You are probably wondering what any of the above criteria have to do with knowledge. The place knowledge comes in is whether you understand the conversation that is already happening in the scholarly literature on your theory and object. One other way to distinguish someone who is serious and informed from someone who is not is to get a reviewer who specializes in the theory referenced in the paper. So, if a paper cites Marx, Lacan, and Zizek, you want a psychoanalytic (post)Marxist to review the paper. They can determine whether the arguments the author attributes to “Marx” or “Lacan” are actually consistent with the published works of Marx and Lacan. If you don’t know what you’re doing, you’re going to misunderstand the arguments made by the theorists you’re name-dropping, which should be obvious to anyone who’s studied the theorists in question. If someone presents an unorthodox reading of a theorist, they should justify why their reading is better (which typically means either more accurate or more useful) in the text of the article.
Another way to do it is to get a reviewer who is familiar with the text being analyzed rather than the theory being used to critique it. So, if you have a psychoanalytic Marxist critique of the Pussy Riot controversy, you could get a scholar who is familiar with the history of women’s political activism in Russia. Such a scholar could assess whether the article over-states its claims or otherwise gets the context wrong. For instance, if the article argues that Pussy Riot is the only activist group in Russia to ever perform in a church, a scholar who is knowledgeable about activism in Russia could tell you whether that is actually true or not.
So, generally speaking, peer review in the humanities consists of two parts. First, experts who are familiar with the theory assess whether the reading of the theorist displays understanding of the arguments in primary and secondary texts. Second, experts who are familiar with the object (Finnegans Wake, U.S. presidential speeches, leftist political theater, Russian women’s movements, the art of Yoko Ono, etc.) consider whether the article makes a contribution to our understanding of that object by placing it in the context of previous work and generally accepted knowledge. During both parts, the quality of arguments and writing is assessed.
Does that clarify things?
Nick Gotts says
reverie@57,
Thanks for your thoughtful response.
I understand the editors of Social Text introduced peer review after the Sokal hoax. You really don’t need PhD-level knowledge of physics to notice that at least some of the alleged physics content is complete hooey — for example, the mention of the “morphogenetic field”.
I disagree: questions of social ramifications and method are not sharply distinct from those of scientific fact.
I disagree, in this specific case: Sokal points out several aspects of the paper that should have made it obvious it was a “spoof” here.
Yes, I’ve no disagreement with these, although some would clearly be inadequate taken alone: complete gibberish might “contribute to diversity”.
Logical validity of arguments isn’t.
No, it’s not a clearly written paper. Parts of it are clearly written, but as a whole, it’s a mess.
No, their main failure is precisely in not rejecting a paper that is, as you say, “name-dropping and not much substance”. My suspicion* is that this occurred because the bulk of work in postmodern literary and psychoanalytic theory** is “name-dropping and not much substance”, so these features failed to distinguish it. One of its common features is what one might call “term-dropping” – the appropriation of terms from current science and mathematics, without much if any understanding of their meaning within those disciplines – as copiously illustrated in the quotations from postmodernists collected in Sokal and Bricmont’s Fashionable Nonsense.
True, but that raises the further question of whether that conversation has any worthwhile content. There are whole realms of pretended knowledge (theology, astrology, homeopathy, alien abductions) where one can certainly have scholarly knowledge of “the conversation that is already happening”, but which most of us here would dismiss as worthless – except as objects of study in what we might call epistemic pathology.
It’s interesting that you select this particular area of postmodern “theory”: both Marx and Freud considered themselves to be scientists, and believed that their theories described and explicated central features of reality – that they were objectively true, or at least, approximations to the truth.
*It’s a suspicion, since I haven’t done much reading in the area; whenever I’ve tried to do so, I’ve come up against what appears to be a forest of name-dropping and (apparently) deliberate obscurantism.
**Let me be clear that I’m not dismissing literary criticism in general, let alone the humanities as a whole.
reverie says
@Nick Gotts at 58
I don’t think we disagree all that much, but I’ll still answer a few of your points where I do think we differ.
They may not be sharply distinct, but they are distinct enough that I think some questions require a scientist as a reviewer and others do not. You don’t need to know the latest in neuroscience to discuss how phrenology is racist, because it is so widely accepted to be a pseudo-science and the refutations of it are easily comprehensible by non-scientists. An expert on 19th century scientific racism would be more than adequate to review a paper on the racist effects of belief in phrenology, and they would probably be better positioned to review such a paper than a contemporary biologist or neuroscientist, because they are familiar with the context of that historical period. Likewise, you don’t need a scientist to review a discussion of whether Kuhn’s criteria (accurate, consistent, broad scope, simple, fruitful) are better than Vasquez’s (accurate, falsifiable, capable of evincing great explanatory power, progressive rather than degenerative in terms of the research program, consistent with what is known in other areas, parsimonious) for evaluating theories of international relations. You would want a philosopher of social science, who would have not only read Vasquez and Kuhn, but also Popper, Waltz, etc. A scientist could weigh in, but they wouldn’t really be an expert in this case, since most scientists don’t really read philosophy of science. Keep in mind, also, that requiring a scientist to review a philosophy of science article is a very demanding standard, since most scientists do not want to review a paper about Popper and Kuhn or the racist social ramifications of phrenology. You have a limited pool of willing reviewers and it will be difficult to determine who they are.
reverie says
Shit, I screwed up the block quotes. Should have used the preview button. Here’s a re-post so it’s easier to read.
@Nick Gotts at 58
I don’t think we disagree all that much, but I’ll still answer a few of your points where I do think we differ.
They may not be sharply distinct, but they are distinct enough that I think some questions require a scientist as a reviewer and others do not. You don’t need to know the latest in neuroscience to discuss how phrenology is racist, because it is so widely accepted to be a pseudo-science and the refutations of it are easily comprehensible by non-scientists. An expert on 19th century scientific racism would be more than adequate to review a paper on the racist effects of belief in phrenology, and they would probably be better positioned to review such a paper than a contemporary biologist or neuroscientist, because they are familiar with the context of that historical period. Likewise, you don’t need a scientist to review a discussion of whether Kuhn’s criteria (accurate, consistent, broad scope, simple, fruitful) are better than Vasquez’s (accurate, falsifiable, capable of evincing great explanatory power, progressive rather than degenerative in terms of the research program, consistent with what is known in other areas, parsimonious) for evaluating theories of international relations. You would want a philosopher of social science, who would have not only read Vasquez and Kuhn, but also Popper, Waltz, etc. A scientist could weigh in, but they wouldn’t really be an expert in this case, since most scientists don’t really read philosophy of science. Keep in mind, also, that requiring a scientist to review a philosophy of science article is a very demanding standard, since most scientists do not want to review a paper about Popper and Kuhn or the racist social ramifications of phrenology. You have a limited pool of willing reviewers and it will be difficult to determine who they are.
I don’t think it’s quite as obvious as he points out. For example, the editors probably read his proclamations that there is no reality as proclamations that we can never really know reality and that language socially constructs our understanding of the world. A peer-reviewed humanities journal would likely ask him to tighten up his language, but even they would probably assume that his phrasing was just a bit careless. Since nobody these days really defends “we’re a brain in a vat” as a plausible scenario (even if it is a possible one), they probably just assumed he was repeating the more typical remark that human observation can be faulty. This is why the fact that Sokal was operating in bad faith is important. The editors read his paper generously, assuming he did not mean ridiculous things, but was just a bit careless of a writer. A GOOD, PEER-REVIEWED journal will reject a careless writer in the humanities, or at least ask them to revise and resubmit if the argument is really good and the writing just needs some tweaking. Social Text at the time was obviously not a good journal in this sense, but it’s easy to see how they could misread Sokal if they were trying to read generously (which is a common interpretive principle in the humanities).
Mostly I agree, because the standards for evaluating logical arguments are mostly well-established. However, there can be disputes where one reviewer thinks the author’s explanation of why b follows from a is persuasive, while a second reviewer does not think the explanation is sufficient to conclude that b follows from a. There may not be a right answer in cases like that. (And that’s not even going into the times when logic is not the best criterion for analyzing a given piece. For instance, a “logical” analysis of poetry would likely miss the point, which is why scholars of poetry often write in evocative ways. Since you and I might feel different things when we read a line of poetry, a scholar of poetry will have to describe their response to the poem in vivid enough terms that the reader can imagine feeling the way the scholar did when they read the line of poetry. This method would be a form of persuasion, but not logic.)
I do think you’re right that there is a problem with name-dropping instead of substance, at least at the margins, in some of the humanities. I think the problem is over-blown, however. People outside of these disciplines often see the jargon and name-dropping and think it is automatically without substance, but in many cases, references to well-known theorists and their terminology are simply a shortcut for referencing a system of complex concepts that the reader is assumed to be familiar with. Journal articles have word limits, and it is reasonable to think that someone reading an article titled “Lacan and political subjectivity: fantasy and enjoyment in psychoanalysis and political theory” has read Lacan or is at least familiar with his basic concepts. You’re probably right that it’s worst in interdisciplinary work. There are certainly some humanities scholars who study the science they’re engaging with seriously and deeply; there are others who are more careless. On the whole, I think papers published in top journals by experienced scholars mostly get it right, while conference papers by graduate students who have no scientific background mostly get it disastrously wrong. But there are still cases where someone who doesn’t know their science will slip through at a major journal because their reviewers aren’t familiar with scientific literature. I think this could be rectified by adding a reviewer from the relevant scientific discipline when appropriate.
Glad to hear that!