This is incredible. On a scale of 0-10 WTFs this is a 10.
From the most recent episode of Daniel Griffin’s COVID-19 Clinical Update on TWiV [twiv], at 29:28:
We had some very disturbing news. Right, so: This is the time for monitoring and monoclonals, not the time for antibiotics. There was a bit of drama because a pre-print that had been posted suggesting that a randomized controlled trial demonstrated a 90% reduction in mortality in Ivermectin-treated patients may have been something that never happened.
A London medical student named Jack Lawrence was the Woodward and Bernstein of this story, if people remember All the President’s Men. He was investigating this pre-print and he discovered, initially, that the introductory section of the document appeared to have been almost entirely plagiarized and then run through a thesaurus program. And, it appears that the authors pulled entire paragraphs out of press releases and web sites about Ivermectin and then changed key words. The data appears to have been completely falsified, the tables in the article are mathematically impossible… Apparently the study never happened – they just invented the study, invented the data, and put it out there – really just a case of scientific fraud.
You know, this data was part of a lot of these meta-analyses where people were saying “Ivermectin looks so promising” and “It’s a crime against humanity that we’re not using it when there’s this study with a 90% mortality reduction.” My hope is that all the people who embraced Ivermectin and are using it based on this kind of data will re-think their position and embrace the importance of honest, scientific investigation.
Now, if someone is passionate about Ivermectin and they want to know whether it works – don’t just make up data. Don’t just invent a study.
That is shocking. Someone decided to do that, deliberately, knowing it was a bunch of lies that people were going to listen to and take very seriously. Some people may have pursued the Ivermectin avenue and been less strenuous about isolating and masking up, and gotten COVID-19 because of the fake article.
A deeper problem is revealed: the forces of ignorance and disinformation have realized that the scientific standard of “publication in a peer-reviewed journal” is a target for attack. In computer security terms [where I lived my life] we’d say “the trust model is broken.” Once the attacker gets inside your trust boundary, they have mooted the very purpose of having a trust boundary in the first place. It totally makes sense for an attacker to attack your trust model because, if they succeed, they have taken over the system and it can no longer be relied upon to behave the way it was formerly expected to. Bullshit artists, creationists, anti-vaxxers, and other random assholes have suborned the system.

We can pat ourselves on the back and say, “the system is self-correcting! See, someone did catch it!” but it’s not self-correcting – it’s error-detecting. In this case, science has (arguably) detected that some garbage was injected into the system, but the garbage will not be “corrected” until it has been completely removed and everyone whose mind was poisoned by it has adjusted their belief systems. A good case in point is Andrew Wakefield’s fabrications: a lot of people haven’t been reached by the “delete that: Wakefield totally made all that up” message, so there is a “knowledge base” out there of people who are ignorant and don’t know it.
The Scientific Method is a great epistemological tool, but it doesn’t fare any better than anything else in an environment that is saturated with disinformation. What do we do? What is the rational response? Well, if we were computer security people, we’d say that the root of our trust hierarchy was corrupted and therefore none of it is trustworthy anymore. I’m not advocating that we throw science out the window, but this is a very uncool situation and we need to be skeptical of scientific “results” and not just accept “peer reviewed data” – sorry, scientists, but your gold standard turns out to be gold-painted balsa wood. It particularly calls into question meta-analyses built on foundations of bullshit.
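To make the computer security analogy concrete, here is a toy sketch in Python (my own illustration, with made-up key names and an HMAC standing in for real signatures, nothing like an actual PKI) of why a corrupted root is game over: verification only asks whether a claim was vouched for by something inside the trust boundary, so once the attacker’s key is added to the trusted set, their garbage verifies exactly like the real thing.

```python
# Toy "chain of trust": a claim is accepted if it was signed by any key in
# the trusted-root set. HMAC with shared secrets stands in for real
# signatures; the key names are invented for illustration.
import hashlib
import hmac

TRUSTED_ROOTS = {
    "peer-reviewed-journal": b"journal-secret-key",   # the legitimate signer
}

def sign(key: bytes, claim: str) -> str:
    return hmac.new(key, claim.encode(), hashlib.sha256).hexdigest()

def verify(claim: str, signature: str) -> bool:
    # The trust boundary checks only *who* vouched for the claim,
    # not whether the claim itself is true.
    return any(hmac.compare_digest(sign(key, claim), signature)
               for key in TRUSTED_ROOTS.values())

claim = "RCT shows 90% mortality reduction"
forged = sign(b"fraudster-key", claim)
print(verify(claim, forged))    # False: the fraudster is outside the boundary

# The attacker gets inside the trust boundary (a fake study passes for a
# trusted source), and from then on everything they sign is accepted:
TRUSTED_ROOTS["fraudulent-preprint"] = b"fraudster-key"
print(verify(claim, forged))    # True: downstream consumers now trust garbage
```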
Why does someone do this? Is this just common or garden-variety sociopathy? Did some asshole have a bad day and decide to spread their pain? I can’t think of any reason that makes sense except that, perhaps, some jerkwad is preparing a paper about how easy it is to defeat science’s trust model.
kestrel says
Agreed, that is a real WTF.
People will be shocked to know I use Ivermectin all the time. On my livestock. Because it’s an anthelmintic, or wormer product. Why someone would think using wormer for treating a virus would be a good idea is beyond me. I note that the FDA website quite clearly tells people NOT to use Ivermectin for COVID-19, and they even have a photo of a vet with a horse where they say this.
I’m not sure they have destroyed science, but yeah, there needs to be some serious sitting down and thinking about this, because yeah, a lot of people will be harmed by this nonsense.
Marcus Ranum says
kestrel@#1:
People will be shocked to know I use Ivermectin all the time.
It’s also good for scabies. I picked up a dose of that somewhere, years ago, and I finally did the math and figured out what a human-sized dose of the apple-flavored horse stuff was, scaled back a bit from that, and gacked down that dose on 3 separate days 3 days apart. Problem solved. Ivermectin is good for a bunch of different parasitic infections (and since I live on a farm and there is deer poo everywhere, it probably wasn’t bad to take ivermectin in case I have heart worms or dog knows what else. No tapeworm came out, thank god!)
maggie says
Marcus@#2 Not surprised that no tapeworms came out since Ivermectin isn’t very effective against them.
Rob Grigjanis says
Marcus @2:
Even that could be dodgy, and certainly should come with a “don’t try this at home”.
https://www.fda.gov/consumers/consumer-updates/why-you-should-not-use-ivermectin-treat-or-prevent-covid-19
flex says
I can offer two other explanations: one rather sordid, the other illuminating an interesting aspect of human psychology.
The first one is pecuniary. For some reason the author thinks that if a belief in the effectiveness of Ivermectin for treating COVID spreads, they will make money. I think this is unlikely.
The second reason arises from the difference between tested knowledge and revealed knowledge. I know that in these lucre-centric times a profit motive is the first thing we suspect when explaining a person’s actions, and in many cases our search for a profit motive will be rewarded. But I also think that a lot of the impassioned belief in clearly nonsensical ideas exists because the ‘truth’ of that belief was arrived at by a person using incomplete knowledge to build a logical chain of thought toward an inaccurate conclusion. Once that chain of thought is created, it is almost always strengthened. If someone agrees with the belief (either through ignorance of their own or through respect for the person who first promulgated it), that agreement strengthens the belief. Alternatively, if the belief is challenged, the challenge is often seen as an attack on the person, which automatically generates a defensive response (no one likes to admit they are wrong), which also strengthens the belief.
Linus Pauling, who was not completely ignorant of chemistry, reached the conclusion that Vitamin C was a panacea for many ills. Medical research has not supported his views, but a lot of people still agree with Pauling, probably because he won two Nobel Prizes and was respected as being a smart man. If he had been Sam Wazoo, a pharmacist from Kalamazoo, the idea might not have been taken very seriously. (Of course, nowadays, when there is a humongous amount of money to be made in dietary supplements, the profit motive may be more of a factor in keeping the myths around them alive.)
I know we make fun of Andrew Wakefield, and also express horror at how his mistaken belief that childhood vaccinations caused autism has helped propel anti-vaccination beliefs into the mainstream. At least some of the COVID deaths should be laid at his feet. But if you look at his career, I think he noticed a similarity in the timing of the MMR vaccine and the detection of autism and became convinced there was a possible connection. But his background was gastroenterology, not vaccines or autism. He had enough medical knowledge to posit a connection between the MMR and autism, but not enough knowledge of either to recognize that the connection was bogus. He was the recipient of revealed knowledge which did not withstand testing. This is probably a case where egotism plays a role, i.e. he presumed the medical knowledge he had was greater than it was, and sufficient to reach a conclusion where more experienced researchers hadn’t found one.
The sin of Andrew Wakefield was not in publishing a paper which was subsequently retracted (although the data manipulation was certainly enough on its own to tarnish his name). Wakefield’s sin was not rejecting his revealed knowledge once it had been tested. Of course, by that time he was discredited medically and making a decent income from his anti-vaccination status, so he had a financial stake in continuing to believe something shown to be incorrect.
The difficulty people with revealed knowledge face is that many other people will want some proof, especially if the claim is testable. Whoever came up with the idea that Ivermectin was a useful treatment for COVID may well be convinced that it is, through their revealed knowledge. But they are also aware that in order for that belief to become accepted some sort of medical study must exist. Our fact-checkers have at least gotten sophisticated enough to look for that, probably temporarily.
We talk about how lies are more easily spread these days due to social media. I would hazard that many of the lies are not intentional lies, i.e. lies known by their distributors to be untrue. Instead, the lies, the conspiracy theories, the misinformation, start with revealed knowledge. At some point, possibly early in the spread, some people will see an opportunity to monetize the belief and will continue to spread it even after it’s been completely debunked. Belief that the 2020 US Presidential Election was stolen has generated an entire cottage industry; shut that down and ‘The Big Lie’ would quickly become yesterday’s news.
The individual who came up with the bogus study on Ivermectin may have been simply trying to focus attention on his revealed knowledge. I can see how someone with limited medical knowledge could reach a belief that a parasite and a virus can be purged with the same medicine. But the people spreading this belief now are likely the usual actors who don’t care about whether the belief is true, but know that the stock market will react to the news of this bogus study and have invested to make money off of it.
You aren’t going to stop people from receiving revealed knowledge. People are just built that way. We all, at times, leap to unwarranted conclusions. You won’t be able to stop people from gossiping about revealed knowledge prior to it being tested. Again, people will accept revealed knowledge uncritically and spread it uncritically. That’s how religions are born. I don’t think that we, as a society or species, are any less susceptible to uncritically accepting revealed knowledge today than people four thousand years ago. Reformers have tried to change people before, and repeatedly failed.
We can, however, change society so that it’s harder to make a profit off of revealed knowledge. If, during the hydroxychloroquine fiasco, the stock prices of the makers of that drug had been frozen by the FTC to prevent market manipulation, I suspect it would have muted a lot of the hype. To the best of my knowledge the FTC doesn’t have that power, and the argument is that, since the NYSE is a private organization trading stocks on the secondary market, the FTC shouldn’t have that power.
There are mechanisms society can establish to dampen the spread of misinformation and revealed knowledge (OFF WITH THEIR HEADS!). Some better than others (Aww, no head-chopping today). Yet I don’t see that the US general population has the will to insist on them, or the elite class has the desire to implement them.
As for the defeat of science’s trust model: science hasn’t found (and never has had) a trust model which cannot be temporarily defeated (and the time span of ‘temporarily’ may be decades or longer). The trust model in science isn’t that unverified, revealed knowledge won’t occasionally be incorrectly classified as proved fact. It’s that, over time, the unverified, revealed, scientific (i.e. testable) knowledge will be double-checked, verified, and corrected.
brucegee1962 says
I’m currently reading “The Misinformation Age” by O’Connor and Weatherall, which is about exactly this question: how misinformation is spread by malicious actors in both science and politics. It uses computer modeling studies as well as lots of examples.
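For a sense of what that kind of modeling looks like, here is a minimal sketch in Python (my own toy, not the book’s actual model, and every number in it is invented): a community of agents updates its confidence in a treatment that genuinely works slightly better than chance, while a propagandist publishes only cherry-picked failures. Nothing has to be fabricated; selective publication of real trial results is enough to push the group toward the wrong answer.

```python
# Toy model of misinformation via selective publication (my own
# simplification, loosely in the spirit of network-epistemology models).
# Honest labs publish every trial they run; a propagandist runs many
# trials but publishes only the single worst-looking one.
import random

P_TRUE = 0.6          # the treatment genuinely works 60% of the time
P_NULL = 0.5          # rival hypothesis: no better than chance
N = 10                # patients per trial
HONEST_LABS = 3
ROUNDS = 200

def run_trial() -> int:
    """Successes in one N-patient trial, given the treatment really works."""
    return sum(random.random() < P_TRUE for _ in range(N))

def update(credence: float, successes: int) -> float:
    """Bayesian update of P(treatment works) on one published trial."""
    like_true = P_TRUE ** successes * (1 - P_TRUE) ** (N - successes)
    like_null = P_NULL ** N
    return credence * like_true / (credence * like_true + (1 - credence) * like_null)

def community_credence(with_propagandist: bool) -> float:
    random.seed(42)                                    # same start for both runs
    credences = [random.random() for _ in range(20)]   # 20 agents, varied priors
    for _ in range(ROUNDS):
        published = [run_trial() for _ in range(HONEST_LABS)]
        if with_propagandist:
            # Run 20 trials, publish only the worst one (pure cherry-picking).
            published.append(min(run_trial() for _ in range(20)))
        for s in published:
            credences = [update(c, s) for c in credences]
    return sum(credences) / len(credences)

print("mean credence, honest publications only:", round(community_credence(False), 3))
print("mean credence, plus a cherry-picker:    ", round(community_credence(True), 3))
```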
cvoinescu says
As a budding scientist, I can assure you that almost nobody considers peer review to be the be-all, end-all. We read all papers carefully, with an eye out for bias, for what’s not included in the paper, for self-delusion, for bad statistics, for cherry-picking, for lousy citations, for stretched or overblown conclusions, and so on. Even large collaborations with lots of prestige and funding can put out bad papers. Just because it’s peer-reviewed doesn’t mean it’s true. We provisionally and conditionally accept the results, but always with an eye out for replication, contradictory results, and so on. Most of the time, we keep an eye on our own biases too: even if a result agrees with our intuition, it still needs a good standard of proof.
Then there are the COVID papers. Because of the urgency, the journals have been accepting and publishing COVID-related papers with much less than the usual standard of peer review. There’s usually a note attached, saying pretty much that. You ignore that warning at your own peril. We are about three orders of magnitude more careful with COVID-related papers, because most of them are bad — or just not very good: small observational study with necessarily imperfect methods finds effect that may or may not be due to chance or confounding factors. (Conversely, a common joke is to wonder out loud how one could include COVID in a paper they’re writing about their current research, to get it published more easily and/or in a more impactful journal.)
Jazzlet says
flex @#4
You give Wakefield too much credit; he had a financial interest in a measles vaccine which was obviously not going to go anywhere while the MMR vaccine was so successful.
The reason that people started looking at ivermectin was the good old “it works in a petri dish” effect, which XKCD has punctured in its usual way: https://xkcd.com/1217/ The problem in practice is that it’s impossible to get the dose needed for its anti-viral properties to work into a whole human.
Shit “work” like this study makes me furious, so irresponsible for so many reasons, but science has many of the problems that humans in general have, with not just things like p-hacking to make results look significant, but outright fraud as here. There have been fields that have progressed at the rate of death of various “Grand Old Men” – I am just old enough to remember the end of the furore when continental drift was proposed. I learned about both continental drift and the furore from a BBC Horizon programme, which I saw at home, then was shown by a progressive geology teacher in school, then shown again at university by the geology lecturer there – it was new enough that the TV programme was actually the easiest way for them to show us the idea, because it wasn’t yet in the text books.

There are problems with peer review, especially in small fields where anonymity is impossible and rivals are the only people with enough knowledge to review each other’s papers; with the amount of time available for reviewers to do unpaid work that doesn’t count in any way towards their academic reputation; and more. There are problems with publishing papers counting so significantly towards an academic reputation that in too many fields PhD students are expected to have published several papers by the time they get their doctorate. With there being no reputational or financial reward for replicating experiments. With the hold the big publishers have over the content of the journals they own and the ridiculous prices they charge for subscriptions to those journals. As Marcus has shown many times, there are huge problems in some subjects that claim to be science, like psychology, but all too often don’t use the most obvious scientific techniques, things like using representative populations in the studies they perform.

I could go on, but people in science know about all of these problems, and many are trying to do things that will ameliorate or even, in a few cases, eliminate the problems – witness that this paper has been outed as fraud. They are also trying to improve the communication of science to the wider public. They are making these efforts because, flawed as it is, science is still the best way we have of studying how the world works.
Crip Dyke, Right Reverend Feminist FuckToy of Death & Her Handmaiden says
I read the original GRFTR post by Jack Lawrence. It was interesting how cautiously Lawrence worded things compared with how explosive the actual findings were.
John Morales says
As Jazzlet intimated, replication is the part of the method that addresses the featured problem. Or: as cvoinescu noted, a published study is not all there is to it.
Um, the error detection is part of the process of self-correction. It’s both.
Problem is, people jump to unwarranted hasty conclusions, then the belief persists due to various known cognitive biases.
flex says
@Jazzlet,
You may be right, I do tend to consider chicanery last when looking for motivations. That may be a failing on my part.
My experience is that ignorance, incompetence, and ego are usually more likely primary explanations for poor work, leaping to conclusions, and refusing to acknowledge other’s expertise. But fraud as a primary motivation is certainly a possibility.
Reginald Selkirk says
Retracted coronavirus (COVID-19) papers
A distressingly long list. 131 retracted, 12 retracted due to journal error, 4 retracted and reinstated, 7 expressions of concern. 4 with Ivermectin in the title.
Marcus Ranum says
Reginald Selkirk@#12:
Retracted coronavirus (COVID-19) papers
A distressingly long list. 131 retracted, 12 retracted due to journal error, 4 retracted and reinstated, 7 expressions of concern. 4 with Ivermectin in the title.
Oh, that’s bad. A lot of people just had the rug pulled out from under them.
So, the fraudster(s) just bashed the pre-print together by using copy-paste and a robo-thesaurus: basic high school-level cheating tools. And the tables didn’t add up. It doesn’t even sound like a particularly good fake and it has a large footprint. I predict that humans will still be talking about Ivermectin as an anti-viral in 25 years. It’ll just be a few of them, but that’ll be enough.
Marcus Ranum says
flex@#11:
My experience is that ignorance, incompetence, and ego are usually more likely primary explanations for poor work, leaping to conclusions, and refusing to acknowledge other’s expertise. But fraud as a primary motivation is certainly a possibility.
The thing is that fraud wastes everyone’s time: it wastes the fraudster’s time to create it, and it wastes the time of everyone who reads the results. Worse, it can waste research time better spent making the world a safer place, because intelligent, well-meaning people have to invest time to dig out the garbage.
I felt that way about computer security, by the end of my career. I spent decades trying to keep things real and sensible, and vendor after vendor kept coming out with bullshit products and marketing them well – and the customers bought the bullshit. Why? Because they figured that if the products didn’t work, experts would say something. Then I looked around and realized that most experts were too concerned with getting stock options and cashing out, and were taking the approach of, “well, I guess that they wouldn’t sell it if they didn’t think it worked…” Nobody was watching the baby. I think there needs to be a public agency that fact-checks and debunks. But I can’t see anyone wanting to put their life’s work into that.
Marcus Ranum says
cvoinescu@#7:
As a budding scientist, I can assure you that almost nobody considers peer review to be the be-all, end-all. We read all papers carefully, with an eye out for bias, for what’s not included in the paper, for self-delusion, for bad statistics, for cherry-picking, for lousy citations, for stretched or overblown conclusions, and so on. Even large collaborations with lots of prestige and funding can put out bad papers. Just because it’s peer-reviewed doesn’t mean it’s true.
That’s good to hear.
But a little voice in the back of my mind is reminding me that, since you read this blog, you’re a self-selected, biased sample for cynicism.
Sunday Afternoon says
Don’t forget that a “pre-print” by definition is BEFORE peer review. Not that peer review (as cvoinescu in #7 said) means that any data and conclusions are correct.
In my experience as a reviewer for low impact journals catering to an obscure branch of applied physics, I can hopefully catch things that are obviously wrong and occasionally things that are subtly wrong. Peer review does not require replicating the results before publication.
Marcus Ranum says
Sunday Afternoon@#16:
Don’t forget that a “pre-print” by definition is BEFORE peer review. Not that peer review (as cvoinescu in #7 said) means that any data and conclusions are correct.
Right. That’s why nobody took it seriously; they all waited for the peer-reviewed version.
They didn’t? Oh, right.
cvoinescu says
Marcus @ #15: But a little voice in the back of my mind is reminding me that, since you read this blog, you’re a self-selected, biased sample for cynicism.
Fair point. But you’d enjoy Journal Club when we talk about a shoddy paper.
jrkrideau says
@ Marcus
The paper was a pre-print and not peer-reviewed, though, as cvoinescu @ 7 points out, it might not have been caught in a review anyway. I have seen one epidemiologist comment that he had thought it a rather poor paper but had missed just how horrible it was. Well, complete—and weird—fraud as far as I can see.
BTW, the “data” is a total dog’s breakfast. It seems to have been created by someone who did not know what a spreadsheet is. Nick Brown has a fascinating critique of the data.
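For anyone wondering what “mathematically impossible” tables look like, here is a minimal sketch (my own illustration with invented numbers, not the actual data from the preprint) of one kind of consistency check, in the spirit of Brown and Heathers’ GRIM test: if a variable only takes whole-number values, a reported mean has to be reachable from some integer total for the stated group size.

```python
# A GRIM-style consistency check (after Brown & Heathers' GRIM test): if a
# variable can only take whole-number values (ages, counts, days), then the
# reported mean times the group size must be close to an integer. The rows
# below are made-up illustrations, NOT numbers from the retracted preprint.

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Could `reported_mean` (as reported, to `decimals` places) have come
    from n integer-valued observations?"""
    target = round(reported_mean, decimals)
    nearest = int(reported_mean * n)
    # Any integer total that could round to the reported mean lies in a
    # small window around reported_mean * n.
    return any(round(total / n, decimals) == target
               for total in range(nearest - 1, nearest + 3))

# (description, reported mean, group size) -- hypothetical values
rows = [
    ("age, group A (years)", 47.30, 100),
    ("age, group B (years)", 52.17, 60),
    ("days to recovery",      8.42, 70),   # no integer total over 70 patients works
]

for label, mean, n in rows:
    verdict = "possible" if grim_consistent(mean, n) else "IMPOSSIBLE"
    print(f"{label}: mean {mean}, n={n} -> {verdict}")
```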
flex @#4
jazzlet@ 8
Wakefield was also paid, if I recall correctly, something in the neighborhood of £100,000 by a firm of solicitors who needed some evidence about the “dangers” of the MMR vaccine. I think his vaccine may even have been an afterthought.