Charles Murray is still an ignoramus

I’ve been telling you, Charles Murray is an ignorant hack. I can’t stand listening to this know-nothing pontificating on genetics when he’s so full of shit on the topic, which doesn’t stop him from being arrogantly confident about it.

Anyway, here’s a good critique of The Bell Curve — it’s hard to believe we still have to argue about it.

Understandably, these arguments provoked the ire of progressively minded scientists and commentators. However, the sweeping and reflexive manner in which opponents of the hereditarian arguments advanced their objections to The Bell Curve often led these critics to adopt counterproductive conclusions. Unhelpfully, they conflated two distinct issues. The first is the question of what it means to claim that something is genetic, and the second is the inevitability of certain life outcomes based on the biology of a particular organism.

Properly speaking, genetics concerns some characteristic of an organism varying across individuals in a group in a given context. It is, by definition, not an explanation of the behavior or development of a given individual in a given instance. Conflating the issue of the causes of differences with that of the inevitability of the development of a particular organism is an important part of the hereditarian rhetorical strategy deployed by the likes of Herrnstein and Murray. To the extent that their arguments have managed to gain some traction in the world, it has been because they have managed to convince their critics to commit the error for them.

Whoa there — the heart of my criticisms of Murray has always been that genetics is not as determinative as the naive people who learned about Punnett squares in fourth grade think. But do go on, this is an important definitional issue and bears repeating.

But the confusion in Murray and Herrnstein’s thinking doesn’t just stop at their pessimism about the kind of practical responses to differences purportedly caused by genetics — it goes all the way down to their understanding of what genetics is. Let’s start by clarifying what we mean by “genetic” and outline why that which is genetic is not necessarily inevitable. For one, genetics deals with groups of organisms rather than the life outcomes of individual organisms. All organisms have genes, but it takes groups of organisms to have genetics because genetics is ultimately about how variation is structured within a group.

Take a single tomato plant in isolation. It has a genome that is between one-fourth and one-third the size of a human's in terms of the raw amount of DNA. Inside its genome are a few tens of thousands of genes, which, in this case, are stretches of the genome that form a chemical template for the cellular production of the proteins and other biochemicals that are vital for the structure and function of organisms. However, since we are dealing with a single plant at a single point in time, there is no comparative context that would allow us to identify the differences among organisms that characterize the rich diversity of life.

Exactly! This is also why it’s important that students actually do real crosses with real organisms. The abstractions of theory might tell you that oh, one quarter of the progeny will have a particular phenotype, but when you sit down and have to closely examine a thousand flies, you get to see all the variation you did not predict and you learn that it’s never as simple as the models tell you it is. The variation is also interesting.

But yes, genetics is fundamentally probabilistic. You can’t use it to predict individual destiny. It’s also the case that genetics has significant interplay with the environment.

But even having many organisms to compare is not sufficient for a biological system to display genetics in the proper sense of the term. Genetics in the sense that matters is ultimately about variation that arises from genetic differences. To see this, think again about tomatoes. They can be cloned with ease by taking cuttings from a single plant and growing them in their own allotments of soil. Genetically, the different newly individualized plants will be identical to one another, with the exception of a very few mutations — spontaneous changes to DNA that can occur during cellular replication.

If we were to compare a large number of these cloned tomato plants, we would find many differences among them. The shapes and sizes of leaves would differ, as would the coloration of the fruits and the pattern of branching along the stalks. Since, on account of being clones, the plants are all genetically identical, these differences could not be attributable to genetics. While each of the plants has genes and we have a group of plants to form the basis for comparison needed to establish that there is variation, there are no genetic differences among the plants that could account for any of that variation. That is, while our tomato plants have genes, they display no genetic differences from one another despite having physiological differences.

Yes, that’s always been obvious if you actually look at populations. I had tanks of zebrafish that were about as genetically uniform as you can get, highly inbred for over a century — yet I could recognize individuals in a tank and see differences in behavior. I’ve only been inbreeding spiders for a half dozen generations, but I don’t see variation diminishing, at least not yet.
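To put some numbers on that distinction, here's a minimal sketch in Python. It's a toy model of my own with made-up numbers, not anything from the tomato literature: phenotype is treated as a genetic value plus environmental noise, so a population of clones still shows plenty of phenotypic variance even though the heritable fraction of that variance is zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n_plants = 1000

# Hypothetical "genetic values" for some trait, say leaf size:
# clones all inherit the same value; an outbred group varies.
clone_genes   = np.full(n_plants, 10.0)
outbred_genes = rng.normal(loc=10.0, scale=2.0, size=n_plants)

# Every plant also gets its own environmental luck (soil, light, water,
# developmental noise), clones and outbred plants alike.
clone_pheno   = clone_genes   + rng.normal(0.0, 2.0, n_plants)
outbred_pheno = outbred_genes + rng.normal(0.0, 2.0, n_plants)

print("Phenotypic variance, clones: ", round(clone_pheno.var(), 2))
print("Phenotypic variance, outbred:", round(outbred_pheno.var(), 2))

# Broad-sense heritability = genetic variance / phenotypic variance.
print("Heritability, clones: ", round(clone_genes.var() / clone_pheno.var(), 2))
print("Heritability, outbred:", round(outbred_genes.var() / outbred_pheno.var(), 2))
```

The clones vary just as visibly as the outbred plants do; the difference is that none of their variation can be attributed to genetic differences, and that attribution is the only thing heritability measures.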

How do people take Murray seriously when his fundamental understanding of biology is so wrong?

The horrible ghouls of 20th century anthropology

This is a photo of Mary Sara, an 18-year-old Sami woman who traveled to Seattle in 1933, accompanying her mother, who was getting cataract surgery. While there, Mary died of tuberculosis, which is tragedy enough…but then the ghouls arrived.

The Smithsonian wanted her brain.

The young woman — whose family was Sami, or indigenous to areas that include northern Scandinavia — had traveled with her mother by ship from her Alaska hometown at the invitation of physician Charles Firestone, who had offered to treat the older woman for cataracts. Now, Firestone sought to take advantage of Sara’s death for a “racial brain collection” at the Smithsonian Institution. He contacted a museum official in May 1933 by telegram.

Ales Hrdlicka, the 64-year-old curator of the division of physical anthropology at the Smithsonian’s U.S. National Museum, was interested in Sara’s brain for his collection. But only if she was “full-blood,” he noted, using a racist term to question whether her parents were both Sami.

The Smithsonian houses over 30,000 body parts, including hundreds of brains.

Nearly 100 years later, Sara’s brain is still housed by the institution, wrapped in muslin and immersed in preservatives in a large metal container. It is stored in a museum facility in Maryland with 254 other brains, amassed mostly in the first half of the 20th century. Almost all of them were gathered at the behest of Hrdlicka, a prominent anthropologist who believed that White people were superior and collected body parts to further now-debunked theories about anatomical differences between races.

There they sit to this day, gathered by Ales Hrdlicka, who somehow became a curator at the Smithsonian and a prominent defender of racist pseudoscience.

Hrdlicka, who was born in what is now the Czech Republic, received medical training from the Eclectic Medical College of New York City and the New York Homeopathic Medical College in Manhattan before moving into the field of anthropology. He was seen as one of the country’s foremost authorities on race, sought by the government and members of the public to prove that people’s race determined physical characteristics and intelligence.

He was also a longtime member of the American Eugenics Society, an organization dedicated to racist practices designed to control human populations and “improve” the genetic pool, baseless theories that would be widely condemned after the Nazis used them to justify genocide and forced sterilization during the Holocaust. In speeches and personal correspondence, he spoke openly about his belief in the superiority of White people, once lamenting that Black people were “the real problem before the American people.”

“There are differences of importance between the brains of the negro and European, to the general disadvantage of the former,” he wrote in a 1926 letter to a University of Vermont professor. “Brains of individual negroes may come up to or near the standard of some individual whites; but such primitive brains as found in some negroes … would be hard to duplicate in normal whites.”

That is truly remarkable. The article focuses on the abuse of autonomy of so many people who had their brains scooped out and sent off to Washington DC, and that is definitely an important issue. But I was reading it and asking myself, “What science was done? What did we learn? What did he discover to justify calling the brains of black people ‘primitive’?” It turns out to have been nothing.

The extent of Hrdlicka’s own research on the brains is unclear. When a professor wrote to him and asked about the differences he found between the brains of people of different races, he replied that research studies showed the superiority of White brains, without citing any studies of his own. He published a 1906 study on brain preservatives, recording the weight of human and animal brains and comparing how they fared in a chemical solution. But The Post found no other research on the brains by Hrdlicka.

I know a bit about neuroscience, and I find the whole approach unproductive and baffling. Sure, you could do crude measurements, weighing brains and slicing them open to measure the gross morphology of regions and nuclei…but we know that all of that is so variable and often so irrelevant to the functioning of those brains that you can't learn anything from it. We simply don't know enough about the details of how brains work to conclude anything, aside from major abnormalities, about the minds housed in those lumps of meat by hacking them up, and you especially couldn't do anything with that information in the early 20th century.

Basically, Hrdlicka was nothing but an obsessive and rather morbid collector. His ‘credibility’, what there was of it, rested entirely on accumulating the largest collection of brains, like bloody tragic Pokemon. He didn’t have to think. He didn’t have to study. He was just gathering gory fragments of human beings and parading them before other bad scientists who thought this was an accomplishment. The Smithsonian should be ashamed; we should all be ashamed that this charade of race ‘science’ was perpetrated for so long, and that people continue, to this day, to treat it as a useful way to justify their bigotry.

Hrdlicka really was a ghoul. When an exhibit of Filipino culture, represented by a large number of people from that country, was held at the World’s Fair, he had one thing on his mind: “That summer, Hrdlicka headed to St. Louis, hoping to take brains from Filipinos who died.” He collected four brains from the unfortunate people who happened to die there (tuberculosis and pneumonia were Hrdlicka’s friends).

Ugh. The Smithsonian, and other museums around the country, need to address this ugly stain on their history, and make amends to the people they exploited for such stupid ends.

Science relies on honest observation

Elisabeth Bik is getting mad. She has spent the better part of a decade finding examples of scientific fraud, and it seems to be easy pickings.

Although this was eight years ago, I distinctly recall how angry it made me. This was cheating, pure and simple. By editing an image to produce a desired result, a scientist can manufacture proof for a favored hypothesis, or create a signal out of noise. Scientists must rely on and build on one another’s work. Cheating is a transgression against everything that science should be. If scientific papers contain errors or — much worse — fraudulent data and fabricated imagery, other researchers are likely to waste time and grant money chasing theories based on made-up results…

But were those duplicated images just an isolated case? With little clue about how big this would get, I began searching for suspicious figures in biomedical journals…. By day I went to my job in a lab at Stanford University, but I was soon spending every evening and most weekends looking for suspicious images. In 2016, I published an analysis of 20,621 peer-reviewed papers, discovering problematic images in no fewer than one in 25. Half of these appeared to have been manipulated deliberately — rotated, flipped, stretched or otherwise photoshopped. With a sense of unease about how much bad science might be in journals, I quit my full-time job in 2019 so that I could devote myself to finding and reporting more cases of scientific fraud.

Using my pattern-matching eyes and lots of caffeine, I have analyzed more than 100,000 papers since 2014 and found apparent image duplication in 4,800 and similar evidence of error, cheating or other ethical problems in an additional 1,700. I’ve reported 2,500 of these to their journals’ editors and — after learning the hard way that journals often do not respond to these cases — posted many of those papers along with 3,500 more to PubPeer, a website where scientific literature is discussed in public….

Unfortunately, many scientific journals and academic institutions are slow to respond to evidence of image manipulation — if they take action at all. So far, my work has resulted in 956 corrections and 923 retractions, but a majority of the papers I have reported to the journals remain unaddressed.

I’ve seen some of the fraud reports, and it amazes me how stupid the scientists committing these fakes must be. It’s as if they think JPEG artifacts don’t exist, even though those artifacts are an obvious fingerprint when chunks of an image are duplicated; they don’t realize that you can reveal cheating just by tweaking a LUT and watching all the duplicated edges light up. The only reason to do it is to adjust your data to make it look the way you expected it to look, which is an obvious act against the most basic scientific principle: you’re supposed to use science to avoid fooling yourself, not to make it easy to fool others.
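For the simplest kind of fakery, exact copy-paste within a single image, you don't even need a trained eye. Here's a toy Python sketch of my own (the function and the numbers are made up for illustration; this is not Bik's workflow, and it ignores flips, rotation, rescaling, and compression noise) that flags duplicated regions by hashing image patches:

```python
import numpy as np
from collections import defaultdict

def find_duplicate_patches(image, patch=16):
    """Hash every non-overlapping patch and report any patch content that
    shows up at more than one location. Only catches exact copy-paste;
    real forensic work also handles flips, rotation, rescaling and
    compression noise."""
    h, w = image.shape
    locations = defaultdict(list)
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = image[y:y + patch, x:x + patch]
            locations[block.tobytes()].append((y, x))
    return [locs for locs in locations.values() if len(locs) > 1]

# Fabricate a noisy "blot" image, then paste one band over another,
# the way a cheater might duplicate a control lane.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
img[64:80, 0:64] = img[16:32, 0:64]   # the duplication

for locs in find_duplicate_patches(img):
    print("Identical patches at:", locs)
```

Real forensic tools, and Bik's pattern-matching eyes, do far more than this, but the point stands: duplications leave mechanical fingerprints that are trivially detectable.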

This behavior ought to be harshly punished. If image fakery became an issue when one of my peers came up for tenure or promotion, I’d reject them without hesitation. It’s not even a question: this behavior is a deep violation of scientific and ethical principles, and would make all of their work untrustworthy.

Also, this is a problem with the for-profit journal publication system. From the publishers’ point of view, those scientists paid money for those pages; how could they possibly enforce honesty? The bad actors wouldn’t pay for journal articles anymore!

But guess what happens when Elisabeth Bik takes a principled stand?

Most of my fellow detectives remain anonymous, operating under pseudonyms such as Smut Clyde or Cheshire. Criticizing other scientists’ work is often not well received, and concerns about negative career consequences can prevent scientists from speaking out. Image problems I have reported under my full name have resulted in hateful messages, angry videos on social media sites and two lawsuit threats….

Things could be about to get even worse. Artificial intelligence might help detect duplicated data in research, but it can also be used to generate fake data. It is easy nowadays to produce fabricated photos or videos of events that never happened, and A.I.-generated images might have already started to poison the scientific literature. As A.I. technology develops, it will become significantly harder to distinguish fake from real.

Science needs to get serious about research fraud.

How about instantly firing people who do this? Our tenure contracts generally have a moral turpitude clause, you know. This counts.

I always wondered how you can be a university president & on the board of pharmaceutical companies & run a gigantic research lab

I know that guy! That’s Marc Tessier-Lavigne! He’s about my age, and we shared similar interests — we were both interested in axon guidance, and I followed his work avidly some years ago. He was publishing about netrins, signaling molecules that affect the trajectory of growing neurons, while I was studying growing neurons in grasshopper embryos. I met him several times and attended talks he gave at various meetings; it was hard to avoid Tessier-Lavigne.

Our careers followed very different paths, though. I ended up teaching at a small liberal arts college, while he got a position at UCSF, and then was CSO at Genentech, and then was president of Rockefeller University, was on the boards of various pharmaceutical companies, and finally was president of Stanford University. He was a major go-getter, running gigantic factory-style labs, getting regularly published in Science and Nature and Cell. It was a life that looked horrible to me, just as my life of obscurity and teaching would have looked horrible to him, if ever he had deigned to notice me.

Why would I have disliked the prestigious path he took in science? Because he turned himself into a manager, a guy who was disconnected from the science that was being done in his massively well-funded labs. Ick. I’d rather play at the bench and help students get enthusiastic about doing science.

I may have chosen wisely, because now Tessier-Lavigne has been compelled to resign after an investigation found evidence of fraud in his work. Yikes. This is bad.

The Board of Trustees’ inquiry stopped short of accusing Tessier-Lavigne — who has been Stanford’s president since 2016 — of fraud, saying there’s no evidence he “personally engaged in research misconduct.”

However, it was concluded that five papers on which Tessier-Lavigne was a principal author included work from “some members of labs overseen by Dr. Tessier-Lavigne” who had “either engaged in inappropriate manipulation of research data or engaged in deficient scientific practices, resulting in significant flaws in those papers.”

When the issues emerged, “Tessier-Lavigne took insufficient steps to correct mistakes in the scientific record,” the board’s report said.

This is what happens when you become an over-worked administrator with your fingers in too many pies. That does not excuse him — he has his name on so many papers, and getting an authorship entails significant responsibilities — and it just tells you the kind of peril ambition can put you in.

I’ve been teaching about netrins and robo and neuropilins and all these molecules in neurodevelopment for years. Am I going to have to put an asterisk by the source papers and review their validity now? I’m hoping the descent into sloppiness was a late-career problem that doesn’t call into question all the fundamental stuff he did.

Progress in embryo analysis!

Our new development in spider development is pretty basic stuff. We’re dechorionating embryos! That is, stripping off a thin membrane surrounding the embryo, so we can do staining and fixation and various other things. It’s a standard invertebrate technique — it turns out you can remove it by just washing them in bleach. Look, it works! This is a Parasteatoda embryo.

We’re still tinkering with the timing of the treatment. Five minutes is way too long; that basically dissolved the whole embryo. All it takes is a brief wash to break the chorion down. We’re also working out methods for manipulating them — they’re tiny! Just pipetting them into a solution is a great way to lose them. We’re now using a cut-off microfuge tube to make a cylinder that we cap with a sheet of fine nylon mesh, to lower them into the solution. Of course, then we have to separate the embryos from the mesh. Fortunately, we opened up one egg sac and 140 embryos rolled out, so we have lots of material to experiment on.

The next question is whether they survive our abuse. We’ve got some of them sitting under a microscope, time-lapsing their response. We’ll see if they grow…or die and fall apart.

Being a good scientist might be harder than you think


The ideas in this paper, Ten simple rules for socially responsible science, ought to be explicitly spelled out in any grad program, especially since many of the incentives in science careers tend to oppose their rules. Read the whole thing, but here are a few of my comments on their list.

Rule 1: Get diverse perspectives early on

Some people seem to believe in the myth of the lone genius who comes up with brilliant ideas and executes them…and then gets a Nobel prize. It doesn’t work that way. Ever. It’s totally collaborative. In my classes I literally force students to work in teams in the lab, and there are always a few students who insist on going it alone. That’s missing the point!

Rule 2: Understand the limits of your design with regard to your claims

It’s tempting to go too far and make extravagant justifications for your work. Studying spiders will lead to a cure for cancer! Not really, but it would be a big boost to getting grant money if it were true.

Rule 3: Incorporate underlying social theory and historical contexts

I’ve experienced this unfortunate attitude that the only work that matters is stuff that’s been published in the last five years. I’ve had students ask me if it was OK to cite a paper from 1991 in their thesis project. Yeah? Why not? I cited papers from the 19th century in my PhD thesis! Dig deep, go interdisciplinary, drink from the Pierian spring, it’ll make your work better.

Rule 4: Be transparent about your hypothesis and analyses

Obviously. An experiment is not a fishing expedition.

Rule 5: Report your results and limitations accurately and transparently

Uh-oh. It’s shocking that we have to spell that out.

Rule 6: Choose your terminology carefully

This is about jargon. I’ve written a few things where I’ve totally lost people because they don’t know what I’m talking about. It’s also very common for me to make lots of comments in first drafts of student papers that they need to spell out that acronym and need to explain their terminology.

Rule 7: Seek rigorous review and editorial processes

It’s common to see resentment at reviewer comments, and sometimes they are wrong…but you have to try and see it as a process to improve your work. That’s hard, though, especially if you’ve got a job that only cares about the volume of papers pumped out. Administrators do not read your work for quality.

Rule 8: Play an active role in ensuring correct interpretations of your results

That’s a good idea. Science isn’t fire-and-forget, a paper is a long-term commitment to a set of ideas that may need defending. Also, to be honest, few people will actually read your paper — your bigger audience is the people who come to your public talks or hear your interview on NPR or read the blog post summarizing it.

Rule 9: Address criticism from peers and the general public with respect

Awww, do we have to? Yes. That “peer” specifier is critical, though: I’m not going to treat creationists, anti-vaxxers, or climate change deniers kindly.

Rule 10: When all else fails, consider submitting a correction or a self-retraction

You’d have to do that less often if you heed #1, #5, #7, and #8, especially #7.

Most of the web advice I see about how to be a good scientist involves basic personal attributes: curiosity, observational skills, quantitative measurements, etc., and all that is true, but you don’t see much about all the essential aspects of being a cooperative community member. Maybe if we spent more time on that in early education we’d have fewer sociopaths.

Nah, there’s no cure.

Variation is wonderful

I’m stealing a fascinating thread on Twitter from Kathleen DePlume. In some ways, it’s unsurprising: if you compound the natural variation in enough parameters, you’ll discover that everyone is unique. It’s a matter of allowing broad tolerances, and the real question is…how broad do they have to be to accommodate 99% of humanity? And another question would be…don’t the remaining 1% deserve a place as well? The math is nifty but it isn’t the whole of human reality.

So, did you ever wonder why car seats and seatbelts are so wonderfully adjustable? It all goes back to cockpit manufacture.

The USAF wanted to make aircraft with seats and belts fitted to the “normal” airman; the tolerances weren’t too wide, but lots of fellas are normal, right?

Wrong.

As it turned out, hilariously wrong.

You see, they measured several thousand enlisted men (just men – these were the dark times before women were people) on just a few things.

Leg length, knee to ankle, hip to knee, various seat measurements. Seating height to shoulder.

Shoulder width. Arm length. Shoulder to elbow, elbow to wrist.

You get the point.

Measurements that would allow the cockpit and belts to be correct and safe, as long as they were “close enough” to the normal specifications.

So, after taking these measurements – a great undertaking, the measurers got so good at it that they could do all 38* measurements in under 2 minutes – they analysed the data.

*I might be misremembering the exact number

They figured that if every measurement had tolerances that fit 30% or so of “normal” men, then even after losing a few percent to the abnormally shaped weirdos (you know the ones – people whose arms are way longer than their height, or who have tiny hands compared to their feet), they’d still fit at least 20% of their potential pilots into the custom-measured Everyman cockpits, right?

Wrong.

So, so very wrong.

How many pilots do you think fit in the normal measurements on all 38 metrics?

Go on, take a guess. I’ll wait.

Actually, no I won’t, because I’m writing this as a thread.

Zero. The answer is zero.

Not a single soldier was within tolerances on all measurements.

Out of thousands and thousands of airmen measured, every last man was abnormal on at least one.

It turns out that while yes, arm length and leg length aren’t exactly independent (if you’re tall you probably have long arms AND long legs), their r-value isn’t anything like high enough for the purposes the Air Force had in mind. They’re probably long by different amounts.

So it isn’t as simple as going 0.3^38 (a number so small it should be obvious it’ll round to zero), but it also wasn’t what they assumed (0.3 × [almost 1]^37).

It was somewhere in between.

Okay, so where did that leave them?

It left them knowing with utter certainty that they could not design a static cockpit and recruit airmen to fit it.

They had to go the other way. Broaden the tolerances – make it so they could account for broad differences in measurements.

They had to invent adjustable seats. Adjustable straps for the safety harnesses, seats that could travel back and forth a little bit, that sort of thing.

Okay, but how does this relate to cars?

Well, there’s the obvious: once it’s been invented, why not use it in cars? But the older folk among us probably remember bench seats, and maybe even a time when you didn’t put your seatbelt on because you were insulting the driver if you did.

What changed?

Funnily enough, another clever statistician.

This one was tasked with keeping very expensive pilots alive after the Air Force had spent so much money training them up. He was supposed to be looking at the safety equipment within planes, but this was after the war, so…

…pilots weren’t actually dying in the air that much.

Mostly what killed dashing young men back in those days was car crashes.

So the statistician came back with the findings that pilots would live longer if they were forced to wear their damned seatbelts when driving.

Funnily enough, this was a huge part of the impetus to make it law that all passengers have to wear belts in cars.

It’s only sensible – but humans seldom do sensible things unless forced. And pilots are very much human.

So we all wear seatbelts now because pilots are expensive.

The moral of all this?

Mostly that maths is interesting; but also that if someone is jumping up and down demanding their right to call themself “normal”, they are full of sh*t and don’t know what they’re talking about.

Mathematically speaking.

Unfortunately, the thread lacks any mention of sources. I’d want to know a lot more about it before I could cite it as interesting history without any caveats.
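Still, the statistical point in the thread is easy enough to sanity-check. Here's a quick Python simulation with made-up numbers of my own choosing (10 measurements instead of 38, an assumed pairwise correlation of 0.5 between measurements): correlation pushes the fraction of all-around "normal" men above the naive 0.3^n prediction, but nowhere near the 20% the Air Force supposedly hoped for.

```python
import numpy as np

rng = np.random.default_rng(1)

n_airmen = 1_000_000   # simulated airmen (the thread says several thousand were measured)
n_measures = 10        # the thread says roughly 38 measurements were used
rho = 0.5              # assumed correlation between any two measurements

# Equicorrelated, standardized body measurements (leg length, arm length, ...).
cov = np.full((n_measures, n_measures), rho)
np.fill_diagonal(cov, 1.0)
body = rng.multivariate_normal(np.zeros(n_measures), cov, size=n_airmen)

# "Normal" on one measurement = within the middle 30% of its distribution.
lo, hi = np.quantile(body, [0.35, 0.65], axis=0)
normal_on_all = np.all((body >= lo) & (body <= hi), axis=1)

print("If measurements were independent:  ", f"{0.3 ** n_measures:.2e}")
print("Fraction 'normal' on all of them:  ", f"{normal_on_all.mean():.2e}")
print("Fraction 'normal' on any single one:", 0.3)
```

With a realistic amount of correlation the answer lands between the two extremes the thread describes, and with 38 measurements instead of 10 it would be effectively zero, which is exactly what the Air Force found.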

Wisconsin isn’t a popular site for reality

I wish we could vaccinate some people with a heavy dose of reality.

The state next to mine, Wisconsin, has gone insane. The raging anti-vax hysteria has now reached the state legislature:

The Republican-controlled Wisconsin Legislature on Wednesday voted to stop Democratic Gov. Tony Evers’ administration from requiring seventh graders to be vaccinated against meningitis.

The state Senate and Assembly, with all Republicans in support and Democrats against, voted to block the proposal. There is no current meningitis vaccination requirement for Wisconsin students.

The Legislature’s vote also makes it easier for parents to get an exemption from a chicken pox vaccine requirement that is in place for all K-6 students. Evers’ administration wanted to require parents seeking a chicken pox vaccination exemption to provide proof that their child has previously been infected.

WHY? These are well-established, safe vaccines against terrible diseases. They work. But somehow, Republicans have got it in their heads that reasonable evidence-based medicine is bad.

This is getting personal, too. My daughter and son-in-law and my little 4-year-old granddaughter all live in Wisconsin, and she’ll be attending Wisconsin public schools in a year. I don’t want her to get chicken pox or meningitis. Of course, I trust her parents to get her vaccinated even in the absence of a public health requirement — it’s all the other kids we have to worry about.

The incentives are all wrong

Meat “scientist”

There are scientists I respect, and there are scientists I do not. José Manuel Lorenzo is in the latter category, although I’m sure he wouldn’t care. He’s rolling in the money and the false coin of scientific “prestige”.

Meat expert José Manuel Lorenzo, 46, is the researcher who has published the most scientific studies in Spain. He put his name on 176 papers last year, according to a count by John Ioannidis — an expert in biomedical statistics at Stanford University — which was requested by EL PAÍS.

Lorenzo publishes a study every other day (if you include weekends). It’s an astonishing figure, far above the second-highest ranked scientist: the prestigious ecologist Josep Peñuelas, 65, who published 112 studies in 2022.

I’m trying to picture the logistics of all that. It typically takes a month or more to get a paper published, and that’s if there are no revisions or rejections. I’ve heard of high-priority, dramatic results getting a turnaround of a week or so — maybe trash papers that no one cares about similarly get rapid publication. At any rate, it must mean he’s got dozens of papers stacked up in a queue at any one time. How does he find time to cope with revisions, let alone actually write the papers? Forget about actual research. The “evidence” backing up the claims in each publication would have to be generated in a day or two!

Oh wait, there is a way. Don’t do the research, don’t do the writing, and don’t even read the papers.

José Manuel Lorenzo is the head of research at the Meat Technology Center (CTC) — an entity dedicated to meat products, supported by the regional government of Galicia — in San Cibrao das Viñas, a city in the Spanish province of Ourense. A person who has worked with him recalls that, around 2018, his laboratory became “a sausage factory.” Lorenzo went from publishing less than 20 studies a year to signing his name to more than 120. “He doesn’t even have time to read them,” says another person, who has collaborated on projects with the man.

At one point, Lorenzo began collaborating with exotic researchers — who nobody knew about — on topics that have nothing to do with meat. Four months ago, he published a study on the hospital management of monkeypox, alongside Iraqi, Indian and Pakistani co-authors. And a year ago, he and some researchers from India and Saudi Arabia published an article on the treatment of gum disease with bee venom. In a telephone conversation with EL PAÍS, Lorenzo admits that he doesn’t know any of these co-authors in person, nor is he an expert on any of these issues.

That’s a serious lack of integrity he is admitting to. I was trained to understand that if your name was on a paper, you were expected to have contributed significantly to the work and to be familiar with the entirety of the procedures and results. You are responsible for the content of the paper. You can be held accountable for any errors, or worse, any fraud. It’s supposed to be a weighty thing…but not to Lorenzo.

One tool that allows this to go on is the existence of paper mills.

India is one of the countries where so-called “paper mills” are concentrated — factories that churn out scientific studies which are already written and ready to be published in specialized journals. Co-authorship is offered in exchange for money. EL PAÍS requested price rates from one of the Indian companies that send their offers to Spanish scientists: iTrilon, based in Chennai. The company’s scientific director Sarath Ranganathan offered the possibility of being the first author of a study that was already written — entitled Next-generation neurotherapies against Alzheimer’s — in exchange for about $500. It’s also possible to be the fifth co-author of an article titled Emergence of rare microbial infections in India for $430. iTrilon promises to publish these ready-made studies in the journals of the world’s leading scientific publishers: Elsevier, Taylor & Francis, Springer Nature, Science and Wiley. Last year, the academic publishing industry acknowledged that at least 2% of studies each journal receives are considered to be suspicious. Sometimes, the share of suspicious studies runs as high as 46%.

Another factor is that grant review boards and institutional committees are far too content with minimal oversight and superficial evaluation. The problem is that we assess scientific work by counting publications, a system already poisoned by capitalism and exploitation, rather than by whether anyone actually reads the work.

Although, I must admit, I can understand how someone might be tempted by $400 or $500 flowing into their bank account every two days just for rubber-stamping a stupid paper.