“A surprising amount of death of small vertebrates in the Amazon is likely due to arthropods such as big spiders and centipedes.”

See? This is why we don’t invite mygalomorphs and scolopendrids to any of our parties. They tend to eviscerate and decapitate and consume innocent visitors and splatter fluids all over everything. The Araneomorphae are much tidier, wrapping up their victims in silk and then neatly puncturing them and sucking out their guts. It’s the difference between an axe murderer and the gangster who lays down plastic before executing his enemy. Who would you rather have at a social event?

I ☠️ flat-earthers

I refuse to watch this new Netflix documentary, Behind the Curve, because even if it is ripping ruthlessly into those idjits, it’s giving them more attention than they deserve. They’re also intellectually dishonest.

One of those Flat Earthers is Bob Knodel, who hosts a YouTube channel entirely dedicated to the theory and who is one of the team relying on a $20,000 laser gyroscope to prove the Earth doesn’t actually rotate.

Except… It does.

“What we found is, when we turned on that gyroscope, we found that we were picking up a drift,” Knodel explains. “A 15-degree per hour drift.

“Now, obviously we were taken aback by that – ‘Wow, that’s kind of a problem.’

“We obviously were not willing to accept that, and so we started looking for ways to disprove it was actually registering the motion of the Earth.”

You know what they say: If your experiment proves you wrong, just disregard the results!
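For the record, a 15-degree-per-hour drift is exactly what a rotating Earth predicts, since the planet turns 360° in roughly one day. A back-of-the-envelope sketch (the sidereal day length is the only input):

```python
# Earth turns 360 degrees in one sidereal day (~23.934 hours), so a
# stationary ring-laser gyroscope should register a drift of about:
SIDEREAL_DAY_HOURS = 23.934

drift = 360.0 / SIDEREAL_DAY_HOURS
print(f"Expected drift: {drift:.1f} degrees/hour")
```

(The component a gyro actually measures about a local vertical axis also scales with the sine of latitude, but the full rotation rate is the famous 15°/hour.)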

“We don’t want to blow this, you know?” Knodel then says to another Flat Earther. “When you’ve got $20,000 in this freaking gyro.

“If we dumped what we found right now, it would be bad. It would be bad.

“What I just told you was confidential.”

Wow. They spent $20,000 on an instrument that they then chose to ignore. As one of those small college scientists who is trying to patch together gear on little bitty $400, $600 grants, and who bought a microscope camera for $2000 out of his own pocket, I’m more than a little appalled. Next time someone decides to drop a chunk of money on some crackpot, could they just send it to me, instead? I’ll use it responsibly.

If nothing else, that kind of money would fund summer research projects for at least six students, and would help produce competent scientists for the future.

Did you expect brand loyalty from a spider?

No one is surprised; instead, all the arachnologists are thinking “Cool!” Scientists in the Amazon captured a video of a mygalomorph spider chowing down on a young mammal. Mygalomorphs (tarantulas, funnel web spiders, trap door spiders) are big arthropods that will kill and eat anything close to their own size that they can ambush, so they’ll eat other arthropods, small birds, reptiles, and yes, mammals.

You probably shouldn’t eat ET

Charlie Jane Anders says that if we meet intelligent Space Aliens, we’d probably try to eat them (and we shouldn’t). I agree, because people are horrible, but I also think we shouldn’t because at the least we’re going to get Space Diarrhea, but we’re probably just going to get Space Death.

Anyway, I made a video. Unfortunately, I didn’t script it, but just charged off extemporaneously, which means it ended up about an hour long. Sorry. College professor. Wind me up, I talk for an hour about anything.

The summary: Earth life maintains mutual compatibility (more or less) because of its common origin, 4 billion years of co-evolution, and because specialization and cooperation maximize efficient extraction of resources. Aliens have none of that, and are quite likely to have diverged biochemically in ways that are inherently inimical to our biochemistry, and we have had no opportunity to adapt to their differences. I also suggest that one hypothesis to explain the so-called Fermi Paradox is that habitable worlds evolve such different detailed chemistries that they are basically tainted toxic soups to other species, and that any sensible starfaring species would flag stellar systems with living biospheres with a great big biohazard symbol. Mars-like worlds that lack any native life (presumably) but are terraformable might be the optimal target for human colonization.

Now to print a few of these out and post them around the science building…

It’s hard to recruit students for research projects when you’re off on sabbatical. I’ve got one lined up so far, but I have ambitious plans for the summer and would like to get one or two more, so I’ve put together a recruitment poster.

Maybe I should have used a scientifically accurate close-up of a spider face instead of something bright and cartoony, but I have to get them into the lab first. Then they’ll learn to love our new arachnid reality.

Oooh, I’m a disruptor now!

Cool, I’m on this podcast, The Disruptors. According to the blurb, this is what we talked about.

  • The problem with religion and creationism clouding public discourse
  • Why evolution is so important to understand and how conservatives have created fake doubt
  • How embryos evolve and why understanding the stages is actually quite important
  • Why PZ’s more than a little worried about CRISPR and genetic engineering
  • The truth about Gattaca and designer babies
  • Why Buddhism’s not much better than other religions in PZ’s book
  • How religion came to be and why we’re still a long way off from eradicating it
  • Why fake news mirrors religious beliefs and is caused by many of the same human flaws
  • What scientists should learn from preachers and priests
  • How to think about education and reforming communities
  • Why the world is so divided and what we can do about it
  • The science of gene testing and why we know a lot less than we think we do

I’m old, my memory is going, I don’t remember everything we talked about, but apparently it was everything. You be the judge; let me know what I got wrong.

Also, for those of you into podcasts in general, I’ll be on Philosophers in Space later this week, despite being neither a philosopher nor in outer space. I’ll let you all know when that drops.

I should have cited Ed Yong

I just submitted a proposal on Monday for in-house funding for student research this summer and next year, specifically to assemble a Spider Squad to do a local survey of spider taxa and numbers. I cited the Sanchez-Bayo and Wyckhuys article as evidence that there are grounds for concern about declines in arthropod numbers, and argued that spiders are a good proxy for insect populations, because they’d also give us a perspective on those non-charismatic insects, not just butterflies and bumblebees, that form their food supply.

I just this morning got around to reading Ed Yong’s summary of the Insect Apocalypse, and I agree completely. The review suggests that it’s all bad news, that we should be concerned, and that we should be studying this more thoroughly, but that the panic over insect armageddon is grossly over-inflated. Nothing is going to make insects go extinct, short of a planet-sterilizing impact with a world-killing asteroid.

The Sanchez-Bayo and Wyckhuys review is fine, it’s real data, but it’s not necessarily representative. So what do we need to do? Fund more science!

She and others hope that this newfound attention will finally persuade funding agencies to support the kind of research that has been sorely lacking—systematic, long-term, widespread censuses of all the major insect groups. “Now more than ever, we should be trying to collect baseline data,” Ware says. “That would allow us to see patterns if there really are any, and make better predictions.” Zaspel would also love to see more support for natural-history museums: The specimens pinned within their drawers can provide irreplaceable information about historical populations, but digitizing that information is expensive and laborious.

“We should get serious about figuring out how bad the situation really is,” Trautwein says. “This should be a huge wake-up call, and we should get on the ball instead of quibbling.”

What a coincidence — that’s what I said in my proposal. We need to collect baseline data, which is what I aim to do in this first year. And then, of course (hint, hint) I should get funding to keep collecting data for years. We’ll be covered with spiders!

Here’s how you evaluate the scientific rigor of a field

Warning: it’s boring, tedious, hard work. There’s nothing flashy about it.

First step: define a clear set of tested standards. For clinical trials, there’s something called the Consolidated Standards of Reporting Trials (CONSORT), which was established by an international team of statisticians, clinicians, etc., and defines how you should carry out and publish the results of trials. For example, you are supposed to publish pre-specified expected outcomes: “I am testing whether an infusion of mashed spiders will cure all cancers”. When your results are done, you should clearly state how they address your hypothesis: “Spider mash failed to have any effect at all on the progression of cancer.” You are also expected to fully report all of your results, including secondary outcomes: “88% of subjects abandoned the trial as soon as they found out what it involved, and 12% vomited up the spider milkshake.” And you don’t get to reframe your hypothesis to put a positive spin on your results: “We have discovered that mashed-up spiders are an excellent purgative.”

It’s all very sensible stuff. If everyone did this, it would reduce the frequency of p-hacking and improve the statistical validity of trial results. The catch is that it would also be harder to massage your data to extract a publishable result, because journals tend not to favor papers that say, “This protocol doesn’t work”.
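To see why undeclared outcomes matter, here’s a toy calculation of my own (not from the paper): if each extra outcome a trial quietly tests is an independent check at the usual α = 0.05, the chance of at least one spurious “significant” finding climbs fast.

```python
# Chance of at least one false positive when a trial tests k outcomes,
# each treated as an independent test at significance level alpha.
# (Independence is a simplifying assumption for illustration.)
alpha = 0.05
for k in (1, 2, 5, 10):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} outcomes -> {p_any:.0%} chance of a spurious result")
```

With five extra outcomes, the false-positive chance is already well over 20%, which is why pre-specification matters.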

So Ben Goldacre and others dug into this to see how well journals which had publicly accepted the CONSORT standards were enforcing those standards. Read the methods and you’ll see this was a thankless, dreary task in which a team met to go over published papers with a fine-toothed comb, comparing pre-specified expectations with published results, re-analyzing data, going over a checklist for every paper, and composing a summary of violations of the standard. They then sent off correction letters to the journals that published papers that didn’t meet the CONSORT standard, and measured their response.
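The core of that comparison can be sketched as simple set arithmetic; the outcome names below are invented for illustration, not taken from any actual trial.

```python
# Hypothetical example: compare a trial's pre-specified outcomes
# (from its registry entry) with the outcomes reported in the paper.
prespecified = {"overall survival", "tumour progression"}
reported = {"overall survival", "quality of life", "biomarker response"}

missing = prespecified - reported      # pre-specified but never reported
undeclared = reported - prespecified   # reported but never pre-specified

print("Missing pre-specified outcomes:", sorted(missing))
print("Undeclared additional outcomes:", sorted(undeclared))
```

The tedium in the real study comes from extracting those two sets reliably from protocols and papers, not from the comparison itself.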

I have to mention this here because this is the kind of hard, dirty work that needs to be done to maintain rigor in an important field (these are often tests of medicines you may rely on to save your life), and it isn’t the kind of splashy stuff that will get you noticed in Quillette or Slate. It should be noticed, because the results were disappointing.

Results
Sixty-seven trials were assessed in total. Outcome reporting was poor overall and there was wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported, journal range 25–96%), secondary outcomes (mean 55%, range 31–72%), and number of undeclared additional outcomes per trial (mean 5.4, range 2.9–8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%, journal range 67–100%). Twenty-three letters were published (40%) with extensive variation between journals (range 0–100%). Where letters were published, there were delays (median 99 days, range 0–257 days). Twenty-nine studies had a pre-trial protocol publicly available (43%, range 0–86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided correspondence that identified misreporting; we identified possible breaches of ethics and publishing guidelines.

Conclusions
All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research sharing all data openly and pro-actively in real time as feedback on critiqued studies. This is the first empirical study of major academic journals’ willingness to publish a cohort of comparable and objective correction letters on misreported high-impact studies. Suggested improvements include changes to correspondence processes at journals, alternatives for indexed post-publication peer review, changes to CONSORT’s mechanisms for enforcement, and novel strategies for research on methods and reporting.
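The percentages in that abstract check out against the raw counts; here’s a quick sanity check using the numbers from the Results above:

```python
# Counts taken from the quoted Results section.
trials = 67                # total trials assessed
needing_correction = 58    # trials with discrepancies requiring a letter
letters_published = 23
protocols_public = 29

print(f"Trials needing correction: {needing_correction / trials:.0%}")
print(f"Correction letters published: {letters_published / needing_correction:.0%}")
print(f"Pre-trial protocol public: {protocols_public / trials:.0%}")
```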

People. You’ve got a clear set of standards for proper statistical analysis. You’ve got a million dollars from the NIH for a trial. You should at least sit down, study the appropriate methods for analyzing your results, and make sure you follow them. This sounds like an important ethical obligation to me.