Evolution: The Story of Life on Earth

Have you got kids? Are you tangentially related to any young people? Are you young yourself? Do you know anyone who just likes a good story and interesting science?

Well, then, I’m sorry, but reading this article will cost you $12.89. Jay Hosler has a new book out (illustrated by Kevin Cannon and Zander Cannon), Evolution: The Story of Life on Earth (amzn/b&n/abe/pwll), and I’m afraid it’s going to be required reading for everyone, and you’re also all probably going to end up buying multiple copies for gifts.

Really, it’s that good. It’s a comic book about aliens from Glargalia explaining the history of life on earth to young Prince Floorsh by going over the fundamental concepts and hitting a few of the details. It’s entertaining and fun, and sneakily informative.

If you don’t simply trust me, check out the extensive excerpts at the NCSE and at Scientific American.

Hey, and if you don’t like comic books, don’t know any young people, and don’t want to read it yourself, buy a copy anyway and give it to your local library. For America.

The Annals of Thoracic Surgery has its own notions of transparency

I don’t think journal editor L. Henry Edmunds is quite clear on how the scientific method should work: we’re supposed to have the free exchange of information. His journal recently retracted a paper (according to other sources, apparently because the authors, um, “recycled” data from another study), and when asked why, he answered “It’s none of your damned business”, ranted a bit against “journalists and bloggists”, and then made an interesting comparison: “If you get divorced from your wife, the public doesn’t need to know the details.”

Hmmm. Except that the details of your relationship with your wife aren’t part of your professional interactions with colleagues, aren’t usually presented as data in papers and talks, and aren’t part of an institution of collaboration and research that relies on those details. And your marriage isn’t going to someday crack open my chest, or the chests of thousands of other people; the information in your journal’s papers is what those patients will be depending on to improve the quality and duration of their lives.

Optogenetics!

The journal Nature has selected optogenetics as its “Method of the Year”, and it certainly is cool. But what really impressed me is this video, which explains the technique. It doesn’t talk down to the viewer, it doesn’t overhype, it doesn’t rely on telling you how it will cure cancer (it doesn’t), it just explains and shows how you can use light pulses to trigger changes in electrical activity in cells. Well done!

Science is not dead

People keep sending me this link to an article by Jonah Lehrer in the New Yorker: The Decline Effect and the Scientific Method, which has the subheadings of “The Truth Wears Off” and “Is there something wrong with the scientific method?” Some of my correspondents sound rather distraught, like they’re concerned that science is breaking down and collapsing; a few, creationists mainly, are crowing over it and telling me they knew we couldn’t know anything all along (but then, how did they know…no, let’s not dive down that rabbit hole).

I read it. I was unimpressed with the overselling of the flaws in the science, but actually quite impressed with the article as an example of psychological manipulation.

The problem described is straightforward: many statistical results from scientific studies that showed great significance early on turn out to be less and less robust in later studies. For instance, a pharmaceutical company may release a new drug with great fanfare after extremely promising results in clinical trials, and then later, when numbers from its use in the general public trickle back, the drug shows much smaller effects. Or a scientific observation of mate choice in swallows may first show a clear preference for symmetry, but as time passes and more species are examined, or the same species is re-examined, the effect seems to fade.

This isn’t surprising at all. It’s what we expect, and there are many very good reasons for the shift.

  • Regression to the mean: As the number of data points increases, we expect the average values to regress to the true mean…and since often the initial work is done on the basis of promising early results, we expect more data to even out a fortuitously significant early outcome.

  • The file drawer effect: Results that are not significant are hard to publish, and end up stashed away in a cabinet. However, as a result becomes established, contrary results become more interesting and publishable.

  • Investigator bias: It’s difficult to maintain scientific dispassion. We’d all love to see our hypotheses validated, so we tend to consciously or unconsciously select results that favor our views.

  • Commercial bias: Drug companies want to make money. They can make money off a placebo if there is some statistical support for it; there is certainly a bias towards exploiting statistical outliers for profit.

  • Population variance: Success in a well-defined subset of the population may lead to a bit of creep: if the drug helps this group with well-defined symptoms, maybe we should try it on this other group with marginal symptoms. And it doesn’t…but those numbers will still be used in estimating its overall efficacy.

  • Simple chance: This is a hard one to get across to people, I’ve found. But if we accept results at the p=0.05 level, then 1 in 20 experiments with a completely useless drug will still exhibit a significant effect, purely by chance (see the first sketch after this list).

  • Statistical fishing: I hate this one, and I see it all the time. The planned experiment revealed no significant results, so the data are pored over and any significant correlation is seized upon and published as if it were the intended target all along. See the previous explanation. If the data set is complex enough, you’ll always find a correlation somewhere, purely by chance (the second sketch below shows just how easy this is).
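
It’s worth seeing how little it takes for simple chance, the file drawer, and regression to the mean to manufacture a “decline effect” out of nothing. Here’s a minimal simulation sketch in Python; every number in it (sample sizes, trial counts, the one-sided p=0.05 cutoff) is invented for illustration, and it models no particular study.

```python
# Minimal sketch: chance + selective publication => a "decline effect",
# using a drug whose true effect is exactly zero. All parameters are
# arbitrary choices for illustration.
import random
import statistics

random.seed(42)

def run_trial(true_effect=0.0, n=30):
    """One two-arm trial; returns (observed effect, z statistic).
    Assumes unit variance in both arms, a deliberate simplification."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (2.0 / n) ** 0.5          # standard error of a mean difference
    return diff, diff / se

# Simple chance: about 1 in 20 trials of a useless drug clears a
# one-sided p = 0.05 bar (z > 1.645) anyway.
trials = [run_trial() for _ in range(2000)]
published = [d for d, z in trials if z > 1.645]   # the "file drawer" keeps the rest
print(f"'Significant' trials of a useless drug: {len(published) / len(trials):.1%}")

# Regression to the mean: the published trials were selected for luck,
# so their average effect is inflated; fresh, unselected replications
# drift back toward the true effect of zero.
replications = [run_trial()[0] for _ in range(2000)]
print(f"Mean 'published' effect:  {statistics.mean(published):+.2f}")
print(f"Mean replication effect:  {statistics.mean(replications):+.2f}")
```

No fraud and no fading truth required: the early estimate was selected for being lucky, and the later ones weren’t.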
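
Statistical fishing is just as easy to demonstrate: generate nothing but noise for a pile of unrelated variables, then hunt for “significant” pairwise correlations. Again, a hypothetical sketch; the 40 subjects, 20 variables, and the critical r of roughly 0.31 (p=0.05, two-sided, for n=40) are arbitrary choices.

```python
# Minimal sketch of statistical fishing: enough unrelated variables
# guarantee some pair will correlate "significantly" by chance alone.
import itertools
import random
import statistics

random.seed(7)

n_subjects, n_variables = 40, 20

# Pure noise: every variable is independent of every other one.
data = [[random.gauss(0.0, 1.0) for _ in range(n_subjects)]
        for _ in range(n_variables)]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

# For n = 40, |r| above roughly 0.31 is "significant" at p = 0.05 (two-sided).
critical_r = 0.31
pairs = list(itertools.combinations(range(n_variables), 2))
hits = [(i, j) for i, j in pairs
        if abs(pearson_r(data[i], data[j])) > critical_r]
print(f"{len(hits)} 'significant' correlations out of {len(pairs)} "
      f"pairs of pure noise (expect about {0.05 * len(pairs):.0f})")
```

Report only the planned comparison and those 190 chances to be fooled are under control; quietly report whichever pair “worked” as if it had been the hypothesis all along, and you’ve published noise.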

Here’s the thing about Lehrer’s article: he’s a smart guy, and he knows this stuff. He touches on every single one of these explanations, and then some. In fact, the article is structured as a whole series of exactly these explanations: here’s phenomenon 1, and here’s explanation 1 for that result. But here’s phenomenon 2, and explanation 1 doesn’t work…but here’s explanation 2. But now look at phenomenon 3! Explanation 2 doesn’t fit! Oh, but here’s explanation 3. And on and on. It’s all right there, and Lehrer has explained it.

But that’s where the psychological dimension comes into play. Look at the loaded language in the article: scientists are “disturbed,” “depressed,” and “troubled.” The issues are presented as a crisis for all of science; the titles (which I hope were picked by an editor, not Lehrer) emphasize that science isn’t working, when nothing in the article backs that up. The conclusion goes from a reasonable suggestion to complete bullshit.

Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.

One part of that passage is true: it does remind us how difficult it is to prove anything. Yes, science is hard. Especially when you are dealing with extremely complex phenomena with multiple variables, it can be extremely difficult to demonstrate the validity of a hypothesis (I detest the word “prove”: proving isn’t something we do in science, and we know it; Lehrer should, too). What the decline effect demonstrates, when it occurs, is that just maybe the original hypothesis was wrong. This shouldn’t be disturbing, depressing, or troubling at all, except, as we see in his article, when we have scientists who have an emotional or profit-making attachment to an idea.

That’s all this fuss is really saying. Sometimes hypotheses are shown to be wrong, and sometimes if the support for the hypothesis is built on weak evidence or a highly derived interpretation of a complex data set, it may take a long time for the correct answer to emerge. So? This is not a failure of science, unless you’re somehow expecting instant gratification on everything, or confirmation of every cherished idea.

But those last few sentences, where Lehrer dribbles off into a delusion of subjectivity and essentially throws up his hands and surrenders himself to ignorance, are unjustifiable. Early in any scientific career, one should learn a couple of general rules: science is never about absolute certainty, and the absence of black & white binary results is not evidence against it; you don’t get to choose what you want to believe, only to provisionally accept a result; and when you’ve got a positive result, the proper response is not to claim that you’ve proved something, but to focus more tightly, scrutinize more strictly, and test, test, test ever more deeply. It’s unfortunate that Lehrer has tainted his story with all that unwarranted breast-beating, because as a summary of why science can be hard to do, and of the institutional flaws in doing science, it’s quite good.

But science works. That’s all that counts. One could whine that we still haven’t “proven” cell theory, but who cares? Cell and molecular biologists have found it a sufficiently robust platform to dive ever deeper into how life works, constantly pushing the boundaries of uncertainty.

8- to 10-year-old children can be trained to solve scientific puzzles

It really isn’t that hard to learn to think scientifically — kids can do it. In a beautiful example of communicating science by doing it, students at Blackawton Primary School designed and executed an experiment in vision and learning by bees, and got it published in Biology Letters, which is making the paper available for free. It’s nicely done, an exercise in training bees to use color or spatial cues to find sugar water, and you can actually see how the kids were thinking, devising new tests to determine which of those two cues the animals were using. They were also quite good at looking at the data from different perspectives, recognizing an aggregate result but also noting that individual bees seemed to be using different algorithms to find the sugar water.

The kids also wrote the paper, sorta. The adults gathered them together in a pub (ah, Britain!) and had them explain what was going on, while one of the adult coauthors organized the text from their words. The experiment itself isn’t that dramatic, but it’s very cool to see the way the students’ brains are operating to understand the result…so really, the experiment was one of seeing how 8-year-old children can process the world scientifically. It’s an awesome piece of work.

You know what we need now? A professional journal of grade school science (down, Elsevier, down — we don’t want you involved) that can get a network of schools and science teachers involved in putting more of these efforts together. Role models are important, and kids seeing that other kids are doing real science would be an incredibly powerful tool for bringing up a new generation of scientists.

Another charming part of this story is that a gang of grade school kids have done something grown-up creationists haven’t: they’ve done good science and gotten it published.

Quantum atheists!

On Atheist Talk radio on Sunday morning at 9am Central time, James Kakalios will be joining the gang at Minnesota Atheists to talk about his new book, The Amazing Story of Quantum Mechanics: A Math-Free Exploration of the Science that Made Our World (amzn/b&n/abe/pwll). It should be very entertaining. The book looks good, although I’ve only had a chance to flip through it so far…but it’s right here by my side at the computer desk, and it’s on my short list of books to get read over break.

We also have some videos by Jim Kakalios, and here’s one…although I hesitate to put it here. The first time I put up one of his videos, the comments blew up and it turned into the precursor of today’s endless thread. Let’s not do that again, OK?

How hard is that SF?

I got a request to collect participants for an online survey on science fiction — take a look and help out if you want. It’s long, and a little depressing: it’s a list of science fiction movies and TV shows, and you’re supposed to rate their scientific accuracy. I think I’m rather picky about that, so just about all of ’em got slammed when I did it.

I am conducting a small pilot study on the properties of various sci-fi works (focusing on film and TV in particular). For the purposes of this study I designed two web forms (Web-form 1 & Web-form 2) that ask participants to rate sci-fi works in terms of different sci-fi properties. Web-form 1 asks how accurately a sci-fi work portrays scientific facts, and Web-form 2 asks what the work’s general attitude towards science is. The number of sci-fi works that a participant is supposed to rate (121) is substantial (one needs about 20 min to complete one web-form), but it is necessary for the kinds of analyses I’d like to be able to do.

I am in dire need of study participants, as you might imagine. Specifically, people who are above average in terms of scientific literacy and who are also fond of sci-fi. I’m more than certain your blog would provide me with just the right sample population; if you’d kindly »nudge« your »hordes« to go and fill out the two web-forms I provided:

WEB-FORM #1: Soft vs. Hard sci-fi
If you were born on an ODD day of the month (say the 1st, 3rd, 5th, 7th etc.) then please fill out VERSION 1 of web-form 1:
https://spreadsheets.google.com/viewform?hl=en&formkey=dDZXTmM0dFJnQm1sV2ZzWl8yblpkWHc6MQ#gid=0

If you were born on an EVEN day of the month (say the 2nd, 4th, 6th, 8th etc.) then please fill out VERSION 2 of web-form 1:
https://spreadsheets.google.com/viewform?hl=en&formkey=dDBSYWstbDB0bzlhM1FUMXkyZDBBTFE6MA#gid=0

WEB-FORM #2: Optimistic/Utopian vs. Pessimistic/Dystopian
If you were born on an ODD day of the month (say the 1st, 3rd, 5th, 7th etc.) then please fill out VERSION 1 of web-form 2:
https://spreadsheets.google.com/viewform?hl=en&formkey=dDNuWE9NdnBzeVpXeEdjcDVkVUxWM2c6MA#gid=0

If you were born on an EVEN day of the month (say the 2nd, 4th, 6th, 8th etc.) then please fill out VERSION 2 of web-form 2:
https://spreadsheets.google.com/viewform?hl=en&formkey=dExsMDhtWkRqbTN5b1J6aENYU1preGc6MA#gid=0

Important EXTRA instructions for participants:
Each of these two web-forms asks you to rate 121 different sci-fi works. While this may seem like a lot, it is a prerequisite for a certain type of data analysis I’d like to do, so please bear with me. The works are all English-language movies and TV series made in the period between 1950 and 2009. If one is at least a casual watcher of sci-fi, most of these titles should be quite familiar. You will need about 20-25 min to complete one web form.

I’d kindly ask you to complete one web form in a single “run” (do not take big pauses when you are in the middle of it). You can complete the other web form after a break (even say the next day), but please do not forget to fill out BOTH forms or your input will be of very little value.

Please only fill out the forms once, and please only fill out the VERSION appropriate for your birth date. The only reason for the different versions is so that certain biases in the way the data are gathered will average out. The versions otherwise gather the SAME data and ask the SAME questions.


I hope I am not asking for something completely out of order and that you can help me gather enough participants. I am currently teetering on the edge of 30 respondents, but this is nowhere near the number I’d need to get valid results.

With kind regards,
Jurij Dreo
Ljubljana, Slovenia

Entertaining and informative!

I’m really liking these CreatureCast videos Casey Dunn’s students put together — and there are two new ones, on moray eels and stomatopods. That’s communicating science!

Dunn also has a new book, Practical Computing for Biologists, available on Amazon right now. I’m going to have to get a copy; it might be a good idea to introduce more students to the basics, too.

There are people meaner than I am

I got a surprising amount of criticism of my review of the arsenic-eating bacteria paper — some people thought I was too harsh and too skeptical and too cynical. Haven’t those people ever sat through a grad school journal club? We’re trained to eviscerate even the best papers, and I actually had to restrain myself a lot.

Anyway, I’m a pussycat. You want thorough skepticism, read Rosie Redfield’s drawing and quartering of the paper, which rips into the hasty methodology of the work. Man, after that, the body ain’t even twitching any more, and they’re going to have to clean up the pieces with a wet-vac. It’s beautiful.