[Update below]
One of the biggest ethical dilemmas facing scientists is the tension between their desire for free and open access to information and their desire not to cause harm.
Back in 2003, I assigned this essay prompt to my class to help them grapple with some of the ethical issues that scientists face.
You are working in a biological research laboratory that is doing research on finding cures for liver cancer using genetically modified forms of known viruses. After years of hard work, you create a new form of a virus that seems to be able to attack and destroy cancer cells, thus reversing the progress of this deadly disease and promising to provide a long-term cure. Not only would this discovery result in saving thousands of lives, it is such an important breakthrough that you may even win a Nobel Prize for it. You are really excited about this and plan to publish your results. But while in the process of checking your results to make sure there is no mistake, you discover that it is also possible to slightly modify the virus you created so that it becomes a rapidly proliferating mutant that is extremely lethal and kills people quickly and painfully. In the wrong hands, this could become a terrifying biological weapon. At present, you are the only person who is aware of this more deadly possibility. Should you publish your discovery of the benign and beneficial form of the virus because of the potential benefits to so many people? Or should you suppress your findings altogether because the potential for harm is so great?
This situation seemed a little far-fetched even to me then, one of those artificial problems that teachers construct to make a point. But recently, news reports have emerged of an actual case like it.
Two teams of scientists studying the H5N1 ‘bird flu’ virus, while working on ways to understand and combat the disease, discovered a mutation that enabled it to transmit easily among mammals. There was concern that, if published, someone with malicious intent might create the mutant virus in order to start a pandemic. A scientific advisory board initially said that the papers should be published with significant redactions, with the full information given only to a few select researchers, but it has since reversed its stance and will allow publication. Ed Yong gives a good explanation of what is going on.
It turns out that there was a somewhat similar dilemma back in the 1970s in the new field of genetic engineering, when there were comparable concerns about the possibility of creating cancer genes. The scientists involved at the time held off on publishing their work until a conference was held in 1975 to set guidelines on genetic engineering research. Those guidelines are still in effect.
These kinds of ethical issues are going to become increasingly problematic as our technology improves and the line between what is ‘natural’ and what is human-made becomes blurred.
[Update: Marshall in the comments provides a link to a thoughtful article on this topic by Peter Palese, a researcher who works with deadly pathogens, and is thus directly affected by this debate.]
dianne says
Publish. If you can find it, so can someone else. Especially now that the important part (that there is something to be found) has been revealed. The more people the information is available to, the greater the chance of finding a treatment for the mutant virus, should someone make it.
Marshall says
Peter Palese from Mt. Sinai (where I’m a graduate student) had what I thought was a great rebuttal of exactly this problem--censoring papers because they might potentially enable production of harmful viruses.
You can read it here: http://www.nature.com/news/don-t-censor-life-saving-science-1.9777
mnb0 says
I completely agree -- publish while stressing the dangers involved. It’s the only remedy -- others will get to work on measures for danger control.
jamessweet says
I think the guiding principle in situations such as this should be to proceed cautiously and disclose gradually, with an eye toward eventual full and open publication. What that translates to specifically in each individual circumstance is really the tricky part, I guess. But the overarching principle, I think, is clear: publish, but not recklessly, and not necessarily immediately.
As others have pointed out, if one person discovered it, then odds are someone else will discover it (or something equivalent) eventually. The best opportunity for finding a defense is full publication, so that the maximum number of researchers can be working on the problem.
A further benefit, of course, is that if a “good guy” is the first to discover it, you can manage the disclosure. You can look to see whether risk can be minimized by first selectively sharing the knowledge with those in a position to defend against it most effectively, making preparations before you go ahead with full disclosure.
It’s analogous in many ways to computer security, except that in that case it’s usually certain that a gap can be closed. In any case, the responsible thing to do when you find a serious security vulnerability is to notify the people responsible for it, and then give them a fair amount of time to address it before going public. This model works great when those responsible take the vulnerability seriously.
But if they don’t, paradoxically perhaps the responsible thing to do is to go ahead and publish. If you don’t, then eventually some nefarious person will discover the same vulnerability, and exploit it without publishing it, and then nobody knows what hit them.
You cannot rely on security through obscurity, whether we’re talking about information technology or biotechnology. But at the same time, I do think it’s irresponsible to just immediately publish a vulnerability without starting with a more gradual and selective disclosure, giving those in a position to respond a fair opportunity to do so.
Mark says
I pretty much completely agree with Peter Palese’s conclusions, but I have a couple of points to add to the discussion which weren’t covered.
First, the method used to generate the “super-flu” virus was serial passaging, literally the oldest and simplest method of studying pathogen evolution ever. You give a ferret the virus; it dies, and you give the surviving virus to another ferret. Biologists have been doing this kind of thing for over a century, and it’s far from a secret -- any schmuck with a couple of ferrets and a toothpick can replicate the experiment. So there is literally nothing in the paper that will enable an act of terrorism, nor anything whose withholding would prevent terrorism.
Second, the odds are that nature has already replicated the experiment. If you can make it in a couple of generations of ferrets, nature can do the same thing. Only nature isn’t using a handful of ferrets; it’s got millions of birds, pigs, rodents, humans, and other mammals worldwide in which to try this experiment daily. That virus exists out there, on a farm or in a tree or in a hole in the ground. If it’s really the supervirus every mainstream news outlet on the planet wants us to believe it is, then it’s only a matter of time before we get a patient zero.
It’s an interesting hypothetical, but the reality of the influenza paper doesn’t even come close to the ethical dilemma everyone wants it to be.
Dianne says
nature has already replicated the experiment.
I remember a line from a lecture I went to a few years ago about bioterrorism and the money spent on counterterrorism. The speakers presented a bunch of data suggesting that the chances of a major attack by terrorists using engineered viruses or bacteria were quite low. Then they asked, “So is this money wasted?” Their answer was, “No. Because nature is the world’s biggest bioterrorist.”
And so it is. If we have to call it counterterrorism to get the government to cough up money to study the ways infectious diseases mutate, so be it.