A Computer Scientist Reads EvoPsych, Part 2

[Part 1]

the concept of “learning” within the Standard Social Science Model itself tacitly invokes unbounded rationality, in that learning is the tendency of the general-purpose, equipotential mind to grow—by an unspecified and undiscovered computational means—whatever functional information-processing abilities it needs to serve its purposes, given time and experience in the task environment.

Evolutionary psychologists depart from fitness teleologists, traditional economists (but not neuroeconomists), and blank-slate learning theorists by arguing that neither human engineers nor evolution can build a computational device that exhibits these forms of unbounded rationality, because such architectures are impossible, even in principle (for arguments, see Cosmides & Tooby, 1987; Symons 1989, 1992; Tooby & Cosmides, 1990a, 1992).[1]

Yeah, these people don’t know much about computer science.

You can divide the field of “artificial” intelligence into two basic approaches. The top-down approach outlines modular code routines like “recognize faces,” then breaks those down into sub-tasks like “look for eyes” and “find mouths.” By starting at a high level and dividing the work into neat, tidy sub-programs, you can chain them together and create a greater whole.

It’s never worked all that well, at least for real-life problems. Take Cyc, the best example I can think of. It stores basic facts about the world, like “water is wet” or “rain is water,” and uses a simple set of rules to query those facts (“is rain wet?”). What it can’t do is make guesses (“are clouds wet?”), discover new facts on its own, or handle anything but simple text. Thirty years and millions of dollars haven’t made a dent in those problems.
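
To make that concrete, here’s a toy sketch of how a Cyc-style fact base works. This is my own illustration in Python, not Cyc’s actual CycL language, and every name in it is invented:

```python
# A toy forward-chaining fact base, in the spirit of Cyc (not its actual
# CycL syntax; all names here are invented for illustration).
facts = {("water", "is", "wet"), ("rain", "is", "water")}

def query(subject, relation, obj, depth=3):
    """Return True if the triple is a known fact, or can be reached by
    chaining 'is' links (rain is water, water is wet -> rain is wet)."""
    if (subject, relation, obj) in facts:
        return True
    if depth == 0:
        return False
    # Chain through intermediate categories: rain -> water -> wet.
    return any(query(mid, relation, obj, depth - 1)
               for (s, r, mid) in facts
               if s == subject and r == "is")

print(query("rain", "is", "wet"))    # True: chained through "water"
print(query("clouds", "is", "wet"))  # False: no fact links clouds to water
```

Note what’s missing: nothing here guesses, and nothing discovers new facts. If a link wasn’t typed in by hand, the answer is simply “no.”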

Meanwhile, the graphics card manufacturer NVIDIA is betting the farm on something called “deep learning,” one of several “bottom-up” approaches. You present the algorithm with an image (or sound file, or other object; the number of dimensions is easily changed), and it maps it to a grid of cells. You toss a slightly smaller grid of cells on top of it, and each new cell calculates a weighted sum of the nearby values in the previous grid, with weights that start off random. Repeat this several times and you wind up with a single cell at the end. Assign this cell to an output, say “person,” then rewind all the way back to the start. Wash, rinse, and repeat until you have one such output cell for every possible answer. Each of these output cells carries a value, so that “person” cell might give the image 0.7 “person”s. Having already catalogued what’s in the image, you know there’s actually 1.0 “person” there, and so you propagate that error back down the chain: weights which voted pro-person are increased, while the anti-person ones are decreased. Do this right down to the bottom, for every input cell, then repeat the whole process on a new image.
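
If you’d rather see that training loop as code, here’s a minimal sketch in plain numpy: a tiny fully-connected net rather than a real convolutional one, with the layer sizes, learning rate, and data all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Grids of cells": 64 input pixels -> 16 hidden cells -> 1 "person" cell.
w1 = rng.normal(scale=0.1, size=(64, 16))  # weights start off random
w2 = rng.normal(scale=0.1, size=(16, 1))

image = rng.random(64)   # stand-in for a flattened 8x8 image
label = 1.0              # we've catalogued the image: 1.0 "person"

for step in range(1000):
    # Forward pass: each layer is a weighted sum of the layer below.
    hidden = sigmoid(image @ w1)
    output = sigmoid(hidden @ w2)          # e.g. 0.7 "person"s

    # Backward pass: propagate the error back down the chain, nudging
    # pro-person weights up and anti-person weights down.
    d_out = (output - label) * output * (1 - output)
    d_hid = (d_out @ w2.T) * hidden * (1 - hidden)
    w2 -= 0.5 * np.outer(hidden, d_out)
    w1 -= 0.5 * np.outer(image, d_hid)

print(float(output[0]))  # creeps toward 1.0 as training repeats
```

Real deep-learning stacks just repeat this weighted-sum-and-feedback pattern across many more layers, which is exactly the kind of massively parallel arithmetic GPUs are good at.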

It’s loosely patterned after how our own neurons are laid out. Biology is a bit more liberal with how it connects, but this structure has the virtue of being easy to calculate and massively parallel, quite convenient for a company that manufactures processors specializing in massively parallel computations. NVIDIA’s farm-betting comes from the fact that it’s wildly successful; all of the best image recognition algorithms follow the deep-learning pattern, and their success rates are not only impressive but also resemble our own.[2]

Heard of the AI that could play Atari games? Emphasis mine:

Our [Deep action-value Network or DQN] method outperforms the best existing reinforcement learning methods on 43 games without incorporating any of the additional prior knowledge about Atari 2600 games used by other approaches … . Furthermore, our DQN agent performed at a level that was comparable to that of a professional human games tester across the set of 49 games, achieving more than 75% of the human score on more than half of the games […]

Indeed, in certain games DQN is able to discover a relatively long-term strategy (for example, Breakout: the agent learns the optimal strategy, which is to first dig a tunnel around the side of the wall allowing the ball to be sent around the back to destroy a large number of blocks; …). […]

In this work, we demonstrate that a single architecture can successfully learn control policies in a range of different environments with only very minimal prior knowledge, receiving only the pixels and the game score as inputs, and using the same algorithm, network architecture and hyperparameters on each game, privy only to the inputs a human player would have.[3]

This deep learning network has no idea what a video game is, nor is it permitted to peek at the innards of the game itself, yet it can not only learn to play these games at a human level, it can also develop non-trivial solutions to them. You can’t get more “blank slate” than that.
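
For the curious, the core idea beneath DQN is the Q-learning update, which works even without the deep network. Here’s a tabular sketch on an invented toy “game” (walk right to reach the goal); DeepMind’s actual system wraps this same update in the deep network described above:

```python
import random
from collections import defaultdict

# Tabular Q-learning on a toy "game": states 0..5, walk right to win.
# Like DQN, the learner sees only the state (its "screen") and the score.
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration
q = defaultdict(float)                  # q[(state, action)] -> expected score

def pick(state):
    if random.random() < epsilon or q[(state, -1)] == q[(state, 1)]:
        return random.choice((-1, 1))                 # explore, or break ties
    return max((-1, 1), key=lambda a: q[(state, a)])  # exploit what's learned

for episode in range(500):
    state = 0
    while state != 5:
        action = pick(state)
        nxt = max(0, state + action)
        reward = 1.0 if nxt == 5 else 0.0   # the game score, nothing more
        # Nudge the estimate toward reward plus the discounted value of
        # the best follow-up move: the heart of Q-learning (and DQN).
        best_next = max(q[(nxt, -1)], q[(nxt, 1)])
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

print(q[(0, 1)] > q[(0, -1)])   # True: it learned "move right" from score alone
```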

This basic pattern has repeated multiple times over the decades. Classic neural nets aren’t as zippy as deep learning, the new kid on the “bottom-up” block, yet they too have had great success where the modular top-down approach has failed miserably. I haven’t worked with either technology, but I’ve worked with something that’s related: genetic algorithms. Represent your solutions as a sort of genome, come up with a fitness metric for them, then mutate or randomly construct those genomes, keeping the fittest ones in the pool until you’ve tried every possibility or you get bored. Two separate runs might converge to the same solution, or they might not. A lot depends on the “fitness landscape” they occupy, which you can visualize as a 3D terrain map with height representing how “fit” something is.
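
Here’s a bare-bones sketch of that loop, with an invented bit-string genome and a toy fitness metric:

```python
import random

# A bare-bones genetic algorithm: genomes are bit strings, and the toy
# fitness metric just counts 1s (a single-peak landscape).
GENOME_LEN, POP_SIZE = 20, 30

def fitness(genome):
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # Keep the fittest half of the pool...
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # ...and refill it with mutated copies of the survivors.
    children = []
    for parent in survivors:
        child = parent[:]
        spot = random.randrange(GENOME_LEN)
        child[spot] ^= 1                  # flip one random bit
        children.append(child)
    population = survivors + children

print(fitness(population[0]))  # converges on 20, the lone fitness peak
```

With a more rugged fitness metric, two runs of this same loop can settle on entirely different peaks.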

[Image: a visualization of three “evolutionary fitness landscapes,” ranging from simple to complex to SUPER complex.]

That landscape has probably got more than three dimensions, but those aren’t as easy to visualize, and they behave very similarly to the 3D case. The terrain might be a Mount Fuji, with a single solution at the top of a lone fitness peak; a Himalayas, with many peak solutions scattered about but a single tallest one standing above the rest; or foothills, where solutions are aplenty but the best one is tough to find.
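
If you want to generate toy versions of those three landscapes yourself, a matplotlib sketch like this works; the surface formulas here are invented purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy versions of the three landscapes: one lone peak, a rugged range
# with a single tallest peak, and foothills full of local optima.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
landscapes = {
    "Mount Fuji": np.exp(-(x**2 + y**2)),
    "Himalayas": np.exp(-(x**2 + y**2)) + 0.6 * np.sin(3*x) * np.sin(3*y),
    "Foothills": 0.3 * np.sin(5*x) * np.sin(5*y) + 0.1 * x,
}

fig = plt.figure(figsize=(12, 4))
for i, (name, z) in enumerate(landscapes.items(), start=1):
    ax = fig.add_subplot(1, 3, i, projection="3d")
    ax.plot_surface(x, y, z, cmap="viridis")
    ax.set_title(name)
plt.show()
```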

All of these take the “bottom-up” approach, the opposite of the “top-down” one, and work up from very small components towards a high-level goal. The path there is rarely known in advance, so the system “feels” its way along, whether through gradient feedback or evolutionary algorithms.

That path may not go the way you expect, however. Take the case of a researcher, Dr. Adrian Thompson, who used an evolutionary algorithm to find the smallest circuit on a reprogrammable chip (an FPGA) that could sense the difference between two tones.

Finally, after just over 4,000 generations, the test system settled upon the best program. When Dr. Thompson played the 1kHz tone, the microchip unfailingly reacted by decreasing its power output to zero volts. When he played the 10kHz tone, the output jumped up to five volts. He pushed the chip even farther by requiring it to react to vocal “stop” and “go” commands, a task it met with a few hundred more generations of evolution. As predicted, the principle of natural selection could successfully produce specialized circuits using a fraction of the resources a human would have required. And no one had the foggiest notion how it worked.

Dr. Thompson peered inside his perfect offspring to gain insight into its methods, but what he found inside was baffling. The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest— with no pathways that would allow them to influence the output— yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones. Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.

It seems that evolution had not merely selected the best code for the task, it had also advocated those programs which took advantage of the electromagnetic quirks of that specific microchip environment. The five separate logic cells were clearly crucial to the chip’s operation, but they were interacting with the main circuitry through some unorthodox method— most likely via the subtle magnetic fields that are created when electrons flow through circuitry, an effect known as magnetic flux. There was also evidence that the circuit was not relying solely on the transistors’ absolute ON and OFF positions like a typical chip; it was capitalizing upon analogue shades of gray along with the digital black and white.[4]

Evolutionary approaches are very simple and require no understanding of or insight into the problem you’re solving, but they usually require ridiculous amounts of computation or training merely to keep pace with the top-down “modular” approach. The fitness function may lead to a solution much too complicated for you to understand, or much too fragile to operate anywhere but where it was generated. But the bottom-up approach may be your only choice for certain problems.

The moral of the story: the ability to do complex calculation can be built up from a blank slate, in principle and in practice. When we follow the bottom-up approach we tend to get results that mirror biology more closely than when we work top-down and modularize, though this is less insightful than it first appears: nearly all bottom-up approaches take direct inspiration from biology, whereas top-down approaches owe more to Plato than Aristotle.

Biology prefers the blank slate.

[Part 3]


[1] Tooby, John, and Leda Cosmides. “Conceptual Foundations of Evolutionary Psychology.” The Handbook of Evolutionary Psychology (2005): 5-67.

[2] Kheradpisheh, Saeed Reza, et al. “Deep Networks Resemble Human Feed-forward Vision in Invariant Object Recognition.” arXiv preprint arXiv:1508.03929 (2015).

[3] Mnih, Volodymyr, et al. “Human-level control through deep reinforcement learning.” Nature 518.7540 (2015): 529-533.

[4] Bellows, Alan. “On the Origin of Circuits.” Damn Interesting. Accessed May 4, 2016.

A Computer Scientist Reads EvoPsych, Part 1

Computer Science is weird. Most of the papers published in my field look like this:

We describe the architecture of a novel system for precomputing sparse directional occlusion caches. These caches are used for accelerating a fast cinematic lighting pipeline that works in the spherical harmonics domain. The system was used as a primary lighting technology in the movie Avatar, and is able to efficiently handle massive scenes of unprecedented complexity through the use of a flexible, stream-based geometry processing architecture, a novel out-of-core algorithm for creating efficient ray tracing acceleration structures, and a novel out-of-core GPU ray tracing algorithm for the computation of directional occlusion and spherical integrals at arbitrary points.[1]

A speed improvement of two orders of magnitude is pretty sweet, but this paper really isn’t about computers per se; it’s about applying existing concepts in computer graphics in a novel combination to solve a practical problem. Most papers are all about the application of computers, and not computing itself. You can dig up examples of the latter, like if you try searching for concurrency theory,[2] but even then you’ll run across a lot of applied work, like articles on sorting algorithms designed for graphics cards.[3]

In sum, computer scientists spend most of their time working in other people’s fields, solving other people’s problems. So you can imagine my joy when I stumbled on people in other fields invoking computer science.

Because the evolved function of a psychological mechanism is computational—to regulate behavior and the body adaptively in response to informational inputs—such a model consists of a description of the functional circuit logic or information processing architecture of a mechanism (Cosmides & Tooby, 1987; Tooby & Cosmides, 1992). Eventually, these models should include the neural, developmental, and genetic bases of these mechanisms, and encompass the designs of other species as well.[4]

Hot diggity! How well does that non-computer-scientist understand the field, though? Let’s put my degree to work.

The second building block of evolutionary psychology was the rise of the computational sciences and the recognition of the true character of mental phenomena. Boole (1848) and Frege (1879) formalized logic in such a way that it became possible to see how logical operations could be carried out mechanically, automatically, and hence through purely physical causation, without the need for an animate interpretive intelligence to carry out the steps. This raised the irresistible theoretical possibility that not only logic but other mental phenomena such as goals and learning also consisted of formal relationships embodied nonvitalistically in physical processes (Wiener, 1948). With the rise of information theory, the development of the first computers, and advances in neuroscience, it became widely understood that mental events consisted of transformations of structured informational relationships embodied as aspects of organized physical systems in the brain. This spreading appreciation constituted the cognitive revolution. The mental world was no longer a mysterious, indefinable realm, but locatable in the physical world in terms of precisely describable, highly organized causal relations.[4]

Yes! I’m right with you here. One of the more remarkable findings of computer science is that every computation or algorithm can be executed on a Turing machine (the Church-Turing thesis). That includes all of Quantum Field Theory, even those Quant-y bits. While QFT isn’t a complete theory, we’re extremely confident in the subset we need to simulate neural activity. Those simulations have long since been run and matched against real-world data; the current problem is scaling up from faking a million neurons at a time to faking 100 billion, about as many as you have locked in your skull.
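
To give a flavour of those simulations, here’s roughly the simplest neuron model there is, the textbook leaky integrate-and-fire; the parameter values are typical illustrative numbers, not fitted to any real data:

```python
# A leaky integrate-and-fire neuron: about the simplest neural simulation
# there is. Parameters are typical textbook values, for illustration only.
dt, tau = 0.1, 10.0                    # timestep and membrane constant, ms
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # voltages, mV
v, spikes = v_rest, []

for step in range(1000):               # 100 ms of simulated time
    current = 20.0                     # constant input drive
    # Membrane voltage leaks toward rest, pushed up by the input current.
    v += dt * (-(v - v_rest) + current) / tau
    if v >= v_thresh:                  # threshold crossed: the neuron fires
        spikes.append(step * dt)
        v = v_reset                    # and resets

print(f"{len(spikes)} spikes in 100 ms")
```

Scaling from one of these to 100 billion is an engineering problem, not a conceptual one.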

Our brains can be perfectly simulated by a computational device, and our brains’ ability to do math shows they are computational. I can quibble a bit over the wording (“precisely describable” suggests we’ve already faked those 100 billion), but we’re off to a great start here.

After all, if the human mind consists primarily of a general capacity to learn, then the particulars of the ancestral hunter-gatherer world and our prehuman history as Miocene apes left no interesting imprint on our design. In contrast, if our minds are collections of mechanisms designed to solve the adaptive problems posed by the ancestral world, then hunter-gatherer studies and primatology become indispensable sources of knowledge about modern human nature.[4]

Wait, what happened to the whole “our brains are computers” thing? Look, here’s a diagram of a Turing machine.

An annotated diagram of a Turing machine. Copyright HJ Hornbeck 2016, CC-BY-SA 4.0.

A read/write head sits somewhere along an infinite ribbon of tape. It reads what’s under the head, writes back a value to that location, and moves either left or right, all based on the value it read and its current internal state. How does it know what to do? Sometimes the rules are hard-wired into the machine, but more commonly it reads instructions right off the tape (a “universal” Turing machine). These machines don’t ship with much of anything, just the bare minimum necessary to do every possible computation.
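
Here’s how little it takes to build one: a minimal simulator in Python, where the rule table is a classic two-state “busy beaver” chosen just to have something concrete to run.

```python
# A minimal Turing machine simulator. The rule table is a classic 2-state
# "busy beaver," picked just to have something concrete to run.
# rules[(state, symbol_read)] = (symbol_to_write, move, next_state)
rules = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "HALT"),
}

tape, head, state = {}, 0, "A"        # a dict fakes the infinite tape
while state != "HALT":
    symbol = tape.get(head, 0)        # read what's under the head
    write, move, state = rules[(state, symbol)]
    tape[head] = write                # write back a value
    head += move                      # move left or right

print(sorted(tape.items()))  # four 1s: [(-2, 1), (-1, 1), (0, 1), (1, 1)]
```

Here the rules happen to be “hard-wired”; a universal machine would read them off the tape like any other data.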

This carries over into physical computers as well; when the CPU of your computer boots up, it does the following:

  1. Read the instruction at memory location 4,294,967,280 (0xFFFFFFF0, the traditional x86 “reset vector”).
  2. Execute it.

Your CPU does have “programs” of a sort, instructions such as ADD (addition) or MULT (multiply), but removing them doesn’t remove its ability to compute. All of those extras can be duplicated by grouping together simpler operations; they’re only there to make programmers’ lives easier.
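
To make that concrete, here’s multiplication built from nothing but addition and bit-shifts: a sketch of the idea, not how any real CPU implements it.

```python
def mult(a, b):
    """Multiply non-negative integers using only ADD and shifts,
    the way a CPU without a MULT instruction could: shift-and-add,
    one bit of b at a time."""
    total = 0
    while b:
        if b & 1:          # low bit of b set: add the current a
            total += a
        a += a             # doubling is just ADD a, a
        b >>= 1            # move to the next bit
    return total

print(mult(6, 7))   # 42, no MULT instruction required
```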

There’s no programmer for the human brain, though. Despite what The Matrix told you, no one can fiddle around with your microcode and add new programs. There is no need for helper instructions. So if human brains are like computers, and computers are blank slates for the most part, we have a decent reason to think humans are blank slates too, infinitely flexible and fungible.

[Part 2]


[1] Pantaleoni, Jacopo, et al. “PantaRay: fast ray-traced occlusion caching of massive scenes.” ACM Transactions on Graphics (TOG). Vol. 29. No. 4. ACM, 2010.

[2] Roscoe, Bill. “The theory and practice of concurrency.” (1998).

[3] Ye, Xiaochun, et al. “High performance comparison-based sorting algorithm on many-core GPUs.” Parallel & Distributed Processing (IPDPS), 2010 IEEE International Symposium on. IEEE, 2010.

[4] Tooby, John, and Leda Cosmides. “Conceptual Foundations of Evolutionary Psychology.” The Handbook of Evolutionary Psychology (2005): 5-67.

EvoPsych, the PoMo-iest of them all

One last thing.

Feminism comes under fire for being “post-modernist,” a sort of loosey-goosey subject which allows for all sorts of contradictions and disconnects from reality. Evolutionary Psychology, in contrast, is held up as being on much firmer ground. What is EvoPsych, exactly? Let’s ask David Buss, the most-cited researcher in the field:

  1. Manifest behavior depends on underlying psychological mechanisms, information processing devices housed in the brain, in conjunction with the external and internal inputs — social, cultural, ecological, physiological — that interact with them to produce manifest behavior;
  2. Evolution by selection is the only known causal process capable of creating such complex organic mechanisms (adaptations);
  3. Evolved psychological mechanisms are often functionally specialized to solve adaptive problems that recurred for humans over deep evolutionary time;
  4. Selection designed the information processing of many evolved psychological mechanisms to be adaptively influenced by specific classes of information from the environment;
  5. Human psychology consists of a large number of functionally specialized evolved mechanisms, each sensitive to particular forms of contextual input, that get combined, coordinated, and integrated with each other and with external and internal variables to produce manifest behavior tailored to solving an array of adaptive problems.

This is already off to a bad start, as Myers has pointed out in another context.

complex traits are the product of selection? Come on, John [Wilkins], you know better than that. Even the creationists get this one right when they argue that there may not be adaptive paths that take you step by step to complex innovations, especially not paths where fitness doesn’t increase incrementally at each step. Their problem is that they don’t understand any other mechanisms at all well (and they don’t understand selection that well, either), so they think it’s an evolution-stopper — but you should know better.

But I’m not really here to push back on that line. It’s these bits further on that intrigue me:

These basic tenets render it necessary to distinguish between “evolutionary psychology” as a meta-theory for psychological science and “specific evolutionary hypotheses” about particular phenomena, such as conceptual proposals about aggression, resource control, or particular strategies of human mating. Just as the bulk of scientific research in the field of non-human behavioral ecology tests specific hypotheses about evolved mechanisms in animals, the bulk of scientific research in evolutionary psychology tests specific hypotheses about evolved psychological mechanisms in humans, hypotheses about byproducts of adaptations, and occasionally hypotheses about noise (e.g., mutations). […]

Evolutionary psychology is a meta-theoretical paradigm that provides a synthesis of modern principles of evolutionary biology with modern understandings of psychological mechanisms as information processing devices (Buss 1995b; Tooby and Cosmides 1992). Within this meta-theoretical paradigm, there are at least four distinct levels of analysis — general evolutionary theory, middle-level evolutionary theories, specific evolutionary hypotheses, and specific predictions derived from those hypotheses (Buss 1995b). In short, there is no such thing as “evolutionary psychology theory,” nor is there “the” evolutionary psychological hypothesis about any particular phenomenon.

Wait, EvoPsych is a “meta-theoretical paradigm”? That would place it above theories like Quantum Chromodynamics, Plate Tectonics, Evolution, Maslow’s Hierarchy of Needs, and Logotherapy. Buss appears to consider EvoPsych more like Physics or Psychology, categories that we’ve drawn around certain sets of theories. But “Physics” the category makes no claims about how the world works. You can’t derive General Relativity from “Physics,” nor photons from “the study of how matter and energy behave.” Categories are just labels. The fact that Buss could list five assertions of EvoPsych means it is not a label after all, but a theory.

Buss is speaking in word salad! But he’s a major figure in EvoPsych, oft-cited and with decades of experience.

I’ve already explained how Evolutionary Psychology is based on a deep misunderstanding of evolution, but it really has nothing to do with psychology, either: where does it reference contemporary psychology? Scan over Buss’ summary, and you won’t see any mention of Behaviorism, Kohlberg’s Moral Development, or Attachment Theory. EvoPsych was not created by psychologists, nor does it draw from their theories; instead, it was created by biologists like Robert Trivers or E.O. Wilson, working with simplified mathematical models and personal observation. It doesn’t consider what people are thinking, and despite claiming otherwise, Buss goes on to show his true colours:

Three articles in this special issue attempt to provide empirical evidence, some new and some extracted from the existing empirical literature, pertaining to one of the nine hypotheses of Sexual Strategies Theory — that gender differences in minimal levels of obligate parental investment should lead short-term mating to represent a larger component of men’s than women’s sexual strategies. This hypothesis derives straightforwardly from Trivers’s (1972) theory of parental investment, which proposed that the sex that invested less in offspring (typically, but not always males), tends to evolve adaptations to be more competitive with members of their own sex for sexual access to the more valuable members of the opposite sex.

So EvoPsych is a biology theory that doesn’t understand basic biology, and a psychological theory developed independent of psychology.

The lack of coherency bleeds through the entire project: an EvoPsych textbook is a parade of tiny “specific evolutionary hypotheses,” disconnected from one another. This makes them easily discarded and interchanged, like chess pawns protecting the king. David Buss once said aggression in women did not exist and wasn’t worthy of study, yet two decades on he was studying it, arguing the sexes were equally aggressive but differed in the kinds of aggression they showed. Buss will flatly assert that hunting requires mental rotation skill, gathering requires spatial memory skill, and therefore the sex differences in those skills are due to sexual selection over time. Consider this theory instead:

It’s probable humans typically hunted small game, since setting up snares is easy and cheap, as is killing a pinned animal. Effectively capturing a lot of food required not only setting out many traps, though, but remembering where they were.

In contrast, plant food tends to stay in one place, and over time well-worn foot paths would develop between food spots. This made navigation easy, so long as you could memorize and rotate angles effectively to remember which path you came from. As plants tend to bloom seasonally, you’d also need to keep track of time. Star calendars and constellations were the obvious choice, but in order to read them you had to be able to cope with rotated shapes.

Based on the observed sex differences, and assuming they were the result of sexual selection, women must have been the hunters in prehistoric societies, while men were relegated to the gathering.

The conclusion is completely at odds with what most EvoPsych researchers propose, yet it uses their exact same methods. Merely by shifting the focus around, I can easily come up with theories that contradict EvoPsych claims. As EvoPsych is a “meta-theory,” though, falsifying every single “specific evolutionary hypothesis” would fail to falsify it. EvoPsych is thus unfalsifiable, even though it makes empirically testable assertions about human evolution!

Feminism, in contrast, is much more like Physics. It too is a category, defined as the study and removal of sexism.

But what constitutes sexism? Early theorists proposed Patriarchy theory: society is structured to disproportionately favor men. Starting in the 1970s, though, a number of people began arguing for a role-based or performative view: society creates gender roles that we’re expected to conform to, whatever our sex, gender, or sexuality. This might seem to contradict the prior view, as men can now be the victims of sexism, but it’s no worse than what you see in harder sciences. Aristotle thought everything was attracted to the centre of the universe; Newton thought objects had mass, which attracted other objects with mass through an all-pervasive force; Einstein thought everything traveled in straight lines, and it’s just that mass bends space and gives the appearance of a force. All three are radically different in detail, but they all give the same general prediction: things fall to Earth. Likewise, Patriarchy and role-based theories differ in detail but agree in general. This makes Feminism-the-category coherent, as there’s substantial overlap between all the theories it contains. There’s something tangible there, which no amount of theory-churn removes.

EvoPsych is a theory masquerading as a “meta-theory,” making specific assertions about the world yet denying it is falsifiable. Practitioners propose an endless stream of “specific evolutionary hypotheses,” which are only coherent with each other because they’re heavily influenced by the cultural experience of the people making them. It is far more post-modern than feminism, but because it goes easy on the jargon it doesn’t appear that way at first blush.

[HJH 2015/03/25: Added the following]

Hmmm, having mulled this over for a day, I think those last few paragraphs were grasping at something I couldn’t quite put my finger on at the time. I think I have it securely pinned now.

Simple question: can you describe performative theory without referring to feminism? Sure, I’ve done it already: “society creates gender roles that we’re expected to conform to, whatever our sex, gender, or sexuality.” Categories are simplifications; if we were to recursively define “the study and removal of sexism” to ever-greater degrees, at some level we’d start describing performative theory.

Now, can you describe Sexual Strategies Theory without referring to EvoPsych’s five core tenets? Nope, because it depends on mind modules, hyper-adaptationalism, and the rest of Buss’ list to make any sense. EvoPsych isn’t a meta-theory to SST, it’s a sub-theory, a lemma. It’s not a simplification or over-arching category, because even if we clarified all the core parts to an arbitrary degree, SST wouldn’t pop out.

Even more confusingly, Parental Investment theory is neither a category containing EvoPsych (as there are no mind modules buried in there) nor a sub-theory of EvoPsych (because it doesn’t depend on mind modules to make sense). It’s not part of the paradigm at all, even though it helped spawn the field via a paper by Robert Trivers and is frequently cited by researchers.

Buss could make a better case for SST being a “meta-theoretical paradigm,” yet he thinks it’s a part of EvoPsych. It’s more evidence the guy has no clue what he’s saying.

It’s About Ethics in Biomedical Research

I’m a bit surprised this didn’t get more play. From what I hear, Pinker has some beef with bioethics.

Biomedical research, then, promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence.

Get out of the way.

A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.” Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future.

This path leads to very dark places. I’ll quote a summary I wrote of Blumenthal (2004).[1]

Booker T. Washington had an ambitious plan around the turn of the century: rapidly advancing the health and welfare of African Americans around Tuskegee, Alabama. His Tuskegee Institute revived agriculture in the South, built schools and business alliances, created a self-sustaining architectural program, and developed a Black-owned-and-operated hospital.

It also took a keen interest in health issues, and after World War I it faced a major crisis in syphilis. Soldiers returning home led to a dramatic spike in cases; as of 1926, as many as 36% of the population of surrounding Macon County was infected. The best cure at the time was a six-week regimen of toxic drugs with a depressing 30% success rate. Something had to be done.

A short study of six to eight months was proposed, the idea being to track the progression of the disease in African Americans and learn more about it, then administer treatment. It got the full approval of the government, health officials, and local leaders in the African-American community. Substantial outreach was done to bring in patients, explain what the disease was, and even give them free rides to reach the clinic.

But then… circumstances changed. The newly appointed leader of the project, Dr. Raymond Vonderlehr, became fascinated with how syphilis changed people’s bodies. The Great Depression hit, and as of 1933 there wasn’t a lot of money available for treatment. So Vonderlehr decided to make the study longer and provide less than the recommended treatment. He also faced the problem of getting subjects to agree to the toxic treatments and painful diagnostic tools, but that was easily solved: stretch the truth, just a bit. The spinal taps used to diagnose syphilis’s spread to the nervous system became “free special treatment,” even though no actual treatment was done. Disaster struck when other scientists discovered the first effective cure, penicillin; elaborate “procedures” were developed to keep the patients from getting their hands on the drug, even if other infectious diseases threatened their lives.

And the entire time, the project had the full support of the government, and published their results openly.

After the entire incident exploded in the press, a commission of experts was formed to advise the US government on bioethical legislation. The result was the Belmont Report, and one of the three core principles it rested on was

Justice. — Who ought to receive the benefits of research and bear its burdens? This is a question of justice, in the sense of “fairness in distribution” or “what is deserved.” […]

Questions of justice have long been associated with social practices such as punishment, taxation and political representation. Until recently these questions have not generally been associated with scientific research. However, they are foreshadowed even in the earliest reflections on the ethics of research involving human subjects. For example, during the 19th and early 20th centuries the burdens of serving as research subjects fell largely upon poor ward patients, while the benefits of improved medical care flowed primarily to private patients. […]

Against this historical background, it can be seen how conceptions of justice are relevant to research involving human subjects. For example, the selection of research subjects needs to be scrutinized in order to determine whether some classes (e.g., welfare patients, particular racial and ethnic minorities, or persons confined to institutions) are being systematically selected simply because of their easy availability, their compromised position, or their manipulability, rather than for reasons directly related to the problem being studied. Finally, whenever research supported by public funds leads to the development of therapeutic devices and procedures, justice demands both that these not provide advantages only to those who can afford them and that such research should not unduly involve persons from groups unlikely to be among the beneficiaries of subsequent applications of the research.

Ignoring social justice concerns in biomedical research led to things like the Tuskegee experiment. The scientific establishment has since tried to correct that by making it a critical part. Pinker would be wise to study the history a bit more carefully, here.

But don’t just take my word for it. Others have also called him out, like Matthew Beard

Let’s put aside the fact that one paragraph later Pinker casts doubt on our ability to make accurate predictions at all. Because there is an interesting question here.

Let’s assume that hand-wringing ethicists slow progress that cures diseases. As a result, animals aren’t subjected to painful experiments, patients’ autonomy is respected, and “justice” is upheld. At the same time, lots of people died who could otherwise have been saved. Surely, Pinker suggests, this is unethical.

Only under a certain framework, known as utilitarianism, in which the right action is the one that does the most good. And even then, only under certain conditions. For instance, although some research might have saved more lives without ethical constraints, Pinker wants all oversight removed.

Thus, even bad research will operate without ethical restraint. For each pioneering piece of research that saves lives there will be much more insignificant research. And each of these insignificant items will also entail ethical breaches. This makes Pinker’s utilitarian matrix much harder to compute.

… and Wesley J. Smith.

These general principles [that Pinker excludes] are essential to maintaining a moral medical research sector! Indeed, without them, we would easily slouch into a crass utilitarianism that would blatantly treat some human beings as objects instead of subjects.

Bioethics is actually rife with such proposals. For example, one research paper published in a respected journal proposed using unconscious patients as “living cadavers” to test the safety of pig-to-human organ xenotransplantation.

The best defences of Pinker I’ve seen ignored the bit where he dismissed “social justice” and pretended he was discussing less basic things. It doesn’t reflect well on Pinker.


[1] Blumenthal, Daniel S., and Ralph J. DiClemente, eds. Community-Based Health Research: Issues and Methods. Springer Publishing Company, 2004. pp. 48-53.