Ray Kurzweil does not understand the brain

There he goes again, making up nonsense and ridiculous claims that have no relationship to reality. Ray Kurzweil must be able to spin out a good line of bafflegab, because he seems to have the tech media convinced that he’s a genius, when he’s actually just another Deepak Chopra for the computer science cognoscenti.

His latest claim is that we’ll be able to reverse engineer the human brain within a decade. By reverse engineer, he means that we’ll be able to write software that simulates all the functions of the human brain. He’s not just speculating optimistically, though: he’s building his case on such awfully bad logic that I’m surprised anyone still pays attention to that kook.

Sejnowski says he agrees with Kurzweil’s assessment that about a million lines of code may be enough to simulate the human brain.

Here’s how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.

About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
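For the record, here is that chain of arithmetic spelled out. Note that the 16x compression ratio and the ~25 bytes per line of code are back-calculated from Kurzweil's stated endpoints (800 MB to 50 MB, 25 MB to a million lines); they aren't numbers he gives directly.

```python
# Kurzweil's arithmetic, spelled out. The 16x compression ratio and the
# ~25 bytes per line of code are back-calculated from his stated figures,
# not numbers he gives directly.
base_pairs = 3_000_000_000
bits = 2 * base_pairs               # 4 possible nucleotides = 2 bits per base
raw_bytes = bits // 8               # 750,000,000 -- his "about 800 million bytes"
compressed = raw_bytes // 16        # lossless compression -> his "about 50 million"
brain_bytes = compressed // 2       # "about half of that is the brain" -> ~25 MB
lines_of_code = brain_bytes // 25   # at ~25 bytes per line -> "a million lines"
```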

I’m very disappointed in Terence Sejnowski for going along with that nonsense.

See that claim up there, that “the design of the brain is in the genome”? That’s his fundamental premise, and it is utterly false. Kurzweil knows nothing about how the brain works. Its design is not encoded in the genome: what’s in the genome is a collection of molecular tools wrapped up in bits of conditional logic, the regulatory part of the genome, that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell:cell interactions, of which we understand only a tiny fraction. The end result is a brain that is much, much more than simply the sum of the nucleotides that encode a few thousand proteins. He would have to simulate all of development from his codebase in order to generate a brain simulator, and he isn’t even aware of the magnitude of that problem.

We cannot derive the brain from the protein sequences underlying it; the sequences alone are insufficient, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently. We haven’t even solved the sequence-to-protein-folding problem, which is an essential first step to executing Kurzweil’s clueless algorithm. And we have absolutely no way to calculate in principle all the possible interactions and functions of a single protein with the tens of thousands of other proteins in the cell!

Let me give you a few specific examples of just how wrong Kurzweil’s calculations are. Here are a few proteins that I plucked at random from the NIH database; all play a role in the human brain.

First up is RHEB (Ras Homolog Enriched in Brain). It’s a small protein, only 184 amino acids, which Kurzweil pretends can be reduced to about 12 bytes of code in his simulation. Here’s the short description.

MTOR (FRAP1; 601231) integrates protein translation with cellular nutrient status and growth signals through its participation in 2 biochemically and functionally distinct protein complexes, MTORC1 and MTORC2. MTORC1 is sensitive to rapamycin and signals downstream to activate protein translation, whereas MTORC2 is resistant to rapamycin and signals upstream to activate AKT (see 164730). The GTPase RHEB is a proximal activator of MTORC1 and translation initiation. It has the opposite effect on MTORC2, producing inhibition of the upstream AKT pathway (Mavrakis et al., 2008).

Got that? You can’t understand RHEB until you understand how it interacts with three other proteins, and how it fits into a complex regulatory pathway. Is that trivially deducible from the structure of the protein? No. It had to be worked out operationally, by doing experiments to modulate one protein and measure what happened to others. If you read deeper into the description, you discover that the overall effect of RHEB is to modulate cell proliferation in a tightly controlled quantitative way. You aren’t going to be able to simulate a whole brain until you know precisely and in complete detail exactly how this one protein works.
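For what it’s worth, you can back out where those per-protein byte counts come from by applying Kurzweil’s own implied compression ratio to the coding sequence. This back-of-the-envelope sketch doesn’t reproduce the exact rounding, but it lands in the same ballpark:

```python
def kurzweil_bytes(n_amino_acids, compression_ratio=16):
    """Hypothetical 'design information' for one protein under Kurzweil's
    scheme: 3 bases per codon, 2 bits per base, then the ~16x lossless
    compression implied by his 800 MB -> 50 MB figure. Illustrative only."""
    bits = n_amino_acids * 3 * 2
    return bits / 8 / compression_ratio

rheb_bytes = kurzweil_bytes(184)    # ~8.6 bytes for RHEB
fabp7_bytes = kurzweil_bytes(132)   # ~6.2 bytes for FABP7
```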

And it’s not just the one. It’s all of the proteins. Here’s another: FABP7 (Fatty Acid Binding Protein 7). This one is only 132 amino acids long, so Kurzweil would compress it to 8 bytes. What does it do?

Anthony et al. (2005) identified a Cbf1 (147183)-binding site in the promoter of the mouse Blbp gene. They found that this binding site was essential for all Blbp transcription in radial glial cells during central nervous system (CNS) development. Blbp expression was also significantly reduced in the forebrains of mice lacking the Notch1 (190198) and Notch3 (600276) receptors. Anthony et al. (2005) concluded that Blbp is a CNS-specific Notch target gene and suggested that Blbp mediates some aspects of Notch signaling in radial glial cells during development.

Again, what we know of its function is experimentally determined, not calculated from the sequence. It would be wonderful to be able to take a sequence, plug it into a computer, and have it spit back a quantitative assessment of all of its interactions with other proteins, but we can’t do that, and even if we could, it wouldn’t answer all the questions we’d have about its function, because we’d also need to know the state of all of the proteins in the cell, and the state of all of the proteins in adjacent cells, and the state of global and local signaling proteins in the environment. It’s an insanely complicated situation, and Kurzweil thinks he can reduce it to a triviality.

To simplify it so a computer science guy can get it: Kurzweil has everything completely wrong. The genome is not the program; it’s the data. The program is the ontogeny of the organism, which is an emergent property of interactions between the regulatory components of the genome and the environment, which uses that data to build species-specific properties of the organism. He doesn’t even comprehend the nature of the problem, and here he is pontificating on magic solutions completely free of facts and reason.

I’ll make a prediction, too. We will not be able to plug a single unknown protein sequence into a computer and have it derive a complete description of all of its functions by 2020. Conceivably, we could replace this step with a complete, experimentally derived quantitative summary of all of the functions and interactions of every protein involved in brain development and function, but I guarantee you that won’t happen either. And that’s just the first step in building a simulation of the human brain derived from genomic data. It gets harder from there.

I’ll make one more prediction. The media will not end their infatuation with this pseudo-scientific dingbat, Kurzweil, no matter how uninformed and ridiculous his claims get.

(via Mo Costandi)


I’ve noticed an odd thing. Criticizing Ray Kurzweil brings out swarms of defenders, very few of whom demonstrate much ability to engage in critical thinking.

If you are complaining that I’ve claimed it will be impossible to build a computer with all the capabilities of the human brain, or that I’m arguing for dualism, look again. The brain is a computer of sorts, and I’m in the camp that says there is no problem in principle with replicating it artificially.

What I am saying is this:

  1. Reverse engineering the human brain has complexities that are hugely underestimated by Kurzweil, because he demonstrates little understanding of how the brain works.

  2. His timeline is absurd. I’m a developmental neuroscientist; I have a very good idea of the immensity of what we don’t understand about how the brain works. No one with any knowledge of the field is claiming that we’ll understand how the brain works within 10 years. And if we understand only a fraction of the functionality of the brain, reverse engineering it becomes extremely difficult.

  3. Kurzweil makes extravagant claims from an obviously impoverished understanding of biology. His claim that “The design of the brain is in the genome”? That’s completely wrong. That makes him a walking, talking demo of the Dunning-Kruger effect.

  4. Most of the functions of the genome, which Kurzweil himself uses as the starting point for his analysis, are not understood. I don’t expect a brain simulator to slavishly imitate every protein, but you will need to understand how the molecules work if you’re going to reverse engineer the whole.

If you’re an acolyte of Kurzweil, you’ve been bamboozled. He’s a kook.

By the way, this story was picked up by Slashdot and Gizmodo.

The secret life of babies

Years ago, when the Trophy Wife™ was a psychology grad student, she participated in research on what babies think. It was interesting stuff because it was methodologically tricky — they can’t talk, they barely respond in a comprehensible way to the world, but as it turns out you can get surprisingly consistent, robust results from techniques like tracking their gaze, observing how long they stare at something, or even the rate at which they suck on a pacifier (Maggie, on The Simpsons, is known to communicate quite a bit with simple pauses in sucking).

There is a fascinating article in the NY Times Magazine on infant morality. Set babies to watching puppet shows with nonverbal moral messages acted out, and their responses afterward indicate a preference for helpful agents and an avoidance of hindering agents, and they can express surprise and puzzlement when puppet actors make bad or unexpected choices. There are rudiments of moral foundations churning about in infant brains, things like empathy and likes and dislikes, and they acquire these abilities untaught.

This, of course, plays into a common argument from morality for religion. It’s unfortunate that the article cites deranged dullard Dinesh D’Souza as a source — is there no more credible proponent of this idea? That would say volumes right there — but at least the author is tearing him down.

A few years ago, in his book “What’s So Great About Christianity,” the social and cultural critic Dinesh D’Souza revived this argument [that a godly force must intervene to create morality]. He conceded that evolution can explain our niceness in instances like kindness to kin, where the niceness has a clear genetic payoff, but he drew the line at “high altruism,” acts of entirely disinterested kindness. For D’Souza, “there is no Darwinian rationale” for why you would give up your seat for an old lady on a bus, an act of nice-guyness that does nothing for your genes. And what about those who donate blood to strangers or sacrifice their lives for a worthy cause? D’Souza reasoned that these stirrings of conscience are best explained not by evolution or psychology but by “the voice of God within our souls.”

The evolutionary psychologist has a quick response to this: To say that a biological trait evolves for a purpose doesn’t mean that it always functions, in the here and now, for that purpose. Sexual arousal, for instance, presumably evolved because of its connection to making babies; but of course we can get aroused in all sorts of situations in which baby-making just isn’t an option — for instance, while looking at pornography. Similarly, our impulse to help others has likely evolved because of the reproductive benefit that it gives us in certain contexts — and it’s not a problem for this argument that some acts of niceness that people perform don’t provide this sort of benefit. (And for what it’s worth, giving up a bus seat for an old lady, although the motives might be psychologically pure, turns out to be a coldbloodedly smart move from a Darwinian standpoint, an easy way to show off yourself as an attractively good person.)

So far, so good. I think this next bit gives far too much credit to Alfred Russel Wallace and D’Souza, but don’t worry — he’ll eventually get around to showing how they’re wrong again.

The general argument that critics like Wallace and D’Souza put forward, however, still needs to be taken seriously. The morality of contemporary humans really does outstrip what evolution could possibly have endowed us with; moral actions are often of a sort that have no plausible relation to our reproductive success and don’t appear to be accidental byproducts of evolved adaptations. Many of us care about strangers in faraway lands, sometimes to the extent that we give up resources that could be used for our friends and family; many of us care about the fates of nonhuman animals, so much so that we deprive ourselves of pleasures like rib-eye steak and veal scaloppine. We possess abstract moral notions of equality and freedom for all; we see racism and sexism as evil; we reject slavery and genocide; we try to love our enemies. Of course, our actions typically fall short, often far short, of our moral principles, but these principles do shape, in a substantial way, the world that we live in. It makes sense then to marvel at the extent of our moral insight and to reject the notion that it can be explained in the language of natural selection. If this higher morality or higher altruism were found in babies, the case for divine creation would get just a bit stronger.

No, I disagree with the rationale here. It is not a problem for evolution at all to find that humans exhibit an excessive altruism. Chance plays a role; our ancestors did not necessarily get a choice of a fine-tuned altruism that works exclusively to the benefit of our kin — we may well have acquired a sloppy and indiscriminate innate tendency towards altruism because that’s all chance variation in a protein or two can give us. There’s no reason to suppose that a mutation could even exist that would enable us to feel empathy for cousins but completely abolish empathy by Americans for Lithuanians, for instance, or that is neatly coupled to kin recognition modules in the brain. It could be that a broad genetic predisposition to be nice to fellow human beings could have been good enough to be favored by selection, even if its execution caused benefits to splash onto other individuals who did not contribute to the well-being of the possessor.

But that idea may be entirely moot, because there is some evidence that babies are born (or soon become) bigoted little bastards who do quickly cobble up a kind of biased preferential morality. Evolution has granted us a general “Be nice!” brain, and we also acquire capacities that put up boundaries and foster a kind of primitive tribalism.

But it is not present in babies. In fact, our initial moral sense appears to be biased toward our own kind. There’s plenty of research showing that babies have within-group preferences: 3-month-olds prefer the faces of the race that is most familiar to them to those of other races; 11-month-olds prefer individuals who share their own taste in food and expect these individuals to be nicer than those with different tastes; 12-month-olds prefer to learn from someone who speaks their own language over someone who speaks a foreign language. And studies with young children have found that once they are segregated into different groups — even under the most arbitrary of schemes, like wearing different colored T-shirts — they eagerly favor their own groups in their attitudes and their actions.

That’s kind of cool, if horrifying. It also, though, points out that you can’t separate culture from biological predispositions. Babies can’t learn who their own kind is without some kind of socialization first, so part of this is all about learned identity. And we can also understand why people become vegetarians as adults, or join the Peace Corps to help strangers in faraway lands — it’s because human beings have a capacity for rational thought that we can use to override the more selfish, piggy biases of our infancy.

Again, no gods or spirits or souls are required to understand how any of this works.

Although, if they did a study in which babies were given crackers and the little Catholic babies all made the sign of the cross before eating them, while all the little Lutheran babies would crawl off to make coffee and babble about the weather, then I might reconsider whether we’re born religious. I don’t expect that result, though.

The Ubiquity of Exaptation

On Thursday, I gave a talk at the University of Minnesota at the request of the CASH group on a rather broad subject: evolution and development of the nervous system. That’s a rather big umbrella, and I had to narrow it down a lot. I say, a lot. The details of this subject are voluminous and complex, and this was a lecture to a general audience, so I couldn’t even assume a basic science background. So I had to think a bit.

I started the process of working up this talk by asking a basic question: how did something as complex as the nervous system form? That’s actually not a difficult problem — evolution excels at generating complexity — but I knew from experience that the first hurdle to overcome would be a common assumption, the idea that it was all the product of purposeful processes, ranging from adaptationist compulsion to god’s own intent, that drive organisms to produce smarter creatures. I decided that what I wanted to make clear is that the origin of many fundamental traits of the nervous system is by way of chance and historical constraints, that the primitive utility of some of the things we take for granted in the physiology of the brain does not lie in anything even close to cognition. The roots of the nervous system are in surprisingly rocky ground for brains, and selection’s role has been to sculpt the gnarly, weird branches of chance into a graceful and useful shape.

So I put together a talk called The Ubiquity of Exaptation (2.7M pdf). The barebones presentation itself might not be very informative, I’m afraid, since it’s a lot of pictures and diagrams, so I’ll try to give a brief summary of the argument here.

The subtitle of the talk is “Nothing evolved for a purpose”, and I mean that most seriously. Evolved innovations find utility in promoting survival, and can be honed by selection, but they aren’t put there in the organism for a purpose. The rule in evolution is exaptation, the cooption of elements for use in new properties, with a following shift in function. It’s difficult to just explain, so I picked three examples from the evolution of the nervous system that I hoped would clarify the point. The three were 1) the electrical properties of the cell membrane, which are really a byproduct of mechanisms of maintaining salt balance; 2) synaptic signaling, which coopts cellular machinery that evolved for secretion and detecting external signals; and 3) pathfinding by neurons, the process that generates patterned connectivity between cells, and which uses the same mechanisms of cellular motility that we find in free-living single celled organisms.

  1. Excitability. This was the toughest of the three to explain, because I wasn’t talking to an audience of biophysicists. Our neurons (actually, all of our cells; even egg cells have interesting electrical properties) maintain an electrical potential, a voltage, across their membranes that you can measure with very tiny electrodes. This voltage undergoes short, sharp transient changes that produce action potentials, waves of current that move down the length of the cell. How do they do that? Where did this amazing electrical trick come from?

    The explanation lies in a common problem. Our cells have membranes that are permeable to water, and they also must contain a collection of proteins that are not present in the external environment. The presence of these functional solutes inside the cell should create an osmotic gradient, so that water would flow in constantly, trying to dilute the interior until it is iso-osmotic (the same concentration) with the outside. Some cells have different ways to cope: one way is to build cell walls that retain the concentration in the interior with pressure; another is to have specialized organelles to constantly pump out water. Our cells use a clever and rather lazy scheme: they compensate for the high internal concentration of essential proteins by creating a high external concentration of some other substance, which is impermeant to the cell membrane. Water has the same concentration inside and outside, but there are different distributions of solutes inside and outside.

    What we use to generate these differential distributions are ionic salts, charged molecules. Positively charged sodium ions are high in concentration outside, while positively charged potassium ions and negatively (on average) charged proteins are high in concentration on the inside. Because these are charged ions, their distribution also coincidentally sets up a voltage difference. I confess, I did show the audience the Goldman equation, which is a little scary, but I reassured them that they didn’t have to calculate it — they just needed to understand that the arrangements of salts in cells and the extracellular space generates a voltage that is simply derived from the physical and chemical properties of the situation.

    We use variations in these voltages to send electrical signals down the length of our nerves, but they initially evolved as a mechanism to cope with maintaining our salt balance. We’re also used to thinking of these electrical abilities as being part of a complicated nervous apparatus, but initially, they found utility in single-celled organisms. As an example, I described the behavior of paramecia. The paramecium swims about by beating cilia, like little oars; the membrane of the paramecium maintains an electrical potential, and also contains selectively permeable ion channels that can be switched open or closed. When the organism bumps into an obstacle, the channels open, calcium rushes in as the potential changes, and the cilia all reverse the direction of their beating, making the paramecium tumble backwards. The electrical properties of your brain are also functionally useful to single-celled organisms.

    I concluded this section by trying to reassure everyone that their brain is something more than just a collection of paramecia swimming about. Although the general properties of the membrane are the same, evolution has also refined and expanded the capabilities of the neuronal membrane: there are many different kinds of ion channels, which we can see by their homology to one another are also products of evolution, and each one is specialized in unique ways to add flexibility to the behavioral repertoire of the cell. The origins of the electrical properties are a byproduct of salt homeostasis, but once that little bit of function is available, selection can amplify and hone the response of the system to get some remarkably sophisticated results.

  2. Synaptic signaling. Shuttling electrical signals across the membrane of a cell is one thing, but a nervous system is another: that requires that multiple cells send signals to one another. A wave of current flowing through a membrane in one cell needs to be transmitted to an adjacent cell, and the way we do that is through specialized connections called synapses. A chemical synapse is a specialized junction between two cells: on one side, the presynaptic side, a change in membrane voltage triggers the release of chemicals into the extracellular space; on the receiving side, the postsynaptic side, there are localized collections of receptors for that chemical signal, and when they bind the chemical (called a neurotransmitter), they cause changes in the membrane voltage on their side.

    Once again, the cell simply reuses machinery that evolved for other purposes to carry out these functions. Cells use a secretory apparatus all over the place; we package up hormones or enzymes or other chemicals into small balloons of membrane called vesicles, and we can export them to the outside of the cell by simply fusing the vesicle with the cell membrane. Lots of our cells do this, not just neurons, and it’s also a common function in single-celled organisms. Brewer’s yeast, for instance, contains significant pieces of the membrane-associated signaling complex, or MASC, although of course it doesn’t make true synapses, which require two cells working together in a complementary fashion.

    I described the situation in Trichoplax, an extremely simple multicellular organism which only has four cell types. The Trichoplax genome has been sequenced, and found to contain a surprising number of the proteins used in synaptic signaling…but it doesn’t have a brain or any kind of nervous system, and none of its four cell types are neurons. What a mindless slug like Trichoplax uses these proteins for is secretion: it makes digestive enzymes, not neurotransmitters, and sprays them out onto the substrate to dissolve its food. Again, in more derived organisms with nervous systems, they have simply coopted this machinery to use in signaling between neurons.

    As usual, I had to make sure that nobody came away from this thinking their brain was a conglomeration of Trichoplax squirting digestive enzymes around. Yeast, choanoflagellates, and sponges have very primitive precursors to the synapse; we can look at the evolutionary history of the structure and see extensive refinement and elaboration. The modern vertebrate synapse is built from over 1500 different proteins — it’s grown and grown and grown from its simpler beginnings.

  3. Pathfinding. How do we make circuits of neurons? I’ve just explained how we can conduct electrical signals down single cells, and how pairs of cells can communicate with each other, but we also need to be able to connect up neurons in reliable and useful ways, making complex patterned arrangements of cells in the brain. We actually know a fair amount about how neurons in the developing nervous system do that.

    Young nerve cells form a structure called the growth cone, an amoeboid process that contains growing pieces of the cell skeleton (fibers made of proteins like tubulin and actin), enzymes that act as motor proteins, cytoplasm, and membrane. These structures move: veils of membrane called lamellipodia flutter about, antennae-like rods called filopodia extend and probe the environment, and the whole bloblike mass expands in particular directions by the bulk flow of cytoplasm. The cell body stays in place, usually, and it sends out this little engine of movement that trundles away, leaving an axon behind it.

    “Amoeboid” is the magic word. The growth cone uses the same cellular machinery single-celled organisms use for movement on a substrate. Once again, exaptation strikes, and the processes that amoebae use to move and find microorganismal prey are the same ones that the cells in your brain used to lay down pathways of circuitry in your brain.

    Furthermore, there is no grand blueprint of the brain anywhere in the system. Growing neurons are best thought of as simple cellular automata which contain a fairly simple set of rules that lead them to follow entirely local cues to a final destination. I described some of the work that David Bentley did years ago (and also some of my old grasshopper work) that showed that not only can the cues be identified in the environment, but that experimental ablation of those intermediate targets can produce cells that are very confused and make erroneous navigational decisions.

    We also contain a great many possible signals: long- and short-range cues, signals that attract or repel, and also signals that can change gene expression inside the neuron and change its behavior in even more complicated ways. It’s still at its core an elaboration of behaviors found in protists and even bacteria; we are looking at amazingly powerful emergent behaviors that arise from simple mechanisms.

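A footnote for anyone who wants to play with the Goldman equation from the excitability section: here is a minimal sketch. The ion concentrations and relative permeabilities are generic textbook values I’ve plugged in for illustration, not anything specific from the talk.

```python
from math import log

def goldman_voltage(P_K, P_Na, P_Cl,
                    K_out, K_in, Na_out, Na_in, Cl_out, Cl_in, T=310.0):
    """Goldman-Hodgkin-Katz voltage equation: the membrane potential (volts)
    set by the relative permeabilities (P) and distributions of K+, Na+, and
    Cl-. Because Cl- is an anion, its inside/outside terms swap places."""
    R = 8.314     # gas constant, J/(mol*K)
    F = 96485.0   # Faraday constant, C/mol
    numerator = P_K * K_out + P_Na * Na_out + P_Cl * Cl_in
    denominator = P_K * K_in + P_Na * Na_in + P_Cl * Cl_out
    return (R * T / F) * log(numerator / denominator)

# Generic textbook resting concentrations (mM); permeabilities relative to K+.
v_rest = goldman_voltage(P_K=1.0, P_Na=0.05, P_Cl=0.45,
                         K_out=5, K_in=140, Na_out=145, Na_in=15,
                         Cl_out=110, Cl_in=10)
# v_rest comes out around -0.065 V, i.e. roughly -65 mV, a typical
# neuronal resting potential.
```

Crank the sodium permeability up (as happens when sodium channels open during an action potential) and the computed potential swings positive: the depolarization falls straight out of the physical chemistry.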
And that was the story. Properties of the nervous system that are key to its function and that many of us naively regard as unique to neurons are actually expanded, elaborated, specialized versions of properties that are also present in organisms that lack brains, nervous systems, or even neurons…and that aren’t even multicellular. This is precisely what we’d expect from evolutionary origins, that everything would have its source in simpler precursors. Furthermore, it’s a mistake to try and shoehorn those precursors into necessarily filling the same functions as their descendants today. Cooption is the rule. Even the brains of which we are so proud are byblows of more fundamental functions, like homeostasis, feeding, and locomotion.
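To make the “no grand blueprint” point concrete, here is a toy sketch of a growth cone as a cellular automaton. Everything in it (the grid, the guidepost position, the quadratic gradient) is invented for illustration; the point is only that a sensible path to a target emerges from purely local decisions, with no global map stored anywhere.

```python
# Toy growth cone as a cellular automaton following purely local cues.
GUIDEPOST = (8, 6)  # hypothetical intermediate target cell on the grid

def chemoattractant(x, y):
    """Local cue concentration: higher as you approach the guidepost."""
    return -((x - GUIDEPOST[0]) ** 2 + (y - GUIDEPOST[1]) ** 2)

def grow_axon(start=(0, 0), max_steps=30):
    """At each step the growth cone samples only its four neighboring grid
    points and climbs the local gradient. No global map, no blueprint."""
    x, y = start
    path = [start]
    for _ in range(max_steps):
        neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        best = max(neighbors, key=lambda p: chemoattractant(*p))
        if chemoattractant(*best) <= chemoattractant(x, y):
            break  # local maximum reached: we're at the guidepost
        x, y = best
        path.append(best)
    return path

path = grow_axon()
# path starts at (0, 0) and ends at the guidepost (8, 6)
```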

Get your geek on for Thursday

I’m going to be opening my mouth again on Thursday in Minneapolis — I’ll be giving a talk in MCB 3-120 on the Minneapolis campus at 7:30 on Thursday, 3 December. This will be open to the public, and it will also be an all-science talk, geared for a general audience. I’d say they were going to check your nerd credentials at the door, but just showing up means you’re already fully qualified.

The subject of the talk is my 3 big interests: a) evolution, or how we got here over multiple generations, b) development, or how we got here in a single generation, and c) the nervous system, the most complicated tissue we have. I intend to give a rough outline of how nervous tissue works, how it is assembled into a working brain, and how something so elaborate could have evolved. All in one hour. Wheee!

Afterwards, we’ll be joining the CASH gang for refreshments, somewhere. They haven’t told me yet where, but I know they’re fond of pizza.

Martin Chalfie: GFP and After

Chalfie is interested in sensory mechanotransduction—how mechanical deformations of cells are converted into chemical and electrical signals. Examples are touch, hearing, balance, and proprioception, and (hooray!) he references development: sidedness in mammals is defined by mechanical forces in early development. He studies this problem in C. elegans, in which 6 of 302 nerve cells detect touch. It’s easy to screen for mutants in touch pathways just by tickling animals and seeing if they move away. They’ve identified various genes, in particular a protein that’s involved in transducing touch into a cellular signal.

They’ve localized where this gene is expressed. Most of these techniques involved killing, fixing, and staining the animals. He was inspired by the work of Shimomura, as described by Paul Brehm, showing that aequorin + Ca++ + GFP produces light, and he got in touch with Douglas Prasher, who was cloning GFP, and got to work making a probe that would allow him to visualize the expression of interesting genes. It was a gamble — no one knew if there were additional proteins required to turn the sequence into a glowing final product…but they discovered that they could get functional product in bacteria within a month.

They published a paper describing GFP as a new marker for gene expression, which Science disliked because of the simple title, and so they had to give it a cumbersome title for the reviewers, which got changed back for publication. They had a beautiful cover photo of a glowing neuron in the living animal.

Advantages of GFP: heritable, relatively non-invasive, small and monomeric, and visible in living tissues. Roger Tsien worked to improve the protein and produce variants that fluoresced at different wavelengths. There are currently at least 30,000 papers published that use fluorescent proteins, in all kinds of organisms, from bunnies to tobacco plants.

He showed some spectacular movies from Silverman-Gavrila of dividing cells with tubulin/GFP, and another of GFP/nuclear localization signal in which nuclei glowed as they condensed after division, and then disappeared during mitosis. Sanes and Lichtman’s brainbow work was shown. Also cute: he showed the opening sequence of the Hulk movie, which is illustrated with jellyfish fluorescence (he does not think the Hulk is a legitimate example of a human transgenic.)

Finally, he returned to his mechanoreceptor work and showed the transducing cells in the worm. One of the possibilities this opened up was visual screening for new mutants: either looking for missing or morphologically aberrant cells, or even more subtle things, like tagging expression of synaptic proteins so you can visually scan for changes in synaptic function or organization.

He had a number of questions he could address: how are mechanotransducers generated, how is touch transduced, what is the role of membrane lipids, can they identify other genes important in touch, and what turns off these genes?

They traced the genes involved in turning on the mec-3 gene; the pathway, it turned out, was also expressed in other cells, but they thought they identified other genes involved in selectively regulating touch sensitivity. One curious thing: the mec genes are transcribed in other cells that aren’t sensitive, but somehow are not translated.

They are searching for other touch genes. The touch screen misses some relevant genes because they have redundant alternatives, or are pleiotropic, so other phenotypes (like lethality) obscure the effect. One technique is RNAi, and they made an interesting observation: trying about 17,000 RNAis, they discovered that 600 had interesting and specific effects, 1,100 were lethal, and about 15,000 had no effect at all. The majority of genes are complete mysteries to us. They’ve developed some techniques to get selective incorporation of RNAis into just the neurons of C. elegans, so they’re hoping to uncover more specific neural effects. One focus is the integrin signaling pathway in the nervous system, which they’ve knocked out and found that it demolishes touch sensitivity — a new target!
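
The screen arithmetic is simple enough to check. A minimal sketch, using the approximate counts quoted in the talk; the remainder comes out near the “about 15,000” with no detectable phenotype:

```python
# Rough tally of the C. elegans RNAi screen figures quoted in the talk.
# All counts are the approximate numbers as reported; the point is the proportions.
total = 17_000
specific = 600       # interesting, specific effects
lethal = 1_100
no_effect = total - specific - lethal  # ~15,300, i.e. "about 15,000"

for label, n in [("specific", specific), ("lethal", lethal), ("no effect", no_effect)]:
    print(f"{label}: {n} ({n / total:.0%})")
```

Roughly 90% of the genes tested did nothing visible in the assay, which is the basis for the remark that most genes remain complete mysteries.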

They are now using a short-lived form of GFP that shuts down quickly, so they’ve got a sharper picture of temporal patterns of gene activity.

Chalfie’s summary:

  • Scientific progress is cumulative.

  • Students and post-docs are the lab innovators.

  • Basic research is essential. Who would have thought working on jellyfish would lead to such powerful tools?

  • All life should be studied; not just model organisms.

Chalfie is an excellent speaker and combined a lot of data with an engaging presentation.

Erwin Neher: Chemistry helps neuroscience: the use of caged compounds and indicator dyes for the study of neurotransmitter release

Ah, a solid science talk. It wasn’t bad, except that it was very basic—maybe if I were a real journalist instead of a fake journalist I would have appreciated it more, but as it was, it was a nice overview of some common ideas in neuroscience, with some discussion of pretty new tools on top.

He started with a little history to outline what we know, with Ramón y Cajal showing that the brain is made up of a network of neurons (which we now know to be approximately 10^12 neurons large). He also predicted the direction of signal propagation, and was mostly right. Each neuron sends signals outward through an axon, and receives input from thousands of other cells on its cell body and dendrites.

Signals move between neurons mostly by synaptic transmission, or the exocytosis of transmitter-loaded vesicles induced by changes in calcium concentration. That makes calcium a very interesting ion, and makes calcium concentration an extremely important parameter affecting physiological function, so we want to know more about it. Furthermore, it’s a parameter that is in constant flux, changing second by second in the cell. So how do we see an ion in real time or near real time?

The answer is to use fluorescent indicator dyes which are sensitive to changes in calcium concentration — these molecules fluoresce at different wavelengths or absorb light at different wavelengths depending on whether or not they are bound to calcium, making the concentration visible as changes in either the absorbed or emitted wavelength of light. There is a small battery of fluorescent compounds — Fura-2, Fluo-3, Indo-1 — that allow imaging of localized increases in calcium.
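
For ratiometric dyes like Fura-2, the conversion from a measured fluorescence ratio to a calcium concentration is conventionally done with the Grynkiewicz equation. A minimal sketch; the calibration constants below (R_min, R_max, K_d, and the S_f2/S_b2 scaling) are illustrative placeholders, since real values depend on the dye batch and the optical setup:

```python
def ca_from_fura2_ratio(R, R_min, R_max, K_d, S_f2_over_S_b2):
    """Estimate [Ca2+] (in the units of K_d) from a Fura-2 ratio measurement,
    using the standard Grynkiewicz ratiometric equation:

        [Ca2+] = K_d * (R - R_min) / (R_max - R) * (S_f2 / S_b2)

    R:       measured 340/380 nm excitation ratio
    R_min:   ratio at zero calcium; R_max: ratio at saturating calcium
    S_f2/S_b2: 380 nm fluorescence of free vs. Ca-bound dye
    """
    return K_d * (R - R_min) / (R_max - R) * S_f2_over_S_b2

# Illustrative calibration numbers only:
print(ca_from_fura2_ratio(R=1.0, R_min=0.3, R_max=3.0,
                          K_d=224.0, S_f2_over_S_b2=2.0))
# about 157, i.e. ~157 nM when K_d is given in nM
```

The ratio of two excitation wavelengths is what makes the measurement robust: dye concentration and path length cancel out, leaving only the bound/free balance.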

There’s another problem: resolution. Where the concentration of calcium matters most is in a tiny microdomain, a thin rind of the cytoplasm near the cell membrane called the cortex, which is where vesicles are lined up, ready to be triggered to fuse with the cell membrane by calcium, leading to the expulsion of their contents to the exterior. This microdomain is tiny, only 10-50nm thick, and is below the limit of resolution of your typical light microscope. If you’re interested in the calcium concentration at one thin, tiny spot, you’ve got a problem.

Most presynaptic terminals are very small and difficult to study; they can be visualized optically, but it’s hard to do simultaneous electrophysiology. One way Neher gets around this problem is to use an unusually large synapse, the calyx of Held, which is part of an auditory brainstem pathway. It’s an important pathway in sound localization, and the signals must be very precise. These synapses have a special structure, a cup-like terminal that envelops the post-synaptic cell body — they’re spectacularly large, so large that one can insert recording electrodes both pre- and post-synaptically, and both compartments can be loaded with indicator dyes and caged compounds.

The question being addressed is the concentration of Ca2+ at the microdomain of the cytoplasmic cortex, where vesicle fusion occurs. This is below the level of resolution of the light microscope, so just imaging a calcium indicator dye won’t work — they need an alternative solution. The one they came up with was to use caged molecules, in particular a reagent called Ca-DMN.

Caged molecules are cool, with one special property: when you flash UV light of just the right wavelength at them, they fall apart into a collection of inert (you hope) photoproducts, releasing the caged molecule, which is calcium in this case. So you can load up a cell with Ca-DMN, and then with one simple signal, you can trigger it to release all of its calcium, generating a uniform concentration at whatever level you desire across the entire cell. So instead of triggering an electrical potential in the synaptic terminal and asking what concentration of calcium appears at the vesicle fusion zone, they reversed the approach, generating a uniform calcium level and then asking how much transmitter was released, measured electrophysiologically at the post-synaptic cell. When they got a calcium level that produced an electrical signal mimicking the natural degree of transmitter release, they knew they’d found the right concentration.
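
That titration can be sketched as a root-finding problem: dial the uniform calcium level up or down until the modeled release matches the natural response. The Hill-type dependence of release on calcium below is a toy stand-in (steep and cooperative), not Neher’s actual data, and every constant is an illustrative assumption:

```python
# Sketch of the uncaging titration logic. The release model and all
# constants are illustrative placeholders, not measured values.

def release(ca, ca_half=10.0, hill=4.0):
    """Toy model: transmitter release rises steeply (cooperatively)
    with [Ca2+] (in uM), saturating at 1.0."""
    return ca**hill / (ca**hill + ca_half**hill)

def find_matching_ca(target, lo=0.0, hi=100.0, tol=1e-6):
    """Bisect on the uniform [Ca2+] until modeled release matches the
    target (the response evoked by a natural action potential)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if release(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ca = find_matching_ca(target=0.5)  # half-maximal release in this toy model
print(round(ca, 3))
```

Bisection works here because release increases monotonically with calcium; in the real experiment the “function evaluation” is an uncaging flash plus a post-synaptic recording rather than a formula.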

Caged compounds don’t have to be just calcium ions: other useful probes are caged ATP, caged glutamate (a neurotransmitter), and even caged RNA. The power of the technique is that you can use light to manipulate the chemical composition of the cell at will, and observe how it responds. These are tools that can be used to modify cell states, to characterize secretory properties, or to generate extracellular signals, all with the relatively noninvasive probe of a brief focused light flash.

Reading this will affect your brain

Baroness Susan Greenfield has been spouting off some bad neuroscience, I’m afraid. She’s on an anti-social-networking-software, anti-computer-games, anti-computer crusade that sounds a bit familiar — it’s just like the anti-TV tirades I’ve heard for 40-some years — and a little bit new — computers are bad because they are “changing the workings of the brain”. Ooooh.

But to put that in perspective, the brain is a plastic organ that is supposed to rewire itself in response to experience. That’s what brains do. The alternative is to have a fixed reaction pattern that doesn’t improve itself, which would be far worse. Greenfield is throwing around neuroscientific jargon to scare people.

So yes, using computers all the time and chatting in the comments sections of weird web sites will modify the circuitry of the brain and have consequences that will affect the way you think. Maybe I should put a disclaimer on the text boxes on this site. However, there are events that will scramble your brains even more: for example, falling in love. I don’t want to imagine the frantic rewiring that has to go on inside your head in response to that, or the way it can change the way you see the entire rest of the world, for good or bad, for the whole of your life.

Or, for an even more sweeping event that had distinct evolutionary consequences, look at the effect of changing from a hunter-gatherer mode of existence to an agrarian/urban and modern way of life. We get less exercise because of that, suffer more near-sightedness, face an increased incidence of infectious disease, and have warped our whole pattern of activity in radical ways. Not only do neural pathways have to develop in different ways to cope with different environments, but there was almost certainly selection for urban-compatible brains — people have died of the effects of that shift. Will Baroness Greenfield give up her book-writin’, lecturin’ ways to fire-harden a pointy stick, don a burlap bag, and dedicate her life to hunting rabbits?

Embryonic similarities in the structure of vertebrate brains

I’ve been doing it wrong. I was looking over creationist responses to my arguments that Haeckel’s embryos are being misused by the ID cretins, and I realized something: they don’t give a damn about Haeckel. They don’t know a thing about the history of embryology. They are utterly ignorant of modern developmental biology. Let me reduce it down for you, showing you the logic of science and creationism in the order they developed.

Here’s how scientific and creationist thinking about the embryological evidence evolved:

Scientific thinking

An observation: vertebrate embryos show striking resemblances to one another.

An explanation: the similarities are a consequence of shared ancestry.

Ongoing confirmation: Examine more embryos and look more deeply at the molecules involved.

Creationist thinking

A premise: all life was created by a designer.

An implication: vertebrate embryos do not share a common ancestor.

A conclusion: therefore, vertebrate embryos do not show striking resemblances to one another.


