When I read the latest dreck from the “Walter Bradley Center for Natural and Artificial Intelligence”, all I could think was: I did warn you.
Of course, it didn’t really take that much cleverness. The “Center” is a project of the Discovery Institute, a think tank so committed to dissembling about evolution that it’s often been called the “Dishonesty Institute”. And, as I pointed out, the folks working at the “Center” aren’t exactly luminaries in the area they purport to critique.
This latest column is by Michael Egnor, a surgeon whose arrogance (as we’ve seen many times before) is only exceeded by his ignorance. Despite knowing nothing about computer science, Egnor tries to explain what machine learning is. The results are laughable.
Egnor starts by making an analogy between a book and a computer. He says a book “is a tool we use to store and retrieve information, analogous in that respect to a computer”. But this comparison misses the single most essential feature of a computer: it doesn’t just store and retrieve information, it processes it. A book made of paper typically does not; the words are the same each time you look at it.
Egnor goes on to construct an analogy in which the book’s binding cracks preferentially where people use it. But being a computer requires more processing capability than a cracked binding can supply, and not just any processing will do: there’s a reason why machines like the HP-35, despite their ability to do trig functions and exponentials, were called “calculators” and not “computers”. To be genuinely considered a “computer”, a machine should be able to carry out basic operations such as comparisons and conditional branching. And some would say that a computer isn’t a real computer until it can simulate a Turing machine. A book with a cracked binding isn’t even close.
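To make the contrast concrete, here is a minimal, purely illustrative sketch in Python (nothing of the sort appears in Egnor’s piece, and the rule table is invented for the example) of the kind of conditional, state-dependent processing even a trivial computer performs and a cracked binding cannot: a toy Turing machine simulator.

```python
# Toy Turing machine simulator: illustrative only.
# The point is the conditional branching -- what happens next depends on the
# current state and the symbol under the head, something no book binding does.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), +1 (right), or 0. The machine halts in state "halt".
    """
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if 0 <= head < len(tape) else blank
        # Conditional branch: the rule applied depends on state AND symbol.
        state, write, move = rules[(state, symbol)]
        if head >= len(tape):
            tape.append(blank)      # grow the tape to the right as needed
        tape[head] = write
        head += move
        if head < 0:
            tape.insert(0, blank)   # grow the tape to the left as needed
            head = 0
    return "".join(tape)

# A made-up rule table: scan right, flipping 0 <-> 1, and halt at the first blank.
flip_rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine("0110", flip_rules))  # prints "1001_" (trailing blank is where it halted)
```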
Egnor goes on to elaborate on his confusion. “The paper, the glue, and the ink are the book’s hardware. The information in the book is the software.” Egnor clearly doesn’t understand computers! Software specifies actions to be taken by the computer, as a list of commands. But a book doesn’t typically specify any actions, and if it does, those actions are not carried out by the “paper” or “glue” or “ink”. If anything carries out those actions, it is the reader of the book. So if the analogy held at all, the book’s “hardware” would be the person reading it, not the paper, glue, or ink. Egnor’s analogy is all wrong.
Egnor claims that computers “don’t have minds, and only things with minds can learn”. But he doesn’t define what he means by “mind” or “learn”, so we can’t evaluate whether this is true. Most people who actually work in machine learning would dispute his claim. And Egnor contradicts himself when he claims that machine learning programs “are such that repeated use reinforces certain outcomes and suppresses other outcomes”, but that nevertheless this isn’t “learning”. Human learning proceeds precisely by this kind of process, as we know from neurobiology: neural connections that are used repeatedly get strengthened, while unused ones get weakened or pruned.
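For what it’s worth, here is a minimal, illustrative sketch in Python of exactly the process Egnor describes: repeated trials reinforcing outcomes that pay off and suppressing outcomes that don’t. It’s a toy softmax-preference update; the payoffs, learning rate, and action names are invented for the example and aren’t drawn from any particular ML system.

```python
# Toy "reinforce and suppress" learner: illustrative only.
# Actions whose outcomes beat a running baseline become more probable;
# actions whose outcomes fall short become less probable.

import math
import random

random.seed(0)

actions = ["A", "B", "C"]
preferences = {a: 0.0 for a in actions}       # all outcomes start equally likely
true_reward = {"A": 1.0, "B": 0.2, "C": 0.0}  # hypothetical payoffs, unknown to the learner
learning_rate = 0.1

def choose(prefs):
    """Pick an action with probability proportional to exp(preference)."""
    weights = [math.exp(prefs[a]) for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

for _ in range(2000):
    a = choose(preferences)
    reward = true_reward[a] + random.gauss(0, 0.1)       # noisy feedback
    baseline = sum(preferences.values()) / len(actions)  # running reference point
    # Reinforce the chosen action when its outcome beats the baseline,
    # suppress it when the outcome falls short; the softmax does the rest.
    preferences[a] += learning_rate * (reward - baseline)

print(preferences)  # "A" ends up with a far higher preference than "B" or "C"
```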
Finally, Egnor claims that “it is man, and only man, who learns”. This will be news to the thousands of researchers who study learning in animals, and have done so for decades.
When a center is started by people with a religious axe to grind, and staffed by people who know little about the area they purport to study, you’re guaranteed to get results like this. Computer scientists already have a term for it: GIGO, garbage in, garbage out.
Marcus Ranum says
Despite knowing nothing about computer science, Egnor tries to explain what machine learning is
Good god. As someone who’s implemented machine learning systems several times since 1999, I have to say that he couldn’t tell his ass from a hole in the ground even if he had a well-trained Bayesian classifier to help. He’s trying to turn a book into a variant of the “Chinese Room” problem, and he doesn’t realize that the answer to all questions about “what is intelligence?” is pretty much “so what, it works.”
Most people who actually work in machine learning would dispute his claim
I suppose there might be someone who doesn’t, so you have to say “most” as a safety-hedge.