Dave Thomas explains Genetic Algorithms and demonstrates that, as usual, the Intelligent Design bigwigs don’t have any idea what they’re talking about.
Jonathan Badger says
I don’t know. Sure this is better than the “weasel” demo, but it still isn’t very much like real evolution. There is still a human-defined fitness function: “short is good”. Real natural selection is simply based on different levels of reproductive success. Could this be achieved by being short? Sure, but other “strategies” (such as being long, fat, thin, etc.) also might work – that’s why there’s diversity in life. Truly interesting simulations of evolution need to get rid of defined “fitness functions” and should let them simply develop from the environment.
Caledonian says
“Sure this is better than the “weasel” demo, but it still isn’t very much like real evolution”
What the…?!
What would it take to be like ‘real evolution’ in your mind? Solving hydrated protein structures in realtime? There’s change in trait distribution over time — that’s evolution, period. That the change takes place because of selection by the environment of certain kinds of traits makes it a model of natural selection.
There have already been many simulations that don’t have explicitly defined fitness functions, and they’ve been very successful. Look for a Scientific American Frontiers segment entitled ‘Robot Independence’ from 12/17/2000 for a particularly lovely example.
Jonathan Badger says
What would it take to be like ‘real evolution’ in your mind? Solving hydrated protein structures in realtime?
No, it wouldn’t have to be that detailed biochemically, but it should involve competing for resources by any strategy — predation, parasitism, mutualism, etc. There are indeed systems (like Tierra and Avida) that are first stabs at this sort of thing, but they still have too much defined by the programmers. Still, they are more interesting than simply finding the shortest path according to a fitness function that rewards “shortness”.
Caledonian says
Still, they are more interesting than simply finding the shortest path according to a fitness function that rewards “shortness”.
As opposed to a fitness function that finds configurations that are best at replicating themselves according to a fitness function that rewards replication?
The applications of evolutionary algorithms to actual design tasks inevitably involves finding fitness functions that describe our desires and goals, then letting evolution proceed. How is that not interesting?
Keith Douglas says
There’s some work in the evolutionary game theory modelling community by Peter Danielson and others that does sort of illustrate competing for resources by different strategies. I think some of it might be a little oversimplistic for other reasons, but it is a start.
DonCulberson says
I had to laugh. The random quote that appeared along with this entry when I surfed in this morning was the one from Ken Miller on “Firing Line” about the theoretical deficiencies of those who argue against evolution. How apropos was that?
I have frequently noticed that the random quote generator seems more “intelligent” than the word “random” would imply. Design? “You be the judge!” LOL
Uncle Don
T_U_T says
Sure this is better than the “weasel” demo, but it still isn’t very much like real evolution. There is still a human defined fitness function “short is good”.
If you make fitness implicit in the environment, instead of explicit, it will surely make your evolutionary simulation more fancy, and the results more surprising, but the evolution of the system will be exactly the same. It is similar to defining explicitly X = 4 vs defining it implicitly, like 3X + 4 = 4X.
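For concreteness, here is a minimal Python sketch of the two styles of specification being contrasted here (purely illustrative, not anyone’s actual code; the energy-budget “environment” is invented): an explicit fitness function states “short is good” directly, while the implicit version scores a genome by what remains of a fixed energy budget after paying a per-gene cost, and both happen to induce the same ordering:

import random

# Explicit fitness: "short is good", stated directly as a function of the genome.
def explicit_fitness(genome):
    return -len(genome)

# Implicit fitness: let the genome "live" in a toy environment with a fixed
# energy budget; each gene costs one unit to express, and leftover energy
# stands in for reproductive success. Same ordering, different specification.
def implicit_fitness(genome, energy_budget=100):
    spent = sum(1 for gene in genome)
    return energy_budget - spent

population = [[0] * random.randint(1, 50) for _ in range(20)]
for score in (explicit_fitness, implicit_fitness):
    best = max(population, key=score)
    print(score.__name__, "favours a genome of length", len(best))

Whether the two framings always yield identical dynamics is exactly what is disputed in the replies below.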
Jonathan Badger says
As opposed to a fitness function that finds configurations that are best at replicating themselves according to a fitness function that rewards replication?
No, that’s circular. Evolutionary simulations use fitness functions to *determine* the probability of reproduction of a “genome” in the next generation.
The applications of evolutionary algorithms to actual design tasks inevitably involves finding fitness functions that describe our desires and goals, then letting evolution proceed. How is that not interesting?
Okay, perhaps I should rephrase my opinion to “not interesting from the standpoint of furthering our knowledge of biological evolution”. Such genetic algorithms might indeed be practical tools (although most textbooks on machine learning tend to favor Bayesian models over biologically inspired algorithms such as neural nets and genetic algorithms).
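To make the mechanism Badger describes above concrete, here is one standard scheme, fitness-proportional (“roulette wheel”) selection, as a minimal Python sketch (assuming non-negative fitness values; the bitstring example is invented):

import random

def select_parent(population, fitness):
    # Fitness-proportional selection: each genome's chance of reproducing
    # in the next generation is its fitness divided by the total fitness.
    weights = [fitness(genome) for genome in population]
    return random.choices(population, weights=weights, k=1)[0]

# Example: genomes are bitstrings, fitness counts 1-bits.
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
parent = select_parent(population, fitness=sum)
print("selected parent:", parent)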
Jonathan Badger says
If you make fitness implicit in the environment, instead of explicit, it will surely make your evolutionary simulation more fancy, and the results more surprising, but the evolution of the system will be exactly the same. It is similar to defining explicitly X = 4 vs defining it implicitly, like 3X + 4 = 4X.
No, I don’t think it is the same. For example, “Tierra” allows parasites to evolve. Traditional GAs don’t — how can the “evolution of the system” be the same in environments with and without parasites?
T_U_T says
For example, “Tierra” allows parasites to evolve. Traditional GAs don’t
That is because explicit fitness is usually defined as independent of the other organisms in the system, and implicit fitness as dependent on them. But this is not inevitable. You can also write implicit functions that ignore all other virtual critters, and explicit functions that take them into account.
Paul W. says
Jonathan,
I generally agree that typical genetic algorithms are not enough like natural evolution, in some sense, but there are several interacting dimensions that make it hard to interpret results, even without throwing in wildly nonlinear interactions like predation, parasitism, and mutualism.
One basic problem with simple GA’s is that they’re very greedy search algorithms—they tend to reward the first barely-viable or significantly-better solution, and it outcompetes everything else after relatively few generations. (Think MS-DOS and Windows.) There’s one population in an “everybody sleeps with everybody” love heap, with all the progeny competing head-to-head.
That means that a genotype with a novel strategy is very likely to be weeded out before it has refined that strategy enough to be competitive; it’s like expecting babies to compete for food with teenagers.
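A minimal Python sketch of that greedy dynamic (invented parameters, nobody’s actual code): one panmictic population, one shared fitness function, truncation selection. The count of distinct genotypes typically collapses within a few dozen generations as the first good lineage sweeps:

import random

GENOME_LEN, POP_SIZE, GENERATIONS = 40, 60, 81

def fitness(genome):
    # One fixed fitness function for everyone: "more 1-bits is better".
    return sum(genome)

def mutate(genome, rate=0.01):
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if gen % 20 == 0:
        distinct = len({tuple(g) for g in population})
        print(f"gen {gen:3d}: best={fitness(population[0])}, "
              f"distinct genotypes={distinct}")
    # Truncation selection: only the top fifth reproduces; everyone else dies.
    parents = population[:POP_SIZE // 5]
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]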
If you look at real environments, there’s really a whole bunch of semi-isolated sub-environments with different fitness functions, so that novel solutions often don’t have to compete head-to-head with established winners. In a coral reef, for example, there are many different-sized spaces for small, medium, and large animals to occupy, without having to compete with much-larger animals, as well as various gradients. (Light, water velocity, nutrients, etc.)
Even without predation/parasitism/mutualism, and only looking at peer competition, this is important for making real evolution vastly less greedy than typical GA’s. Small, maneuverable species can thrive in networks of small spaces, protected from big fast ones which can’t get in at all, or can’t maneuver well enough. At some later point, these animals may evolve to be bigger, moving into larger and larger spaces and competing head-to-head with big animals that evolved by a different route. (Or the successful big animals may eventually get dwarfed and invade the smaller animals’ environments.)
It seems to me that there are effectively many “dimensions” like this that more-or-less segregate “farm teams,” acting as incubators for novel strategies which may eventually break out and compete in the larger arena. Even in environments that are less geographically or geometrically varied than a coral reef, there are lots of differences between places, e.g., simple gradients effectively making qualitatively different regimes. (E.g., in water, floating at the surface vs. resting on the bottom vs. swimming around in the middle.)
This makes me think that GA’s ought to typically be more like some alife environments, with semi-isolated subpopulations subject to different fitness functions that differ in several dimensions. So, for example, if you’re trying to evolve bots that are both fast and maneuverable, some could thrive in an environment where speed is weighted more than maneuverability, while very different ones thrive in an environment where that bias is reversed.
(Imagine, say, a 5×5 grid of sub-environments, where one dimension is “speed” and the other is “agility”. Everybody starts in the slow/clumsy corner, but there are different routes to the fast/agile corner. Variants that get fast first would move along one dimension, and variants that get agile first would move along the other, only competing when they evolve along both dimensions and meet somewhere in the middle. Or maybe a species maintaining a reasonable balance would get there first, evolving along the diagonal without falling into the dead ends along the edges…)
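A sketch of that grid in Python (again purely illustrative, with made-up trait names and parameters): a 5×5 lattice of demes, each selecting on its own blend of speed and agility, with rare migration keeping them only semi-isolated:

import random

GRID, DEME_SIZE, GENERATIONS = 5, 20, 200

def fitness(individual, i, j):
    # Each cell weights the two traits differently: row i favours speed,
    # column j favours agility, so different corners reward different routes.
    speed, agility = individual
    return (i + 1) * speed + (j + 1) * agility

def mutate(individual, sigma=0.05):
    return tuple(max(0.0, trait + random.gauss(0, sigma)) for trait in individual)

# Everybody starts in the slow/clumsy corner of trait space.
demes = {(i, j): [(0.1, 0.1)] * DEME_SIZE
         for i in range(GRID) for j in range(GRID)}

for _ in range(GENERATIONS):
    for (i, j), deme in demes.items():
        deme.sort(key=lambda ind: fitness(ind, i, j), reverse=True)
        demes[(i, j)] = [mutate(random.choice(deme[:5]))
                         for _ in range(DEME_SIZE)]
    # Occasional migration between horizontal neighbours: semi-isolation,
    # not panmixia, so novel strategies get time to refine before competing.
    if random.random() < 0.2:
        i, j = random.randrange(GRID), random.randrange(GRID - 1)
        demes[(i, j)][-1], demes[(i, j + 1)][-1] = \
            demes[(i, j + 1)][0], demes[(i, j)][0]

best = max((ind for deme in demes.values() for ind in deme), key=sum)
print("best overall (speed, agility):", best)

The design point is only that semi-isolation plus heterogeneous fitness lets different routes through trait space survive long enough to meet in the middle.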
I seem to recall reading (10 years ago?) about some a-life simulations with networks of semi-isolated subenvironments, but all with the same fitness function. (Simply deferring competition had a big effect on diversity and resulted in some neat graphics showing different color-coded variants in different regions, occasionally breaking out and sweeping through each others’ niches.) If nothing else, it was a vivid demonstration of how greedy a normal GA is by comparison, and demonstrated some speciation effects. Still, it seemed very greedy to me, because a fundamental problem is the uniformity of the fitness function. (I don’t know what’s been done along those lines since.)
One thing that struck me back then was that predation, parasitism and mutualism may often serve mainly to vary the fitness function over time, so that different evolutionary pathways are explored. They introduce big perturbations into the search and break a lot of dead-end solutions. Unfortunately, they also introduce all kinds of weird nonlinear effects, making the results difficult to understand.
I think it would be interesting to avoid those complexities and introduce simple variations in the fitness functions first, e.g., multiple gradients. (Maybe somebody’s done that by now… I haven’t kept up with GA’s or AL.)
I think this all has something to do with Dembski’s gripes about targets, but I’m not sure exactly what. He’s sorta right that fixed fitness functions pre-define a single gradient tilted toward right answers. This makes them efficient in some sense—they don’t waste a lot of effort running off in other directions—but they often work in spite of it making them too greedy. Natural evolution is more robust because rich environments maintain diversity, so more novel solutions will be found.
I think there are some deep issues there about what’s a realistic fitness function and what constitutes “rigging” one, especially with respect to the evolution of modules that will be useful in the long term. Light-sensing, for example, has a bunch of uses in a bunch of different environments. (Even without developing imaging vision, it can be useful in shade-seeking to avoid UV damage, in detecting whether a big predator is swimming over you, tracking what time of day it is to predict later temperature changes, etc.)
The utility of light sensing for all of these non-obvious purposes serves as a great big hint to a big dumb search algorithm that light can convey useful information. The dumb search algorithm doesn’t have to know that imaging vision is possible or desirable, because light is so good at conveying information that various cheesy hacks are simple and useful in many environments—even a very dumb algorithm will latch onto light-sensing.
But it seems to me the only rigging there, if any, is in the nature of 3D space and light, and random variations in environments creating various gradients and networks of sub-environments. The rest follows a la Darwin.
I guess that’s why God said “let there be light”—so that even a big dumb algorithm could mostly hill-climb very circuitously up to us, and make Darwin right. (There is no God but God, and Darwin is clearly his greatest prophet.)
lt.kizhe says
I agree that GAs (specifically, Thomas’ example) don’t model evolution well, in that they focus narrowly and abstractly on one particular (albeit important) mechanism, and ignore the dynamics of a changing environment. However, that’s not really what the current argument with IDists is about: the argument is precisely about abstractions like whether evolution can add any information, or achieve anything we might call “design”. Basically, it’s just a new costume for the old Creationist claim that No Good Thing Can Come From Random Processes (gosh — IDists recycle old Creationist arguments? Who’d’ve thunkit?). And even simple, abstract GA examples like this one refute that claim handily (as indeed, does the entire field of stochastic optimization).
NatureSelectedMe says
I just want to say what a great insightful comment by Paul W. If this were Slashdot, it would be modded a 5. The only error I can see is when you said of Dembski “He’s sorta right”. Tsk, Tsk. That can never be. We’ll have to disregard your whole comment. :)
thwaite says
Dave Thomas’ explanation of GA is excellent (and I’ve seen other attempts – it’s not easy). Thanks PZ for highlighting it.
In addition to such formal studies, applications of GA-like discovery procedures to real-world problems like drug and other molecule design are not only closer to nature and less prone to charges of “predetermined results”; there’s probably also some money to be made. A google on ‘evolutionary chemistry’ includes hits like this:
www.cropsolution.com/technology.html
>Cropsolution utilizes two primary technologies, Targeted Biology™ and Evolutionary Chemistry™.
…
>Evolutionary Chemistry™ projects at Cropsolution include an NSF-SBIR funded proof of concept experiment aimed at extending the capability of RNA catalysts to synthesize small molecules. This proof of concept was successful, demonstrating the ability of the technology to evolve RNA catalysts for the synthesis of sulfonylureas, an important class of herbicides and pharmaceuticals.
(I have no affiliation with cropSolution, and note their web page hasn’t been updated since 2004 or so.)
Caledonian says
No, that’s circular. Evolutionary simulations use fitness functions to *determine* the probability of reproduction of a “genome” in the next generation.
Of course it’s circular! It doesn’t make any bit of difference whether the fitness function is explicitly defined, or arises as a consequence of a more complex environment. It is exactly the same as defining a set of algorithms that define the environment and letting *that* define the fitness function.
Caledonian says
He’s sorta right that fixed fitness functions pre-define a single gradient tilted toward right answers.
Not only is Paul W. wrong for suggesting that Dembski is correct about something (tee hee!), he’s also more generally (and genuinely) wrong. There’s nothing about predefined fitness functions that inevitably leads to single gradients tilted towards particular answers.
We use predefined functions for demonstrations for several reasons. They accurately represent examples of genuinely evolutionary change, for one, and they’re great ways of showing that evolutionary processes can result in “specified information”. But more importantly, there simply is no difference in the results. It doesn’t matter if I generate a random environment that has a fitness space of a certain shape, or dictate the fitness space of the simulation by fiat. The evolutionary response is the same.
Halfjack says
GAs aren’t generally used to answer questions about evolution or natural selection — they are an attempt to harness the process for actual problem solving. Of course reversing the purpose causes problems. Fitness functions are there as an improvement on real world evolution so that the system converges on a useful solution in the experimenter’s life time.
John Koza’s work at Stanford has found some startling solutions to analog circuit design with genetic programs. I think he may have stumbled on some legal problems too, given that his software can generate patent infringements without direction from a human. Is the patent valid if a machine can crunch it out?
Jonathan Badger says
Not only is Paul W. wrong for suggesting that Dembski is correct about something (tee hee!), he’s also more generally (and genuinely) wrong. There’s nothing about predefined fitness functions that inevitably leads to single gradients tilted towards particular answers.
That’s silly. Of course predefined fitness functions are tilted towards particular answers. Given enough time (and sufficient sources of variation, such as mutation and crossovers, to get over local optima), the “optimal” solution will be found. There is no “optimal” solution in real biology.
Jonathan Badger says
GAs aren’t generally used to answer questions about evolution or natural selection — they are an attempt to harness the process for actual problem solving
True, but even then they are not particularly powerful. Even bioinformaticians generally prefer HMMs and support vector machines over GAs when wanting to solve real world machine learning problems (like gene finding). Early work on gene finding did try GAs, but modern genefinders such as GENSCAN are almost exclusively HMM based.
John Koza’s work at Stanford has found some startling solutions to analog circuit design with genetic programs
I have to admit Koza’s work is more interesting than GAs. Genetic Programming actually generates programs in a Turing-complete language such as LISP. While he still applies predefined fitness functions, he is doing far more than optimizing parameters as in GAs.
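To illustrate the difference, here is a very small genetic-programming sketch in Python (illustrative only; real GP systems such as Koza’s are far richer): candidate programs are expression trees built from +, -, and *, and the predefined fitness is how well an evolved program reproduces the target x**2 + x:

import random
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def random_tree(depth=3):
    # A program is a nested tuple (op, left, right); a leaf is 'x' or a constant.
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.uniform(-2.0, 2.0)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def run(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](run(left, x), run(right, x))

def error(tree):
    # Predefined fitness: squared error against the target x**2 + x.
    return sum((run(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    if random.random() < 0.2:
        return random_tree()                      # replace a subtree wholesale
    if isinstance(tree, tuple):
        op, left, right = tree
        return (op, mutate(left), mutate(right))  # descend into the children
    return tree

population = [random_tree() for _ in range(200)]
for _ in range(40):
    population.sort(key=error)
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]
population.sort(key=error)
print("best error:", error(population[0]))
print("best program:", population[0])

The point is just that the genome here is a program’s structure, not a fixed-length parameter vector.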
Caledonian says
Of course predefined fitness functions are tilted towards particular answers.
You’re leaving out the most important part of my objection — the part that, once removed, makes my objection invalid. Curious, that. A more suspicious person might attribute that to a hostile and dishonest frame of mind on your part. But as I’m all smiles and sunshine, I’m sure it was just a misunderstanding.
Predefined fitness functions do not inevitably lead to single gradients tilted towards particular answers. Obviously all interesting fitness functions will have non-trivial optima, but those optima do not need to be known explicitly (although of course they are defined by the function whether we know what they are or not). Functions can lead to complex and subtle fitness spaces in which there isn’t necessarily any single best answer.
There are indeed optimal solutions in real biology. It’s not a coincidence that sharks, dolphins, and whales all have extremely similar body designs. Convergent evolution is very real, and though we don’t yet know how similar two organisms must be in order for their evolutionary development to take similar paths, we know that it happens often.
It’s also worth pointing out that, although many fitness spaces in real biology change as the organisms that inhabit them change, this is not always the case. Finding effective ways to move through water, for example, depends almost entirely on the physical properties of water. Those do not change when ecosystems change. Even then, there is no “best” solution because organisms have different needs for motion. When we compare organisms that have similar needs, we find strikingly similar propulsion methods in many cases. There are indeed “best” local solutions, given organisms’ structures and substances.
Jonathan Badger says
Of course it’s circular! It doesn’t make any bit of difference whether the fitness function is explicitly defined, or arises as a consequence of a more complex environment. It is exactly the same as defining a set of algorithms that define the environment and letting *that* define the fitness function.
1. This could only be true even in theory for cases in which there is only a single fitness function for all organisms.
2. In practice, you’d have to know a lot about the effect of the environment to come up with the same fitness function as the simulation would. What’s truly interesting about biological evolution is that, as far as we can tell, nobody with knowledge is defining fitness functions; it all emerges from the environment itself.
Caledonian says
1. Those are precisely the cases we are most interested in — those evolutionary algorithms are the ones we can use to actually produce results. Cases where fitness is determined by referencing the states of other, evolving entities might be fun for a game, but they’d have little practical purpose.
2. Not if it was a very simple environment. Such as, for example, that found in a primitive demonstration simulation designed to illustrate some basic concepts.
Jonathan Badger says
Predefined fitness functions do not inevitably lead to single gradients tilted towards particular answers. Obviously all interesting fitness functions will have non-trivial optima, but those optima do not need to be known explicitly (although of course they are defined by the function whether we know what they are or not). Functions can lead to complex and subtle fitness spaces in which there isn’t necessarily any single best answer.
If you define “shortest path” as your fitness function, you *will* find the shortest path (unless your GA is poorly implemented, or you don’t let the program run long enough). The fact that there may be two (or more) equally short paths doesn’t really change the fact.
I speak here with two hats: one as an evolutionary genomicist who is interested in how genomic features evolved, and the other as a bioinformatician who is interested in any machine learning technique that can help me get more out of genomic data. While I was fascinated by GAs when I first learned about them 20 years ago in high school after reading a column of A.K. Dewdney’s in _Scientific American_, as a practising scientist I really don’t see how they are particularly useful in either case. They *are* a good introduction to the idea of natural selection for students, I admit.
Caledonian says
If you define “shortest path” as your fitness function, you *will* find the shortest path (unless your GA is poorly implemented, or you don’t let the program run long enough). The fact that there may be two (or more) equally short paths doesn’t really change the fact.
Well, yeah. That’s the whole point of using evolutionary algorithms, isn’t it.
While I was fascinated by GAs when I first learned about them 20 years ago in high school after reading a column of A.K. Dewdney’s in _Scientific American_, as a practising scientist I really don’t see how they are particularly useful in either case.
How interesting. While I am not a member of either field, I know quite a few electrical engineers and computer scientists who are very excited about using GAs to produce circuitry that is both efficient and fault-tolerant, something that human engineering simply isn’t good at. If you don’t see much application for improved computational systems in your fields, perhaps you ought to broaden your field of view a bit.
Torbjörn Larsson says
“There is still a human defined fitness function “short is good”.”
First a nitpick. There was also “not linking to stable nodes is bad.”
Essentially Thomas is doing a more realistic experiment than the WEASEL. It may be a lost cause to explain experimental setups to a creationist, but it is completely justified as such. Hopefully sane but uneducated minds will find it easier to grasp.
“They *are* a good introduction to the idea of natural selection for students, I admit.”
I think that is the only goal here. And these “students” don’t want to learn. Thomas’ article will be a good pointer against the usual creocrap about GAs. Until someone realises your suggestion, which would be even better.
Unfortunately, as in evolution, the perfect solution to promote enlightenment and defeat political agendas will never be achieved. Evolution simulators will probably always be attacked. Programs with evolving fitness functions will probably be quite complicated and have long code, which are further obstacles.
“Even bioinformaticians generally prefer HMMs and support vector machines over GAs when wanting to solve real world machine learning problems”
Thanks for a valuable pointer!
Paul W. says
Caledonian,
I didn’t say or mean to imply that simple GA’s are biased toward particular answers in the sense that Dembski needs for his argument, which I agree is a load of crap.
I also agree that there’s nothing wrong with predefined fitness functions, unless you’re trying to demonstrate something about the emergence of fitness functions. (That’s why I proposed an algorithm that uses (a set of) predefined fitness functions.)
My criticism of simple GA’s is about a domain-independent bias in search structure. This inevitably biases things toward some set of particular answers and away from others, but that’s a far cry from what Dembski needs for his argument that it’s just sneaking the solution into the fitness function.
I do understand that despite their greediness, even simple GA’s can be useful and interesting; contrary to what Dembski says, they can produce true novelty. And of course picking on simple GA’s is a straw man in terms of showing what natural evolution is capable of; as I explained, it’s not nearly as greedy at bigger scales of space and time, and is therefore capable of finding tons of novelties that simple GA’s are not.
Sorry if my phrasing made it sound like I was agreeing with Dembski more than I was; I expected it to be obvious that I don’t really buy any of Dembski’s arguments. I think that if you fixed them up a bit, they’d lead to the opposite conclusions. (Hence my ironic but almost-serious comment to the effect that “Darwin was right, therefore God exists.”)
Caledonian says
I didn’t say or mean to imply that simple GA’s are biased toward particular answers in the sense that Dembski needs for his argument, which I agree is a load of crap.
True. You said that GA’s (nothing about ‘simple’) with predefined fitness functions have single gradients tilted towards particular answers.
That’s also a load of crap. Instead of defending the statement, you deny having said a superficially similar but almost completely different statement. A more suspicious person might view that as suggestive of a dishonest and hostile mindset; fortunately I’m all smiles and sunshine, so I presume it was merely a misunderstanding on your part.
This inevitably biases things toward some set of particular answers and away from others,
Well, yes — that’s what fitness functions do. No matter how broad the definition of fitness is, and regardless of whether there are many or few strategies that produce it, a fitness function will incline populations towards some strategies and not others. This is a trivial point that quite obviously proceeds from rudimentary facts. The fact that you’ve made it into an argument is… peculiar.
Given a conflict between what you say you mean and the meaning of what you said, I am not inclined to accept your assertions at face value.
simonh says
Paul W: If you’re interested in how some people avoid exploitation winning out over exploration, do a google for “island genetic algorithm”; there you have multiple, weakly interacting populations. It’s probably like the A-life thing you were thinking of.
Caledonian: I don’t know if you can really use the word “optimal” in relation to biology – all you can really say is that these things are sufficient. I think this is one of the big differences between natural and artificial evolution – in artificial evolution our fitness function pushes towards the optimal, but nature only seeks the sufficient.
Paul W. says
Caledonian, Caledonian, Caledonian. Sigh.
What I said was “He’s sorta right that fixed fitness functions pre-define a single gradient tilted toward right answers.”
Here’s the whole paragraph:
I think this all has something to do with Dembski’s gripes about targets, but I’m not sure exactly what. He’s sorta right that fixed fitness functions pre-define a single gradient tilted toward right answers. This makes them efficient in some sense—they don’t waste a lot of effort running off in other directions—but they often work in spite of it making them too greedy. Natural evolution is more robust because rich environments maintain diversity, so more novel solutions will be found.
I think it was clear to everyone else here what I did and didn’t mean by that, given the preceding and subsequent stuff about greediness of search, etc., which you ignore.
Given what I proposed, I am obviously not bothered by fixed, i.e., pre-coded, algorithmic fitness functions as Dembski is.
I am bothered by “fixed” fitness functions in a different, higher-level sense—what you might call “effective fitness functions,” which emerge from things like modeling predation or mutualism in some algorithms, or simply by the availability of alternate paths through a fixed space of sub-environments, as in my proposal.
There is no contradiction there. There really are ambiguities in words like “fixed” and “fitness function,” which you have to resolve by understanding the other stuff I said before and after what you latched onto. I said a lot of stuff, precisely so that people could clearly understand how to resolve those ambiguities and see what I actually meant, which is different from what Dembski means on several important counts.
In particular:
1. Use of a “fixed” (pre-coded, explicit) algorithmic fitness function does not in itself introduce an illegitimate bias or constitute “rigging” in any sense that precludes GA’s producing actual novelty.
2. Use of a “fixed” (uniform across time and environments) effective fitness function does introduce an important bias, but not of the kind Dembski needs for his argument.
3. That bias makes simple GA’s rather lame in a significant way. (I may have omitted the qualifier “simple” or “typical” in one case, which you latched onto, but it should be dead obvious what I meant despite that lapse. I described a GA without that bias, and explicitly said that GA’s “ought to be” different, so obviously I was referring to most current GA’s, or typical GA’s, or simple GA’s, or something, and obviously not referring to all possible GA’s. Score a niggling point for you, but duh!, Cal.)
Note that after the offending sentence in the paragraph above, I proceed to talk about efficiency vs. breadth of search. That’s my concern, get it? The kind of bias I’m talking about is domain-independent, though of course it affects which particular solutions are likely to be found given any particular problem posed. The latter is only trivially true, I agree, but it’s true. I acknowledge that trivial truth and its triviality, even if Dembski does not.
Now try to understand my use of the terms “uniform gradient” and a “tilt” toward “right answers.” Let’s invert our fitness function and think of “gradient descent” rather than “hill-climbing.” They’re equivalent and the former is more useful for understanding what I mean by a “single gradient” or “tilt.”
Suppose you have one of those little maze puzzles where you’re trying to roll a steel ball into a hole. If you simply tilt it toward the hole, is the ball likely to go into the hole? No, because it’s a maze. You have to tilt it first one way, then another, over and over, to get the ball to navigate the maze.
Now let’s make the analogy closer. Instead of one steel ball, you have a bunch. (I.e., a population of potential solutions.) You tilt the maze toward the hole. Do any of the balls go in the hole? Probably not. You shake the maze, generating random variants. Do any of the balls go in the hole? Probably not.
Now try something else. Tilt the maze one way and shake it. Then tilt it another way and shake it. Keep doing that. Do any of the balls go in the hole? Probably, because they spread out in different dimensions and find dissimilar pathways, some of which are likely to get around nontrivial obstacles that using a single-direction tilt toward the hole could not.
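The maze analogy can be run as a one-dimensional toy in Python (an invented illustration, with made-up numbers): a landscape with a low peak near the start and a high peak far away, separated by a flat dead zone. A fixed tilt toward fitness tends to strand the population on the near peak; a schedule that periodically re-tilts, alternately rewarding raw displacement and the landscape itself, lets lineages cross:

import random

def landscape(x):
    # Two peaks separated by a dead zone: a low one near the start at x = 2,
    # a high one far away at x = 12, and zero fitness in between.
    return max(0.0, 3 - abs(x - 2)) + max(0.0, 6 - abs(x - 12))

def evolve(schedule, generations=100, pop_size=40, sigma=0.3):
    population = [0.0] * pop_size
    for gen in range(generations):
        reward = schedule(gen)                       # the current "tilt"
        population = [x + random.gauss(0, sigma) for x in population]  # shake
        population.sort(key=reward, reverse=True)
        population = population[:pop_size // 2] * 2  # select and duplicate
    return max(landscape(x) for x in population)

def fixed(gen):
    return landscape          # always tilt straight toward the hole

def varied(gen):
    # Every 20 generations, alternate between rewarding raw displacement
    # (a sideways tilt) and rewarding the landscape itself.
    return (lambda x: x) if (gen // 20) % 2 else landscape

print("fixed tilt reaches fitness:", evolve(fixed))    # usually stuck near 3
print("varied tilt reaches fitness:", evolve(varied))  # usually finds the 6 peak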
My objection was never to having a pre-defined criterion for right answers, or to tilting the puzzle, or to tilting it toward the hole sometimes. Dembski may have problems with that, but I never have.
My objection is to tilting the puzzle consistently toward the hole and either
(1) expecting that the ball will go in the hole, i.e., overestimating simple GA’s search abilities, or
(2) thinking that this is representative of how natural evolution is supposed to work, and reveals a profound lameness in evolution by natural selection.
I think these things were obvious to everyone but you from my first posting. If you’d read and comprehended the stuff before and after the offending sentence, you’d realize that for Dembski’s argument, the key words in the sentence are “fixed” in the sense of pre-coded, and a “tilt” toward “right answers” in some sense that’s illegitimate. I think he’s wrong about all that.
My concerns are quite different. I am only concerned with the problem that fixed effective fitness functions introduce a single tilt directly toward a certain criterion of fitness (and indirectly toward bad local optima in the fitness landscapes). This isn’t illegitimate—it’s just lame. The implications of that are very different from what Dembski thinks, and funnier. (If even algorithms as lame as simple GA’s work as well as they in fact do, evolution must be a whole lot easier than Dembski wants to admit.)
Part of the problem here is that I was being a little bit funny, at first overstating the extent to which I think Dembski is “sorta right,” and proceeding to show that he’s almost completely wrong about everything. He’s only partly right about a couple of things, and only in a trivial sense which he conflates with vastly more interesting senses, so he’s wrong, wrong, wrong.
But please notice that even if we take the above-quoted paragraph out of context, it starts with three—count ’em, three—weakening qualifiers—that what I’m concerned with has “something to do with” what Dembski’s concerned with, but “I’m not exactly sure what,” and he’s (only) “sorta right that…”
Unfortunately, such subtleties tend to get lost when people get paranoid, and I’m beginning to think you are a bit paranoid. You’re intolerant of ambiguity, even when disambiguating information is right there, and gets repeated. You are too prone to interpreting things as stupidity or dishonesty, when a better explanation is available—e.g., that you’ve resolved an ambiguity incorrectly, or that somebody made an understandable and non-contemptible mistake, or misspoke, or something like that.
If you want to suggest that I’m stupid, ignorant, irrational, or dishonest—or state such things outright, as you’ve done before, you go right ahead.
But if you do, I’ll say this in all seriousness: Caledonian, I think you are a little bit paranoid; seek professional help. I’m tired of this.
Now if anybody besides Caledonian has a problem with what I wrote, I’d be happy to address their concerns.
Caledonian says
Translation: what I meant to say was perfectly coherent, so you should ignore what I said and just assume my statements make perfect sense.
Jonathan Badger says
I don’t know. Sure this is better than the “weasel” demo, but it still isn’t very much like real evolution. There is still a human defined fitness function “short is good”. Real natural selection simply is based on different levels of reproductive success. Could this be achieved by being short? Sure, but other “strategies” (such as being long, fat, thin. etc.) also might work – that’s why there’s diversity in life. Truly interesting simulations of evolution need to get rid of defined “fitness functions” and should let them simply develop fom the environment.
Caledonian says
“Sure this is better than the “weasel” demo, but it still isn’t very much like real evolution”
What the…?!
What would it take to be like ‘real evolution’ in your mind? Solving hydrated protein structures in realtime? There’s change in trait distribution over time — that’s evolution, period. That the change takes place because of selection by the environment of certain kinds of traits makes it a model of natural selection.
There have already been many simulations that don’t have explicitly defined fitness functions, and they’ve been very successful. Look for a Scientific American Frontiers segment entitled ‘Robot Independence’ from 12/17/2000 for a particularly lovely example.
Jonathan Badger says
No, it wouldn’t have to be that detailed biochemically , but it should involve competing for resources by any strategy — predation, parasitism, mutualism, etc. There are indeed systems (like Tierra and Avida) that are first stabs at this sort of thing, but they still have too much defined by the programmers. Still, they are more interesting than simply finding the shortest path according to a fitness function that rewardness “shortness”.
Caledonian says
As opposed to a fitness function that finds configurations that are best at replicating themselves according to a fitness function that rewards replication?
The applications of evolutionary algorithms to actual design tasks inevitably involves finding fitness functions that describe our desires and goals, then letting evolution proceed. How is that not interesting?
Keith Douglas says
There’s some work in the evolutionary game theory modelling community by Peter Danielson and others that does sort of illustrate competing for resources by different strategies. I think some of it might be a little oversimplistic for other reasons, but it is a start.
DonCulberson says
I had to laugh. The random quote that appeared along with this entry when I surfed in this morning was the one from Ken Miller on “Firing Line” about the theoretical deficiencies of those who argue against evolution. How apropos was that?
I have frequently noticed that the random quote generator seems more “intelligent” than the word “random” would imply. Design? “You be the judge!” LOL
Uncle Don
T_U_T says
.
If you make fitness implicit in the enviromnet, instead of explicit, it will surely make your evolutionary simulation more fancy, and the results more suprising, but the evolution of the system will be exactly the same. It is similar to defining explicitly X = 4 vs defining it implicitly, like 3X + 4 = 4X.
Jonathan Badger says
No, that’s circular. Evolutionary simulations use fitness functions to *determine* the probability of reproduction of a “genome” in the next generation.
Okay, perhaps I should rephrase my opinion to “not interesting from the standpoint of furthering our knowledge of biological evolution”. Such genetic algorithms might be indeed practical tools (although most textbooks on machine learning tend to favor Bayesian models over such biologically inspired algorithms such as neural nets and genetic algorithms).
Jonathan Badger says
No, I don’t think it is the same. For example, “Tierra” allows parasites to evolve. Traditional GAs don’t — how can the “evolution of the system” be the same in environments with and without parasites?
T_U_T says
That is because explicit fitness is usually defined as independent of other organisms in the system and implicit usually as dependent of. But this is not inevitable. You can also write implicit functions that ignore all other virtual critters and explicit functions that take them in account
Paul W. says
Jonathan,
I generally agree that typical genetic algorithms are not enough like natural evolution, in some sense, but there are several interacting dimensions that make it hard to interpret results, even without throwing in wildly nonlinear interactions like predation, parasitism, and mutualism.
One basic problem with simple GA’s is that they’re very greedy search algorithms—they tend to reward the first barely-viable or significantly-better solution and it outcompetes everything else after a relatively few generations. (Think MS-DOS and Windows.) There’s one population in an “everybody sleeps with everybody love heap” and all the progeny competing head-to-head.
That means that a genotype with a novel strategy is very likely to be weeded out before it has refined that strategy enough to be competitive; it’s like expecting babies to compete for food with teenagers.
If you look at real environments, there’s really a whole bunch of semi-isolated sub-environments with different fitness functions, so that novel solutions often don’t have to compete head-to-head with established winners. In a coral reef, for example, there are many different-sized spaces for small, medium, and large animals to occupy, without having to compete with much-larger animals, as well as various gradients. (Light, water velocity, nutrients, etc.)
Even without predation/parasitism/mutualism, and only looking at peer competition, this is important for making real evolution vastly less greedy than typical GA’s. Small, maneuverable species can thrive in networks of small spaces, protected from big fast ones which can’t get in at all, or can’t maneuver well enough. At some later point, these animals may evolve to be bigger, moving into larger and larger spaces and competing head-to-head with big animals that evolved by a different route. (Or the successful big animals may eventuallly get dwarfed and invade the smaller animals’ environments.)
It seems to me that there are effectively many “dimensions” like this that more-or-less segregate “farm teams,” acting as incubators for novel strategies which may eventually break out and compete in the larger arena. Even in environments that are less geographically or geometrically varied than a coral reef, there are lots of differences between places, e.g., simple gradients effectively making qualitatively different regimes. (E.g., in water, floating at the surface vs. resting on the bottom vs. swimming around in the middle.)
This makes me think that GA’s ought to typically be more like some alife environments, with semi-isolated subpopulations subject to different fitness functions that differ in several dimensions. So, for example, if you’re trying to evolve bots that are both fast and maneuverable some could thrive in an environment where speed is weighted more than maneuverability, while very different ones thrive in an environment where that bias is reversed.
(Imagine, say, a 5×5 grid of sub-environments, where one dimension is “speed” and the other is “agility”. Everybody starts in the slow/clumsy corner, but there are different routes to the fast/agile corner. Variants that get fast first would move along one dimension, and variants that get agile first would move along the other, only competing when they evolve along both dimensions and meet somewhere in the middle. Or maybe a species maintaining a reasonable balance would get there first, evolving along the diagonal without falling into the dead ends along the edges…)
I seem to recall reading (10 years ago?) about some a-life simulations with networks of semi-isolated subenvironments, but all with the same fitness function. (Simply deferring competition had a big effect on diversity and resulted in some neat graphics showing different color-coded variants in different regions, occasionally breaking out and sweeping through each others’ niches.) If nothing else, it was a vivid demonstration of how greedy a normal GA is by comparison, and demonstrated some speciation effects. Still, it seemed very greedy to me, because a fundamental problem is the uniformity of the fitness function. (I don’t know what’s been done along those lines since.)
One thing that struck me back then was that predation, parasitism and mutualism may often serve mainly to vary the fitness function over time, so that different evolutionary pathways are explored. They introduce big perturbations into the search and break a lot of dead-end solutions. Unfortunately, they also introduce all kinds of weird nonlinear effects, making the results difficult to understand.
I think it would be interesting to avoid those complexities and introduce simple variations in the fitness functions first, e.g., multiple gradients. (Maybe somebody’s done that by now… I haven’t kept up with GA’s or AL.)
I think this all has something to do with Dembski’s gripes about targets, but I’m not sure exactly what. He’s sorta right that fixed fitness functions pre-define a single gradient tilted toward right answers. This makes them efficient in some sense—they don’t waste a lot of effort running off in other directions—but they often work in spite of it making them too greedy. Natural evolution is more robust because rich environments maintain diversity, so more novel solutions will be found.
I think there are some deep issues there about what’s a realistic fitness function and what constitutes “rigging” one, especially with respect to the evolution of modules that will be useful in the long term. Light-sensing, for example, has a bunch of uses in a bunch of different environments. (Even without developing imaging vision, it can be useful in shade-seeking to avoid UV damage, in detecting whether a big predator is swimming over you, tracking what time of day it is to predict later temperature changes, etc.)
The utility of light sensing for all of these non-obvious purposes serves as a great big hint to a big dumb search algorithm that light can convey useful information. The dumb search algorithm doesn’t have to know that imaging vision is possible or desirable, because light is so good at conveying information that various cheesy hacks are simple and useful in many environments—even a very dumb algorithm will latch onto light-sensing.
But it seems to me the only rigging there, if any, is in the nature of 3D space and light, and randomly variations in environments creating various gradients and networks of sub-environments. The rest follows a la Darwin.
I guess that’s why God said “let there be light”—so that even a big dumb algorithm could mostly hill-climb very circuitiously up to us, and make Darwin right. (There is no God but God, and Darwin is clearly his greatest prophet.)
lt.kizhe says
I agree that GAs (specifically, Thomas’ example) don’t model evolution well, in that they focus narrowly and abstractly on one particular (albeit important) mechanism, and ignore the dynamics of a changing environment. However, that’s not really what the current argument with IDists is about: the argument is precisely about abstractions like whether evolution can add any information, or achieve anything we might call “design”. Basically, it’s just a new costume for the old Creationist claim that No Good Thing Can Come From Random Processes (gosh — IDists recycle old Creationist arguments? Who’d’ve thunkit?). And even simple, abstract GA examples like this one refute that claim handily (as indeed, does the entire field of stochastic optimization).
NatureSelectedMe says
I just want to say what a great insightful comment by Paul W. If this were Slashdot, it would be modded a 5. The only error I can see is when you said of Dembski He’s sorta right. Tsk, Tsk. That can never be. We’ll have to disregard your whole comment. :)
thwaite says
Dave Thomas’ explanation of GA is excellent (and I’ve seen other attempts – it’s not easy). Thanks PZ for highlighting it.
In addition to such formal studies, applications of GA-like discovery procedures to real-world problems like drug and other molecule design is not only closer to nature and less prone to charges of “predetermined results”, there’s probably some money to be made. A google on ‘evolutionary chemistry’ includes hits like this:
www . cropsolution . com/technology.html
>Cropsolution utilizes two primary technologies, Targeted Biologyâ¢
and Evolutionary Chemistryâ¢.
…
>Evolutionary Chemistry⢠projects at Cropsolution include a NSF-SBIR funded proof of concept experiment aimed at extending the capability of RNA catalysts to synthesize small molecules. This proof of concept was successful, demonstrating the ability of the technology to evolve RNA catalysts for the synthesis of sulfonylureas, an important class of herbicides and pharmaceuticals.
(I have no affiliation with cropSolution, and note their web page hasn’t been updated since 2004 or so.)
Caledonian says
Of course it’s circular! It doesn’t make any bit of difference whether the fitness function is explicitly defined, or arises as a consequence of a more complex environment. It is exactly the same as defining a set of algorithms that define the environment and letting *that* define the fitness function.
Caledonian says
Not only is Paul W. wrong for suggesting that Dembski is correct about something (tee hee!), he’s also more generally (and genuinely) wrong. There’s nothing about predefined fitness functions that inevitably leads to single gradients tilted towards particular answers.
We use predefined functions for demonstrations for several reasons. They accurately represent examples of genuinely evolutionary change, for one, and they’re great ways of showing that evolutionary processes can result in “specified information”. But more importantly, there simply is no difference in the results. It doesn’t matter if I generate a random environment that has a fitness space of a certain shape, or dictate the fitness space of the simulation by fiat. The evolutionary responses is the same.
Halfjack says
GAs aren’t generally used to answer questions about evolution or natural selection — they are an attempt to harness the process for actual problem solving. Of course reversing the purpose causes problems. Fitness functions are there as an improvement on real world evolution so that the system converges on a useful solution in the experimenter’s life time.
John Koza’s work at Stanford has found some startling solutions to analog circuit design with genertic programs. I think he may have stumbled on some legal problems too, given that his software can generate patent infringements without direction from a human. Is the patent valid if a machine can crunch it out?
Jonathan Badger says
That’s silly. Of course predefined fitness functions are tilted towards particular answers. Given enough time (and sufficient sources of variation, such as mutation and crossovers, to get over local optima), the “optimal” solution will be found. There is no “optimal” solution in real biology.
Jonathan Badger says
True, but even then they are not particularly powerful. Even bioinformaticians generally prefer HMMs and support vector machines over GAs when wanting to solve real world machine learning problems (like gene finding). Early work on gene finding did try GAs, but modern genefinders such as GENSCAN are almost exclusively HMM based.
I have to admit Koza’s work is more interesting than GAs. Genetic Programing actually generates programs in a Turing-complete language such as LISP. While he still applies predefined fitness functions, he is doing far more than optimizing parameters as in GAs.
Caledonian says
You’re leaving out the most important part of my objection — the part that, once removed, makes my objection invalid. Curious, that. A more suspicious person might attribute that to a hostile and dishonest frame of mind on your part. But as I’m all smiles and sunshine, I’m sure it was just a misunderstanding.
Predefined fitness functions do not inevitably lead to single gradients tilted towards particular answers. Obviously all interesting fitness functions will have non-trivial optima, but those optima do not need to be known explicitly (although of course they are defined by the function whether we know what they are or not. Functions can lead to complex and subtle fitness spaces in which there isn’t necessarily any single best answer.
There are indeed optimal solutions in real biology. It’s not a coincidence that sharks, dolphins, and whales all have extremely similar body designs. Convergent evolution is very real, and though we don’t yet know how similar two organisms must be in order for their evolutionary development to take similar paths, we know that it happens often.
It’s also worth pointing out that, although many fitness spaces in real biology change as the organisms that inhabit them change, this is not always the case. Finding effective ways to move through water, for example, depends almost entirely on the physical properties of water. Those do not change when ecosystems change. Even then, there is no “best” solution because organisms have different needs for motion. When we compare organisms that have similar needs, we find strikingly similar propulsion methods in many cases. There are indeed “best” local solutions, given organisms’ structures and substances.
Jonathan Badger says
1. This could only be true even in theory for cases in which there is only a single fitness function for all organisms.
2. In practice, you’d have to know a lot about the effect of the environment to come up with the same fitness function as the simulation would. What’s truly interesting about biological evolution is that, as far as we can tell, nobody with knowledge is defining fitness functions; it all emerges from the environment itself.
Caledonian says
1. Those are precisely the cases we are most interested in — those evolutionary algorithms are the ones we can use to actually produce results. Cases where fitness is determined by referencing the states of other, evolving entities might be fun for a game, but they’d have little practical purpose.
2. Not if it was a very simple environment. Such as, for example, that found in a primitive demonstration simulation designed to illustrate some basic concepts.
Jonathan Badger says
If you define “shortest path” as your fitness function, you *will* find the shortest path (unless your GA is poorly implemented, or you don’t let the program run long enough). The fact that there may be two (or more) equally short paths doesn’t really change the fact.
I speak here with two hats: one as a evolutionary genomicist who is interested in how genomic features evolved, and secondly as a bioinformatician who is interested in any machine learning technique that can help me get more out of genomic data. While I was fascinated by GAs when I first learned about them 20 years ago in high school after reading a column of A.K. Dewdney’s in _Scientific American_, as a practising scientist I really don’t see how they are particularly useful in either case. They *are* a good introduction to the idea of natural selection for students, I admit.
Caledonian says
Well, yeah. That’s the whole point of using evolutionary algorithms, isn’t it.
How interesting. While I am not a member of either field, I know quite a few electrical engineers and computer scientists who are very excited about using GAs to produce circuitry that is both efficient and fault-tolerant, something that human engineering simply isn’t good at. If you don’t see much application for improved computational systems in your fields, perhaps you ought to broaden your field of view a bit.
Torbjörn Larsson says
“There is still a human defined fitness function “short is good”.”
First a nitpick. There was also “not linking to stable nodes is bad.”
Essentially Thomas is doing a more realistic experiment than the WEASEL. It may be a loss to explain experimental setups to a creationist, but it is completely justified as such. Hopefully sane but uneducated minds will have easier to grasp that.
“They *are* a good introduction to the idea of natural selection for students, I admit.”
I think that is the only goal here. And these “students” doesn’t want to learn. Thomas article will be a good pointer to the usual creocrap about GAs. Until someone realises your suggestion, which is even better.
Unfortunately, as in evolution, the perfect solution to promote enlightenment and defeat politic agendas will never be achieved. Evolution simulators will probably always be attacked. Evolving fitness function programs will probably be quite complicated and have long codes, which are other obstacles.
“Even bioinformaticians generally prefer HMMs and support vector machines over GAs when wanting to solve real world machine learning problems”
Thanks for a valuable pointer!
Paul W. says
Caledonian,
I didn’t say or mean to imply that simple GA’s are biased toward particular answers in the sense that Dembski needs for his argument, which I agree is a load of crap.
I also agree that there’s nothing wrong with predefined fitness functions, unless you’re trying to demonstrate something about the emergence of fitness functions. (That’s why I proposed an algorithm that uses (a set of) predefined fitness functions.)
My criticism of simple GA’s is about a domain-independent bias in search structure. This inevitably biases things toward some set of particular answers and away from others, but that’s a far cry from what Dembski needs for his argument that it’s just sneaking the solution into the fitness function.
I do understand that despite their greediness, even simple GA’s can be useful and interesting; contrary to what Dembski says, they can produce true novelty. And of course picking on simple GA’s is a straw man in terms of showing what natural evolution is capable of; as I explained, it’s not nearly as greedy at bigger scales of space and time, and is therefore capable of finding tons of novelties that simple GA’s are not.
Sorry if my phrasing made it sound like I was agreeing with Dembski more than I was; I expected it to be obvious that I don’t really buy any of Dembski’s arguments. I think that if you fixed them up a bit, they’d lead to the opposite conclusions. (Hence my ironic but almost-serious comment to the effect that “Darwin was right, therefore God exists.”)
Caledonian says
True. You said that GA’s (nothing about ‘simple’) with predefined fitness functions have single gradients tilted towards particular answers.
That’s also a load of crap. Instead of defending the statement, you deny having said a superficially similar but almost completely different one. A more suspicious person might view that as suggestive of a dishonest and hostile mindset; fortunately I’m all smiles and sunshine, so I presume it was merely a misunderstanding on your part.
Well, yes — that’s what fitness functions do. No matter how broad the definition of fitness is, and regardless of whether there are many or few strategies that produce it, a fitness function will incline populations towards some strategies and not others. This is a trivial point that quite obviously proceeds from rudimentary facts. The fact that you’ve made it into an argument is… peculiar.
Given a conflict between what you say you mean and the meaning of what you said, I am not inclined to accept your assertions at face value.
simonh says
Paul W: If you’re interested in how some people avoid favouring exploitation over exploration, do a google for “island genetic algorithm”; there you have multiple, weakly interacting populations. It’s probably like the A-life thing you were thinking of.
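For anyone who googles and wants the gist without the papers, a bare-bones sketch of the island idea, with all names and numbers invented (real implementations vary a lot):

```python
import random

def island_ga(fitness, mutate, seed, n_islands=4, pop_size=30,
              keep=10, migrate_every=20, generations=200):
    """Several populations evolve independently; occasionally the best
    individual migrates to a neighbouring island. The weak coupling
    preserves diversity that a single greedy population would lose."""
    islands = [[mutate(seed) for _ in range(pop_size)]
               for _ in range(n_islands)]
    for gen in range(generations):
        for i, pop in enumerate(islands):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:keep]
            islands[i] = elite + [mutate(random.choice(elite))
                                  for _ in range(pop_size - keep)]
        if gen % migrate_every == 0:
            for i in range(n_islands):      # ring-topology migration
                islands[(i + 1) % n_islands].append(islands[i][0])
    return max((pop[0] for pop in islands), key=fitness)

# Toy usage: a deceptive 1-D landscape with a local peak at x=3
# and a better peak at x=8, separated by a fitness valley.
f = lambda x: -(x - 3) ** 2 if x < 5 else 10 - (x - 8) ** 2
m = lambda x: x + random.gauss(0, 0.5)
print(island_ga(f, m, seed=0.0))  # usually lands near 8, not stuck at 3
```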
Caledonian: I don’t know if you can really use the word “optimal” in relation to biology – all you can really say is that these things are sufficient. I think this is one of the big differences between natural and artificial evolution – in artificial evolution our fitness function pushes towards the optimal, but nature only seeks the sufficient.
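One way to picture the distinction simonh is drawing, as a sketch with invented names and thresholds:

```python
import random

def optimizing_selection(pop, fitness, k=10):
    """Artificial-GA style: rank everybody, keep only the k fittest."""
    return sorted(pop, key=fitness, reverse=True)[:k]

def sufficing_selection(pop, fitness, cutoff=0.0):
    """Closer to 'nature only seeks the sufficient': anything above
    the survival threshold persists, with no ranking among survivors."""
    return [x for x in pop if fitness(x) >= cutoff]

pop = [random.uniform(-1, 1) for _ in range(100)]
f = lambda x: x
print(len(optimizing_selection(pop, f)))  # always exactly 10
print(len(sufficing_selection(pop, f)))   # roughly 50, and unordered
```

The first keeps ratcheting toward the optimum no matter how good the population already is; the second stops discriminating once candidates are good enough.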
Paul W. says
Caledonian, Caledonian, Caledonian. Sigh.
What I said was “He’s sorta right that fixed fitness functions pre-define a single gradient tilted toward right answers.”
Here’s the whole paragraph:
I think it was clear to everyone else here what I did and didn’t mean by that, given the preceding and subsequent stuff about greediness of search, etc., which you ignore.
Given what I proposed, I am obviously not bothered by fixed, i.e., pre-coded, algorithmic fitness functions as Dembski is.
I am bothered by “fixed” fitness functions in a different, higher-level sense—what you might call “effective fitness functions,” which emerge from things like modeling predation or mutualism in some algorithms, or simply by the availability of alternate paths through a fixed space of sub-environments, as in my proposal.
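To make that higher-level sense concrete, here is a sketch of an effective fitness function. Nothing below scores an individual in isolation; payoffs depend entirely on who else is in the population, so the effective fitness landscape shifts as the strategy mix shifts. (These are textbook hawk-dove payoffs, not Paul W.’s actual proposal.)

```python
import random
from collections import Counter

PAYOFF = {("hawk", "hawk"): -1.0, ("hawk", "dove"): 4.0,
          ("dove", "hawk"):  0.0, ("dove", "dove"): 2.0}

def expected_payoff(strategy, pop):
    """Average payoff against a randomly drawn member of the population."""
    return sum(PAYOFF[(strategy, other)] for other in pop) / len(pop)

def next_generation(pop, size=200):
    # Reproduction weighted by payoff against the *current* population;
    # the small floor keeps all weights positive for random.choices().
    weights = [max(expected_payoff(s, pop), 0.01) for s in pop]
    return random.choices(pop, weights=weights, k=size)

pop = ["hawk"] * 20 + ["dove"] * 180
for _ in range(50):
    pop = next_generation(pop)
print(Counter(pop))  # hovers near the mixed equilibrium (about 2/3 hawks)
```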
There is no contradiction there. There really are ambiguities in words like “fixed” and “fitness function,” which you have to resolve by understanding the other stuff I said before and after what you latched onto. I said a lot of stuff, precisely so that people could clearly understand how to resolve those ambiguities and see what I actually meant, which is different from what Dembski means on several important counts.
In particular:
1. Use of a “fixed” (pre-coded, explicit) algorithmic fitness function does not in itself introduce an illegitimate bias or constitute “rigging” in any sense that precludes GA’s producing actual novelty.
2. Use of a “fixed” (uniform across time and environments) effective fitness function does introduce an important bias, but not of the kind Dembski needs for his argument.
3. That bias makes simple GA’s rather lame in a significant way. (I may have omitted the qualifier “simple” or “typical” in one case, which you latched onto, but it should be dead obvious what I meant despite that lapse. I described a GA without that bias, and explicitly said that GA’s “ought to be” different, so obviously I was referring to most current GA’s, or typical GA’s, or simple GA’s, or something, and obviously not referring to all possible GA’s. Score a niggling point for you, but duh!, Cal.)
Note that after the offending sentence in the paragraph above, I proceed to talk about efficiency vs. breadth of search. That’s my concern, get it? The kind of bias I’m talking about is domain-independent, though of course it affects which particular solutions are likely to be found given any particular problem posed. The latter is only trivially true, I agree, but it’s true. I acknowledge that trivial truth and its triviality, even if Dembski does not.
Now try to understand my use of the terms “uniform gradient” and a “tilt” toward “right answers.” Let’s invert our fitness function and think of “gradient descent” rather than “hill-climbing.” They’re equivalent, and the former is more useful for understanding what I mean by a “single gradient” or “tilt.”
Suppose you have one of those little maze puzzles where you’re trying to roll a steel ball into a hole. If you simply tilt it toward the hole, is the ball likely to go into the hole? No, because it’s a maze. You have to tilt it first one way, then another, over and over, to get the ball to navigate the maze.
Now let’s make the analogy closer. Instead of one steel ball, you have a bunch. (I.e., a population of potential solutions.) You tilt the maze toward the hole. Do any of the balls go in the hole? Probably not. You shake the maze, generating random variants. Do any of the balls go in the hole? Probably not.
Now try something else. Tilt the maze one way and shake it. Then tilt it another way and shake it. Keep doing that. Do any of the balls go in the hole? Probably, because they spread out in different dimensions and find dissimilar pathways, some of which are likely to get around nontrivial obstacles that a single, constant tilt toward the hole could not.
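The tilt-and-shake intuition is easy to put in code. A crude sketch, with the geometry and all the parameters made up: one wall with a gap in it, a population of balls, and the only difference between the two runs is whether the tilt direction varies.

```python
import random

def blocked(p, q):
    """One wall at x = 5, with a gap above y = 2."""
    (x0, y0), (x1, y1) = p, q
    if (x0 - 5) * (x1 - 5) < 0:            # the move crosses the wall
        t = (5 - x0) / (x1 - x0)
        return y0 + t * (y1 - y0) < 2      # below the gap: bounce back
    return False

def run(vary_tilt, steps=3000, n_balls=50, shake=0.3):
    balls = [(0.0, 0.0)] * n_balls         # the hole is off to the right
    for t in range(steps):
        if vary_tilt and (t // 200) % 2:
            tilt = (0.0, 0.1)              # periodically tilt *away*: upward
        else:
            tilt = (0.1, -0.02)            # straight toward the hole
        moved = []
        for (x, y) in balls:
            cand = (x + tilt[0] + random.uniform(-shake, shake),
                    y + tilt[1] + random.uniform(-shake, shake))
            moved.append((x, y) if blocked((x, y), cand) else cand)
        balls = moved
    return sum(1 for (x, _) in balls if x > 5)   # how many got past the wall

print(run(vary_tilt=False))  # constant tilt: balls pile up against the wall
print(run(vary_tilt=True))   # varying tilt: most find the gap and get through
```

The constant tilt is the single pre-defined gradient; the alternating tilt is the varied pressure, and it is the one that gets balls around the obstacle.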
My objection was never to having a pre-defined criterion for right answers, or to tilting the puzzle, or to tilting it toward the hole sometimes. Dembski may have problems with that, but I never have.
My objection is to tilting the puzzle consistently toward the hole and either
(1) expecting that the ball will go in the hole, i.e., overestimating simple GA’s search abilities, or
(2) thinking that this is representative of how natural evolution is supposed to work, and reveals a profound lameness in evolution by natural selection.
I think these things were obvious to everyone but you from my first posting. If you’d read and comprehended the stuff before and after the offending sentence, you’d realize that for Dembski’s argument, the key words in the sentence are “fixed” in the sense of pre-coded, and a “tilt” toward “right answers” in some sense that’s illegitimate. I think he’s wrong about all that.
My concerns are quite different. I am only concerned with the problem that fixed effective fitness functions introduce a single tilt directly toward a certain criterion of fitness (and indirectly toward bad local optima in the fitness landscapes). This isn’t illegitimate—it’s just lame. The implications of that are very different from what Dembski thinks, and funnier. (If even algorithms as lame as simple GA’s work as well as they in fact do, evolution must be a whole lot easier than Dembski wants to admit.)
Part of the problem here is that I was being a little bit funny, at first overstating the extent to which I think Dembski is “sorta right,” and proceeding to show that he’s almost completely wrong about everything. He’s only partly right about a couple of things, and only in a trivial sense which he conflates with vastly more interesting senses, so he’s wrong, wrong, wrong.
But please notice that even if we take the above-quoted paragraph out of context, it starts with three—count ’em, three—weakening qualifiers: that what I’m concerned with has “something to do with” what Dembski’s concerned with, but “I’m not exactly sure what,” and he’s (only) “sorta right that…”
Unfortunately, such subtleties tend to get lost when people get paranoid, and I’m beginning to think you are a bit paranoid. You’re intolerant of ambiguity, even when disambiguating information is right there, and gets repeated. You are too prone to interpreting things as stupidity or dishonesty, when a better explanation is available—e.g., that you’ve resolved an ambiguity incorrectly, or that somebody made an understandable and non-contemptible mistake, or misspoke, or something like that.
If you want to suggest that I’m stupid, ignorant, irrational, or dishonest—or state such things outright, as you’ve done before—you go right ahead.
But if you do, I’ll say this in all seriousness: Caledonian, I think you are a little bit paranoid; seek professional help. I’m tired of this.
Now if anybody besides Caledonian has a problem with what I wrote, I’d be happy to address their concerns.
Caledonian says
Translation: what I meant to say was perfectly coherent, so you should ignore what I said and just assume my statements make perfect sense.
Sure, Paul W. Sure.
Paul W. says
I’m glad we got that settled.