What properties should we expect from an evolved system rather than a designed one? Complexity is one; surprises are another. We should see features that baffle us and that don't make sense from a purely functional and logical standpoint.
That's also exactly what we see in systems shaped by artificial evolution. Adrian Thompson loaded randomized binary configurations onto Field-Programmable Gate Arrays (FPGAs), then selected for configurations that could recognize tones fed into them. After several thousand generations, he had FPGAs that would discriminate between two tones, or respond to the words "stop" and "go", by producing 0 or 5 volts. Then came the fun part: trying to figure out how the best-performing chip worked:
Dr. Thompson peered inside his perfect offspring to gain insight into its methods, but what he found inside was baffling. The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest — with no pathways that would allow them to influence the output — yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones. Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.
That looks a lot like what we see in developmental networks in living organisms: unpredictable results when pieces are "disconnected" or mutated, lots and lots of odd feedback loops everywhere, and sensitivity to specific conditions (although we also see selection for fidelity from generation to generation, more so than occurred in this exercise, I think). This is exactly what evolution does, producing functional complexity from random input.
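For the curious, here is roughly what that kind of evolutionary loop looks like in code. This is a minimal sketch in Python, not Thompson's actual setup: the genome length, population size, and fitness function are stand-ins, since his fitness evaluation meant loading each candidate bitstream onto a real chip and measuring how cleanly its output voltage separated the two tones.

```python
import random

GENOME_BITS = 1800        # illustrative bitstream length, not Thompson's exact figure
POPULATION = 50           # illustrative population size
MUTATION_RATE = 1.0 / GENOME_BITS   # expect roughly one flipped bit per offspring


def evaluate(genome):
    """Stand-in fitness. In the real experiment this step meant loading the
    bitstream onto the FPGA, playing the two tones, and scoring how well the
    output voltage distinguished them."""
    return sum(genome) / len(genome)  # placeholder score in [0, 1]


def mutate(genome):
    """Copy a genome, flipping each bit with a small probability."""
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]


def evolve(generations):
    """Random starting population, then repeated rounds of selection and mutation."""
    population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                  for _ in range(POPULATION)]
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)
        elite = population[:POPULATION // 5]              # keep the best fifth
        offspring = [mutate(random.choice(elite))
                     for _ in range(POPULATION - len(elite))]
        population = elite + offspring                    # next generation
    return max(population, key=evaluate)


if __name__ == "__main__":
    best = evolve(generations=100)   # a short run, for illustration only
    print("best fitness:", evaluate(best))
```

The important difference from this toy version is that Thompson's fitness scores came from the behavior of real silicon, which is presumably why the evolved configuration ended up exploiting physical quirks of that particular chip and failed to transfer to other FPGAs of the same type.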
I suppose it’s possible, though, that Michael Behe’s God also tinkers with electronics as a hobby, and applied his ineffably l33t hacks to the chips when Thompson wasn’t looking.