Friday, July 28, 2006

Update on the Cornell Evolution and Design Seminar

Things have been developing in rather interesting ways in the Cornell "Evolution and Design" seminar. We have worked our way through all of the articles/papers and books in our required reading list, along with several in the recommended list. Before I summarize our "findings", let me point out that for most of the summer our seminar has consisted almost entirely of registered students (all undergraduates except for one Cornell employee taking the course for credit), plus invited guests (Hannah Maxson and Rabia Malik of the Cornell IDEA Club). Two other faculty members (Warren Allmon and Will Provine) attended for a while, but stopped in the middle of the second week, leaving me as the only faculty member still attending (not all that surprising, as it is my course after all; at this point, however, I view my job mostly as facilitator rather than teacher).

Anyway, here is how we've evaluated the books and articles/papers we've been "deconstructing":

Dawkins/The Blind Watchmaker: The "Weasel" example is unconvincing, and parts of the book are somewhat polemical, by which we mean that they substitute assertion, argument by analogy, argument from authority, and various other forms of non-logical argument for legitimate logical argument (i.e. argument based on the presentation and evaluation of evidence, especially empirical evidence). Dawkins' argument for non-teleological adaptation (the "as if designed" argument), although intriguing, seems to be supported mostly by assertion and abstract models, rather than by empirical evidence.
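For readers who have not seen it, the "Weasel" demonstration is easy to reproduce. Here is a minimal sketch of the kind of cumulative-selection algorithm Dawkins described (the mutation rate, population size, and elitism step are my own choices, not Dawkins'); note that the pre-specified TARGET string, the very feature the seminar found unconvincing, is plainly visible in the code:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    # Number of positions that already match the target phrase
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Copy the parent, replacing each character with probability `rate`
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(pop_size=100, seed=0):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Keep the best of the mutant offspring (plus the parent itself,
        # so progress toward the target is never lost)
        offspring = [mutate(parent) for _ in range(pop_size)] + [parent]
        parent = max(offspring, key=score)
        generations += 1
    return generations

print(weasel())
```

Cumulative selection of this kind converges in a few hundred generations at most, whereas random search over the whole phrase would take longer than the age of the universe; that contrast is Dawkins' point, and the hard-coded target is the critics' point.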

Behe/Darwin's Black Box: The argument for "irreducible complexity", while interesting, appears to leave almost all of evolutionary biology untouched. Behe's argument is essentially focused on the origin of life from abiotic materials, and arguments for the "irreducible complexity" of the genetic code and a small number of biochemical pathways and processes. Therefore, generalizing his conclusions to all of evolutionary biology (and particularly to descent with modification from common ancestors, which he clearly agrees is "strongly supported by the evidence") is not logically warranted. Attempts to make such extensions are therefore merely polemics, rather than arguments supported by evidence.

Dembski/The Design Inference and "Specification: The Pattern that Signifies Intelligence": Dembski's mathematical models are intriguing, especially his recent updating of the mathematical derivation of chi, his measure for "design" in complex, specified systems. However, it is not clear if empirical evidence (i.e. counted or measured quantities) can actually be plugged into the equation to yield an unambiguous value for chi, nor is it clear what value for chi would unambiguously allow for "design detection." Dembski suggests chi equal to or greater than one, but we agreed that it would make more sense to use repeated tests, using actual designed and undesigned systems, to derive an empirically based value for chi, which could then be used to identify candidates for "design" in nature. If, as some have suggested, plugging empirically derived measurements into Dembski's formula for chi is problematic, then his equation, however interesting, carries no real epistemic weight (i.e. no more than Dawkins's "Weasel", as noted above).
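For reference, the updated measure in Dembski's 2005 paper takes (as best I can render it here) the form:

```latex
\chi = -\log_2\!\left[\,10^{120}\cdot\varphi_S(T)\cdot P(T\mid H)\,\right]
```

where P(T|H) is the probability of the observed pattern T under the chance hypothesis H, phi_S(T) counts the "specificational resources" (patterns at least as simple as T available to a semiotic agent S), and 10^120 is Dembski's bound on the number of bit operations possible in the observable universe. The seminar's difficulty, noted above, is whether phi_S(T) and P(T|H) can actually be measured for any real biological system.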

Johnson/The Wedge of Truth: To my surprise, both the ID supporters and critics in the class almost immediately agreed that Johnson's book was simply a polemic, with no real intellectual (and certainly no scientific) merit. His resort to ad hominem arguments, guilt by association, and the drawing of spurious connections via arguments by analogy were universally agreed to be "outside the bounds of this course" (and to exceed in some cases Dawkins' use of similar tactics), and we simply dropped any further consideration of it as unproductive. Indeed, one ID supporter stated quite clearly that "this book isn't ID", and that the kinds of assertions and polemics that Johnson makes could damage the credibility of ID as a scientific enterprise in the long run.

Ruse/Darwin and Design (plus papers on teleology in biology by Ayala, Mayr, and Nagel): Both ID supporters and evolution supporters quickly agreed that all of these authors make a convincing case for the legitimacy of inferring teleology (or what Mayr and others call “teleonomy”) in evolutionary adaptations. That is, adaptations can legitimately be said to have “functions,” and that the genomes of organisms constitute “designs” for their actualization, which is accomplished via organisms' developmental biology interacting with their environments.

Moreover, we were able to come to some agreement that there are essentially two different types of “design”:

Pre-existing design, in which the design for an object/process is formulated prior to the actualization of that object/process (as exemplified by Mozart’s composing of his final requiem mass); note that this corresponds to a certain extent with what ID supporters are now calling “front-loaded design”, and

Emergent design, in which the design for an object/process arises out of a natural process similar to that by which the actualization takes place (as exemplified by Mayr’s “teleonomy”).

In addition, the ID supporters in the seminar class agreed that “emergent design” is not the kind of design they believe ID is about, as it is clearly a product of natural selection. A discussion of “pre-existing design” then ensued, going long past our scheduled closing time without resolution. We will return to a discussion of it for our last two meetings next week.

As we did not use the two days scheduled for “deconstruction” of Johnson’s Wedge of Truth, we opened the floor to members of the class to present rough drafts/outlines of their research papers for the course. It is interesting to note that both papers so presented concerned non-Western/non-Christian concepts of “design” (one focusing on Hindu/Indian and Chinese concepts of teleology in nature, and the other on Buddhist concepts of design and naturalistic causation).

Overall, the discussion taking place in our seminar classes has been both respectful and very spirited, as we tussle with difficult ideas and arguments. For my part, I have come to a much more nuanced perception of both sides of this issue, and to a much greater appreciation of the difficulties involved with coming to conclusions on what is clearly one of the core issues in all of philosophy. And, I believe we have all come to appreciate each other and our commitments to fair and logical argument, despite our differences…and even to have become friends in the process. What more could one ask for in a summer session seminar?


Thursday, July 13, 2006

D'Arcy Thompson and "Front-Loaded" Intelligent Design



AUTHOR: Salvador Cordova

SOURCE: Marsupials and placentals: A case of front-loaded, pre-programmed, designed evolution?

COMMENTARY: Allen MacNeill

The concept of "front-loading" as described in Salvador Cordova's post at Telic Thoughts bears a remarkable resemblance to the ideas of the Scottish biomathematician D'Arcy Thompson (1860-1948). In his magnum opus, On Growth and Form, Thompson proposed that biologists had over-emphasized evolution (and especially natural selection) and under-emphasized the constraints and parameters within which organisms develop, constraints that "channel" animal forms into particular patterns that are repeated over and over again across the phyla.

However, while Thompson's ideas strongly imply that there is a kind of teleology operating at several levels in biology (especially developmental biology), Thompson himself did not present hypotheses that were empirically testable (sound familiar?):

Thompson did not articulate his insights in the form of experimental hypotheses that can be tested. Thompson was aware of this, writing that "this book of mine has little need of preface, for indeed it is 'all preface' from beginning to end."

Thompson's huge book (over 1,000 heavily illustrated pages) is a veritable gold mine of ideas along the lines articulated in Sal's post. However, Thompson's underlying thesis is just as inimical to ID as is the explanation from evolutionary biology. His argument is essentially that biological form is constrained by the kind of mathematical relationships that characterize classical physics. That is, there are "built-in" laws of form that constrain the forms that biological organisms can take. And therefore, physical law provides the “front-loading”, not a supernatural “intelligent designer.”

For example, Thompson pointed out that the shapes that droplets of viscous liquid take when dropped into water are virtually identical to the medusa forms of jellyfish, and that this "convergence of form" is therefore not accidental. Rather, it is fundamentally constrained by the physics of moving fluids, as described by the equations of fluid mechanics. Thompson's book is filled with similar examples, all pointing to the same conclusion: that biological form is constrained by the laws of physics (especially classical mechanics).

Evolutionary convergence, far from departing from Thompson's ideas, is based on essentially the same kinds of constraints. Sharks, dolphins (the fish, not the mammals), tunas, ichthyosaurs, and porpoises all appear superficially similar (despite significant anatomical differences) because their external shapes are constrained by the fluid medium through which they swim. In the language of natural selection, any ancestor of a shark, dolphin, tuna, ichthyosaur, or porpoise that (through its developmental biology) could take the shape of a torpedo could move more efficiently through the water than one that had a different (i.e. less efficient) shape, and therefore would have a selective advantage that would, over time, result in similar shapes among its proliferating descendants. The same concept applies to the parallel evolution of marsupial and placental mammals: similar environments and subsistence patterns place similar selective constraints on marsupial and placental mammals in different locations, resulting in strikingly similar anatomical and physiological adaptations, despite their distantly related ancestry.

This evolutionary argument is now being strongly supported by findings in the field of evolutionary developmental biology ("evo-devo"), in which arguments based on "deep homology" are providing explanations for at least some of the seemingly amazing convergences we see in widely separated groups of organisms. Recent discoveries about gene regulation via hierarchical sets of regulatory genes indicate that these genes have been conserved through deep evolutionary time, from the first bilaterally symmetric metazoans to the latest placental mammals, as shown by their relative positions in the genome and relatively invariant nucleotide sequences. These genes channel the arrangement of overall anatomy and body form throughout the course of development, producing the overall shapes of organisms and the relationships between body parts that we refer to when discussing evolutionary convergence.

However, as should be obvious by now, this in no way provides evidence for the currently popular ID hypothesis of "front-loading", except insofar as it indicates that the hierarchical control of overall development evolved very early among the metazoa. It provides no empirically testable way to distinguish between an evolutionary explanation and a "design" explanation. Indeed, all of the evidence to date can be explained using either theory.

And so, by the rules of empirical science, since the evolutionary explanation is sufficient to explain the phenomena and does not require causes outside of nature (i.e. a supernatural designer, which is neither itself natural nor works through natural – i.e. material and efficient – causes), evolutionary biologists are fully justified in accepting the evolutionary explanation (and disregarding the “front-loaded ID” explanation).

Only in the case that the kinds of natural causes described above (especially the ability of evo-devo processes to constrain the development of overall form via purely natural means via the known biochemistry of development) can NOT explain the patterns we observe in convergent evolution should we entertain other hypotheses (especially if those other hypotheses are not empirically testable). Only then, and not before…and therefore certainly not now.

FOR FURTHER READING:

For more on Thompson and his work, see:
http://www.google.com/search?hl=en&q=D%27Arcy+Thompson&btnG=Google+Search
and especially:
http://www-history.mcs.st-andrews.ac.uk/Mathematicians/Thompson_D'Arcy.html
and follow the links at:
http://en.wikipedia.org/wiki/D'Arcy_Thompson

Also, a thread that included a discussion of Thompson's work has already appeared at Telic Thoughts http://telicthoughts.com/?p=763

--Allen


Thursday, July 06, 2006

Doggies are Better than Weasels



AUTHOR: Dave Thomas

SOURCE: Target? We don't need no stinking target!

COMMENTARY: Allen MacNeill

Over at The Panda’s Thumb, Dave Thomas has posted the results of another computer simulation of natural selection, this time applied to the classical “Traveling Salesman” problem. No, that isn’t the lead-in to an old dirty joke; it’s a classic problem in optimization. The basic idea is to calculate the shortest possible route for a traveling salesman to follow when visiting more than three cities (i.e. sales territories). When there are only two cities, the solution is obvious to anyone with a knowledge of Euclidean geometry: a straight line connecting the two cities. However, as more cities are added, the number of possible routes grows factorially, making calculation of optimal pathways extraordinarily difficult.
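To see just how quickly the search space explodes, note that a closed tour of n cities has (n-1)!/2 distinct orderings (fixing the starting city and ignoring direction of travel). A few lines of Python (my illustration, not from Thomas's post) make the growth concrete:

```python
import math

def distinct_tours(n):
    # Distinct closed tours on n cities, fixing the start city
    # and counting a route and its reversal as the same tour
    return math.factorial(n - 1) // 2

for n in (5, 10, 20):
    print(n, distinct_tours(n))
# 5 cities already allow 12 distinct tours; 20 cities allow roughly 6 x 10^16
```

Exhaustively checking every tour is therefore hopeless beyond a couple of dozen cities, which is why heuristic methods (including selection-based ones) are of such interest.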

This is where Dave Thomas (and a dish of soap bubbles) comes in. In his post, Thomas first shows the classical solution to a five-node Steiner problem (a close relative of the TSP), as demonstrated by the Swiss mathematician Jakob Steiner. He then illustrates the optimal solution physically, using free-standing posts and soap films, which settle into the minimal network connecting the posts.

Thomas then goes on to formulate a “solution engine” for higher-order Steiner problems (i.e. with more than five asymmetrically placed nodes), using natural selection operating on a computer-generated “TSP solver.” The results are truly astonishing: although the theoretical number of possible solutions is fantastically large, the TSP solver using simple natural selection (call it the NS_TSPS) found several optimal solutions with amazing speed. The same thing happened when Thomas tested the computer-generated solutions using soap films. Indeed, he was able to show that the NS_TSPS was actually more efficient at finding solutions than the soap film generator, a result that surprised him (and most of the commentators on the Thumb). One of the soap-film solutions took the shape of a “doggie,” a solution that the NS_TSPS didn’t find. Thomas was able to show that, although the soap-film “doggie” was stable, it was sub-optimal compared with an alternative solution generated by the NS_TSPS (hence the title of this post).

Why is all of this important, in the context of the ongoing debate over design in nature, as exemplified by Richard Dawkins' book The Blind Watchmaker? Because, unlike Dawkins’ WEASEL program, which used a pre-specified “target,” thereby opening his model to accusations that it simply “found” a pre-specified outcome (and was therefore actually an example of “intelligent design”), the NS_TSPS had no pre-specified solution at all, and found the optimal solutions the same way natural selection “finds” them in the wild: by simple trial and error, combined with preservation of partially successful outcomes.

In other words, the objections that some of us had to Dawkins’ WEASEL program have been addressed in Thomas’ NS_TSPS, and natural selection has been shown once again to be all that is necessary to “find” an optimal solution to a “problem,” even in the absence of a pre-specified outcome.
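To make the distinction concrete, here is a minimal target-free selection sketch (my own toy example, far simpler than Thomas's actual solver). Tours are scored only by their total length; no "correct" route appears anywhere in the program, and the city coordinates are hypothetical:

```python
import math
import random

# Hypothetical city coordinates, for illustration only
CITIES = [(0, 0), (1, 5), (3, 1), (5, 4), (6, 0), (2, 3), (4, 6)]

def tour_length(tour):
    # Total round-trip distance when cities are visited in the given order
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def mutate(tour):
    # Swap two randomly chosen cities: a "point mutation" on the route
    a, b = random.sample(range(len(tour)), 2)
    child = list(tour)
    child[a], child[b] = child[b], child[a]
    return child

def evolve(pop_size=50, generations=300, seed=0):
    random.seed(seed)
    pop = [random.sample(range(len(CITIES)), len(CITIES))
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection acts on length alone: the shorter half survives,
        # and mutated copies of survivors refill the population
        pop.sort(key=tour_length)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=tour_length)

best = evolve()
print(best, tour_length(best))
```

The "fitness function" here measures a property of each candidate (its length), which is exactly analogous to an organism's reproductive success in its environment; nowhere is a particular winning tour written into the code.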

This is important to the ongoing discussion about design in nature for several reasons:

• It decisively undercuts the objection commonly voiced by advocates of ID that all simulations of natural selection are actually simulations of ID, because they all include pre-specified “target” outcomes.

• It shows the extraordinary (and somewhat counterintuitive) power of natural selection to “find” adaptive optima, even in the absence of pre-specified solutions.

• It reinforces a finding that has been emerging from research into computerized “genetic algorithms”: that selection processes incorporating non-directed natural selection can find solutions to problems that are highly resistant to more “classical” targeted computation.

• It demonstrates as empirically false the common assertion by ID theorists that ID theory is logically necessary as an alternative to evolutionary theory because the latter supposedly cannot solve such optimization problems in real time. That is, ID theory isn’t necessary to explain adaptation, even in cases where the computation of adaptive optima appears to be beyond the capability of any real-time computing system.

And this, in turn, emphasizes the point that I have made in several other posts to this blog: rather than being a logically necessary alternative to evolutionary theory, ID theory is a logically unnecessary addition to standard evolutionary theory, and one that, furthermore, is not supported by the empirical evidence.

FOR FURTHER READING:

There are other simulations of evolution by natural selection that are immune to the common objections voiced by ID theorists. To learn more about the most powerful one developed to date, check out Avida.

--Allen
