Jason Rosenhouse’s “Among the Creationists”

by Sean Pitman



Dr. Jason Rosenhouse is an associate professor of mathematics at James Madison University in Virginia. In his spare time, he enjoys engaging in the debate over Creation vs. Evolution and has even written an interesting book about his experiences visiting creation events around the United States, entitled “Among the Creationists.”

I personally enjoyed reading this book very much and found it to be very similar to my own experience discussing this topic with creationists and evolutionists alike. Most people on both sides of this issue simply aren’t very well informed about the arguments of the opposing side, or even of their own side of the debate. I think reading about these experiences helps those in both camps begin to understand how much we seem to be talking past each other, and to understand the mindset and motivations of those in the opposing camp.

I was asked to read Rosenhouse’s book by Harry Allen, host of “Nonfiction,” the on-air magazine of the arts, ideas, and fact in New York (airs Fridays, 2-3pm, on WBAI-NY/99.5 FM (wbai.org), flagship of the non-commercial Pacifica radio network). He wanted me to have an on-air debate with Dr. Rosenhouse regarding his book, the Evolution/Creation debate at large, and why I’m involved with this debate and have written my own book, “Turtles All the Way Down.” So, on Friday, January 17th, we did just that (see Link below).

Jason Rosenhouse vs. Sean Pitman


I personally enjoyed the discussion very much, and I think that Dr. Rosenhouse did as well (given his description of our debate on his own blog). As for a summary of our discussion, Mr. Allen started us off by asking us about ourselves, how we became interested in the Creation/Evolution debate, and why we wrote our books. He then asked me what I thought of Dr. Rosenhouse’s book and commented that he had also really enjoyed reading it. I explained that I really enjoyed reading “Among the Creationists,” that I read the whole book in one sitting, and that I agreed with most of what Dr. Rosenhouse had to say – at least 85% of it. Of course, Mr. Allen then asked me about the parts I didn’t agree with…


When I first started reading “Among the Creationists,” I thought the book was going to be more about the problems with standard Creationist arguments and the rational scientific reasons why evolution had to be true in light of these arguments. I was especially intrigued by the fact that Dr. Rosenhouse is a mathematician, and the cover of the book explained that creationists get into real trouble whenever they start a discussion with Dr. Rosenhouse within his own field of expertise – i.e., mathematical/probability arguments for intelligent design. However, as I read through the book, I was disappointed to discover that Dr. Rosenhouse had not included a single mathematical/probability argument in favor of the creative potential of the evolutionary mechanism of random mutations and natural selection. In fact, as is almost always the case for modern neo-Darwinists, he claimed, in his book and during our debate, that the modern Theory of Evolution is not dependent upon mathematical arguments or statistical analysis at all – at least with regard to the creative potential of random mutations and natural selection. In this, he seemed to argue that his own field of expertise is effectively irrelevant to the discussion – that “it is the wrong tool to use.” Beyond this, he also explained that he wasn’t a biologist or a geneticist and that any discussion of biology would require bringing in someone with more expertise and knowledge of the field than he had.

At this point I began to wonder why we were having a debate at all, if his own field of expertise was, according to him, effectively irrelevant to the conversation and he was not prepared to present arguments from biology or genetics regarding the main topic at hand – i.e., the potential and/or limits of the evolutionary mechanism of random mutations combined with natural selection.

Beneficial Sequences in Sequence Space Ocean

Perhaps this is the main reason why Dr. Rosenhouse started to get rather irritated and flustered in the second half of our debate, when I began asking him to explain how the evolutionary mechanism could reasonably and predictably do what he claims it did – i.e., actually come up with new functional systems beyond very low levels of functional complexity. To do this, of course, the evolutionary mechanism would have to get across an extremely vast ocean of non-beneficial sequence options at these higher levels of functional complexity in order to somehow find the very, very rare beneficial islands or “steppingstones” in an otherwise very empty and very large ocean (i.e., many times larger than the size of our entire universe). How could these extremely rare beneficial steppingstones actually be discovered in a timely manner? After all, as I explained during our discussion, the ratio of beneficial vs. non-beneficial sequences decreases exponentially with each increase in the level of functional complexity of the protein-based system in question – an observation that holds true for every language/information system known to us (including the English language, Russian, German, Italian, Morse Code, computer code, DNA codes, etc.). Given this exponential decline in the ratio of beneficial vs. non-beneficial sequences, how can random mutations continue to find, via a truly random search algorithm, qualitatively novel beneficial sequences at higher and higher levels of functional complexity within what anyone would consider a reasonable amount of time?
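To make the bookkeeping behind this claim concrete, here is a minimal toy model in Python (my own illustrative sketch, not something presented during the debate – the function name and parameter values are mine). Treat a protein as a string over the 20-letter amino-acid alphabet and suppose that a function at a given level of complexity requires some number k of specific residues to be fixed. The beneficial fraction of sequence space is then 20^-k, which falls exponentially as k grows. Real fitness landscapes are, of course, far messier than this; the numbers only illustrate the exponential trend.

```python
import math

# Toy model: a "beneficial" protein at a given complexity level is assumed to
# require k exactly-specified residues out of a 20-letter amino-acid alphabet.
# The beneficial fraction of sequence space is then 20**-k; we report its
# base-10 exponent to avoid floating-point underflow at large k.

def log10_beneficial_fraction(fixed_residues, alphabet_size=20):
    """log10 of the fraction of sequences carrying `fixed_residues`
    exactly-specified positions (fraction = alphabet_size ** -fixed_residues)."""
    return -fixed_residues * math.log10(alphabet_size)

for k in (5, 50, 500):
    exponent = log10_beneficial_fraction(k)
    print(f"{k:3d} constrained residues -> beneficial fraction ~ 10^{exponent:.0f}")
```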

In response to what I consider to be a fundamental challenge for evolutionary theory, Dr. Rosenhouse made a very interesting claim that I’ve heard only a couple of times before, in my debates a few years ago on Talk.Origins.org. Dr. Rosenhouse explained that while I was correct that the beneficial steppingstones in sequence space are very, very rare indeed, and that the ratio of potentially beneficial vs. non-beneficial sequences does in fact decrease exponentially with each increase in the level of functional complexity (i.e., the minimum structural threshold requirements, which include the minimum size and specificity requirements of more and more complex systems), this concept is entirely irrelevant to the potential for evolutionary progress. He argued that regardless of what the beneficial ratio might or might not be at a particular level of functional complexity, evolution could still make real progress in a reasonable amount of time (i.e., this side of trillions upon trillions of years). When I asked him to explain how this might be so, this is what he said in a nutshell:

“Because the very rare beneficial steppingstones are not randomly scattered around Lake Superior, but are lined up in a straight line one right after the other, it is easy to cross the lake from one shoreline to the other along a path of closely spaced steppingstones…”

It is in this way, Rosenhouse argued, that one can walk from one side of a massive lake or ocean, regardless of its size or the overall rarity of steppingstones, all the way to the other side without having to get into the water and blindly swim around at random in an effort to find another steppingstone. Therefore, the most important concept to understand is not the rarity of the steppingstones, but “how they are arranged in sequence space.” So, it’s the arrangement of the beneficial steppingstones, not their rarity, that is key to understanding the creative potential of the evolutionary mechanism. That is why, according to Rosenhouse, such statistical arguments have nothing to do with understanding the potential and/or limits of the evolutionary mechanism.
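The difference between these two pictures is easy to sketch. Below is a toy Monte Carlo illustration in Python (my own construction; the stone count, hop radius, and lake size are arbitrary made-up parameters). The same number of stones is either lined up in an evenly spaced chain across the lake or scattered uniformly at random, and we ask whether a chain of hops of length at most `reach` connects one shore to the other.

```python
import random
from collections import deque

def crossable(stones, reach, width):
    """True if hops of length <= `reach` link the left shore (x = 0) to the
    right shore (x = `width`) through the given stone coordinates."""
    start = [i for i, (x, _) in enumerate(stones) if x <= reach]
    goal = {i for i, (x, _) in enumerate(stones) if width - x <= reach}
    seen, queue = set(start), deque(start)
    while queue:
        i = queue.popleft()
        if i in goal:
            return True
        xi, yi = stones[i]
        for j, (xj, yj) in enumerate(stones):
            if j not in seen and (xi - xj) ** 2 + (yi - yj) ** 2 <= reach ** 2:
                seen.add(j)
                queue.append(j)
    return False

random.seed(0)
n, reach, width = 60, 2.0, 100.0

# Arrangement 1 (Rosenhouse's picture): stones in an evenly spaced chain
# straight across the lake; adjacent stones are ~1.6 units apart, within reach.
chain = [(width * (i + 1) / (n + 1), width / 2) for i in range(n)]
print("aligned chain crossable:", crossable(chain, reach, width))

# Arrangement 2: the same number of stones scattered uniformly at random.
trials = 200
hits = sum(crossable([(random.uniform(0, width), random.uniform(0, width))
                      for _ in range(n)], reach, width)
           for _ in range(trials))
print(f"uniform scatter crossable in {hits} of {trials} trials")
```

With these numbers the aligned chain is always crossable, while the uniform scatter essentially never is. The simulation merely formalizes the two pictures; the empirical question, addressed next, is which distribution real protein sequence space actually exhibits.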

This is a great argument!  Why didn’t I think of it before?  After all, this proposal does actually solve, quite nicely, the statistical problems for the evolutionary mechanism.  In fact, it’s such a neat solution to the problem that it seems downright obvious – especially from the perspective of a mathematician who has apparently had little exposure to the real world of biology or genetics.  Given some understanding of the real world of proteins and how the rare beneficial, or even merely stable, sequences are actually distributed in sequence space, one starts to see the real problem for Rosenhouse’s hypothesis – it just doesn’t match the real world.  Unfortunately, science isn’t based only on hypotheses and theories that work well on paper.  It is based on theories that actually represent empirical reality in a testable, potentially falsifiable manner.  Now, it isn’t that Rosenhouse’s hypothesis isn’t testable.  It is testable.  The problem is that it fails the test.  It simply doesn’t represent reality.

In the real world that exists outside of Rosenhouse’s very neat imagination, the distribution of potentially beneficial, or even stable, protein sequences within sequence space is known from experimental observation.  And, contrary to Dr. Rosenhouse’s wonderful solution to the problem, the distribution of real beneficial protein sequences is not a linear distribution, but a random, effectively uniform distribution that becomes more and more so with each step up the ladder of functional complexity.  At higher and higher levels of sequence space, as the total number of protein sequences within the space grows exponentially, the ratio of potentially beneficial vs. non-beneficial sequences decreases exponentially – and the distribution of these ever rarer beneficial steppingstones becomes more and more uniformly random in appearance.  It’s much like stretching a sticky sheet until it starts to get holes in it.  As the sheet stretches, the holes get exponentially bigger relative to the remaining material of the sheet.  Pretty soon, the bridges between areas of fabric start to break down and fall apart, leaving truly isolated islands of fabric (or “steppingstones”) that are not connected to any other island (for a more detailed discussion of this particular problem see: Link).
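The stretching-sheet picture can be given the same toy treatment (again my own sketch with arbitrary parameters, not data from protein experiments): hold the hop radius fixed while the same number of points is spread over a larger and larger area, and watch the largest connected cluster fragment into isolated islands.

```python
import random

# Toy sketch of the "stretching sheet": the same 200 points are spread over a
# larger and larger square while the hop radius stays fixed; we report what
# fraction of the points lies in the largest connected cluster.

def largest_cluster_fraction(n_points, reach, width):
    pts = [(random.uniform(0, width), random.uniform(0, width))
           for _ in range(n_points)]
    seen, best = set(), 0
    for i in range(n_points):
        if i in seen:
            continue
        stack, size = [i], 0
        seen.add(i)
        while stack:  # flood-fill one cluster
            a = stack.pop()
            size += 1
            xa, ya = pts[a]
            for b in range(n_points):
                if b not in seen:
                    xb, yb = pts[b]
                    if (xa - xb) ** 2 + (ya - yb) ** 2 <= reach ** 2:
                        seen.add(b)
                        stack.append(b)
        best = max(best, size)
    return best / n_points

random.seed(1)
for width in (10, 20, 40, 80):  # larger width = a more "stretched" sheet
    frac = largest_cluster_fraction(n_points=200, reach=2.0, width=width)
    print(f"width={width:3d}: largest cluster holds {frac:.0%} of the points")
```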

That, in a nutshell, is the fundamental problem for the theory of evolution – its mechanism simply isn’t capable of going beyond very low levels of functional complexity this side of a practical eternity of time (trillions upon trillions of years).  And the reason for this is the exponential decline in the ratio of potentially beneficial options that random mutations can find in a given span of time, combined with the non-linear, non-clustered distribution of these potentially beneficial sequences within sequence space.

This is a fundamental problem for evolution that natural selection simply can’t solve.  The reason for this is that, as explained during the debate, natural selection cannot look into the future.  It can only select, in a positive manner, what is working to some beneficial advantage right now.  Intelligence, on the other hand, can look into the future to see potential advantages to various combinations of sequences that natural selection could never envision – as Dr. Stephen Meyer explains:

“[Intelligent] agents can arrange matter with distant goals in mind. In their use of language, they routinely ‘find’ highly isolated and improbable functional sequences amid vast spaces of combinatorial possibilities.”

Stephen C. Meyer, “The Cambrian Information Explosion,” Debating Design, pg. 388 (Dembski and Ruse eds., Cambridge University Press 2004). 

What’s left at this point is very frustrating for evolutionists because, without a viable mechanism, pretty much all there is to prop up neo-Darwinism is what Rosenhouse referred to in our discussion as “circumstantial evidence” (35:34 in the discussion), including the very popular argument that complex biomachines (like the human eye or the bacterial flagellar motor) are poorly designed and full of “flaws” – that no one, certainly no omnipotent God, would have designed these machines with so many obvious flaws.  However, as far as I see things, the main problem with the “design flaw” arguments is that they don’t really have anything to do with explaining how the evolutionary mechanism could have done the job.  Beyond this, they don’t really rule out intelligent design either, because even if someone could make a design better, that doesn’t mean an inferior design was not the result of deliberate intelligence.  Lots of “inferior” human-designed systems are nonetheless intelligently designed.  Also, since when has anyone made a better human eye than what already exists?  It’s like my four-year-old son trying to explain to the head engineer of the team designing the Space Shuttle that he and his team aren’t doing it right.  I dare say that until Dr. Rosenhouse or Richard Dawkins can produce something as good or better themselves, it is the height of arrogance to claim that such marvelously and beautifully functional systems are actually based on “bad design.”

Beyond this, consider the fairly well-known arguments of Dr. Kenneth Miller (cell and molecular biologist at Brown University), who once claimed that the Type III secretory system (TTSS: a toxin injector responsible for some pretty nasty bacteria – like the ones responsible for the Bubonic Plague) is evidence against intelligent design.  How so?  Miller argued that the TTSS demonstrates how more complex systems, like the flagellar motility system, can be formed from simpler systems, like the TTSS, since its 10 protein parts are also contained within the 50 or so protein parts of the flagellar motility system.  The only problem with this argument, of course, is that it was later demonstrated that the TTSS toxin injector actually devolved from the fully formed flagellar system, not the other way around (Link).  So, as it turns out, Miller’s argument against intelligent design is actually an example of a degenerative change over time – i.e., a form of devolution, not evolution.  Of course, devolution is right in line with the predictions of intelligent design.  Consider, for example, that it is fairly easy to take parts away from a system while maintaining subsystem functionality.  It is another thing entirely to add parts to a system to achieve a qualitatively new, higher-level system of function that requires numerous additional parts to be in a specific arrangement relative to each other before the new function can be realized to any selectable advantage.  Such a scenario simply doesn’t happen beyond very low levels of functional complexity, because the number of non-selectable, non-beneficial modifications that would be required would take far, far too long – i.e., trillions upon trillions of years. (See the following video of a lecture I gave on this topic – starting at 27:00):


Granite cubes

Finally, of course, there is the standard argument that intelligent design isn’t science because it produces nothing useful – i.e., no useful predictions.  Does this mean that SETI isn’t a real scientific enterprise?  What about anthropology or forensic science, which are based on the scientific ability to detect deliberate design behind various artifacts found in nature?  How is it any different to apply the very same arguments used to detect design in these modern scientific disciplines in a universal manner?  And if artificial features are also found within the DNA and/or protein-based systems of living things, so be it!  How long does one have to look for a mindless natural mechanism to explain something like a highly symmetrical polished granite cube, even one found on an alien planet like Mars, before it is recognized as a true artifact of intelligent design – regardless of whether this conclusion is deemed “useful” by this or that mathematician or scientist?

In this light, consider the fairly recent confirmation of a long-standing prediction of intelligent design regarding the likely key functionality of portions of non-coding or “junk” DNA.  As it turns out, the more research that is done on non-coding DNA (DNA that does not code for proteins), the more functionality is discovered.  Such discoveries simply weren’t predictable from the Darwinian perspective, where many, such as Richard Dawkins in particular, had argued that non-coding DNA was evolutionary garbage or the remnant of past trials and errors.  In contrast, those favoring the intelligent design or even the creationist position had long argued that at least some proportion of non-coding DNA probably had beneficial functionality to one degree or another.  And it turns out that the predictions of intelligent design have proved true.  It seems that protein-coding genes are like the bricks and mortar for a house, while the blueprint for what type of house to build resides within the non-coding DNA.

Along these lines, many creationists have highlighted the fairly recent published claims from the ENCODE human genome project.  In 2012, the science journal Nature published a very interesting news item (“ENCODE: The human encyclopaedia,” Sept. 5, 2012) reporting on the ongoing human genome project called the “Encyclopedia of DNA Elements,” or ENCODE.  The ENCODE scientists made a very startling, and very controversial, claim – that at least 80% of our genome is functional to one degree or another!

Of course, many scientists have responded rather strongly against this 80% functionality number (Link).  And the truth of the matter is that, while non-coding DNA probably does represent the blueprint for higher organisms, directing to a significant degree how protein-coding DNA functions, this does not necessarily, or even likely, mean that most non-coding DNA is actually required or useful.  After all, some ferns and salamanders have genomes the same size as or smaller than the human genome, while other ferns and salamanders have genomes 50 times the size of the human genome.  Also, a Japanese pufferfish has a genome roughly 8 times smaller than the human genome (just 385 million base pairs compared to about 3 billion for humans), yet does just fine.  And all of these plants and animals have approximately the same number of genes.  Clearly, then, most non-coding DNA isn’t likely to be vital for life or even beneficially functional.  Yet, on the other hand, it does seem to be more important than the protein-coding genes themselves.

It seems, for instance, that it is the non-coding DNA that determines whether a mouse or a pig or a monkey or a human is to be built, given the very similar set of protein-coding genes shared by all of these different creatures (for further discussion of this topic see: Link).


“I think this will come to be a classic story of orthodoxy derailing objective analysis of the facts, in this case for a quarter of a century,” Mattick says. “The failure to recognize the full implications of this – particularly the possibility that the intervening noncoding sequences may be transmitting parallel information in the form of RNA molecules – may well go down as one of the biggest mistakes in the history of molecular biology.”

W. Wayt Gibbs, “The Unseen Genome: Gems Among the Junk,” Scientific American (Nov. 2003).

