INTRODUCTION AND SUMMARY: The scientific study of our origins probes the roots of our existence.
That makes origins science a highly important -- albeit sometimes controversial -- field of study and research, especially on the live-wire topic of evolution. So, let us survey, starting with:
A: Methodological Naturalism (& Evolutionary Materialism) vs. empirical signs of Nature vs. Art:
. . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated . . . [[Richard Lewontin, "Billions and billions of demons," NYRB, Jan 1997.]
That is, before the facts are allowed to speak for themselves, such evolutionary materialist thinkers hold that the only acceptable origins science theories are those designed to fit the view that undirected, blindly mechanical forces of nature and chance circumstances -- acting on matter and energy in one form or another -- triggered purposeless changes and developments across time, and are adequate to explain the world of life. In many cases, they may even assert that anything questioning such a view or its assumptions "is not science."
Which, to such minds, is close to saying: nonsense.
But, it should be clear that some very big assumptions are being made; assumptions that -- on their face -- could easily bias or even warp attempts to sincerely find out what really happened in the deep past of origins.
And, is it not reasonable that science should seek to discover and provide good observational evidence and objectively unbiased explanations about what really happens in our world -- and (so far as that is possible) about what really happened in the remote past of origins, without a priori ideological blinkers?
This is in effect what ID thinkers and scientists have argued in recent years.
In simple terms, noted ID scientist William Dembski argues:
We know from experience that intelligent agents build intricate machines that need all their parts to function [[--> i.e. he is specifically discussing "irreducibly complex" objects, structures or processes for which there is a core group of parts all of which must be present and properly arranged for the entity to function (cf. here, here and here)], things like mousetraps and motors. And we know how they do it -- by looking to a future goal and then purposefully assembling a set of parts until they’re a working whole. Intelligent agents, in fact, are the one and only type of thing we have ever seen doing this sort of thing from scratch. In other words, our common experience provides positive evidence of only one kind of cause able to assemble such machines. It’s not electricity. It’s not magnetism. It’s not natural selection working on random variation. It’s not any purely mindless process. It’s intelligence . . . .
When we attribute intelligent design to complex biological machines that need all of their parts to work, we’re doing what historical scientists do generally. Think of it as a three-step process: (1) locate a type of cause active in the present that routinely produces the thing in question; (2) make a thorough search to determine if it is the only known cause of this type of thing; and (3) if it is, offer it as the best explanation for the thing in question.
[[William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, pp. 20-21, 53 (InterVarsity Press, 2010). HT, CL of ENV & DI.]
Philosopher of science Stephen Meyer argues the same point in more detail in his response to a hostile review of his key 2009 design theory book, Signature in the Cell:
The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question . . . . In order to [[scientifically refute this inductive conclusion] Falk would need to show that some undirected material cause has [[empirically] demonstrated the power to produce functional biological information apart from the guidance or activity of a designing mind. Neither Falk, nor anyone working in origin-of-life biology, has succeeded in doing this . . . .
He then gives even more details, with particular reference to the origin of cell-based life:
The central problem facing origin-of-life researchers is neither the synthesis of pre-biotic building blocks (which Sutherland’s work addresses) nor even the synthesis of a self-replicating RNA molecule (the plausibility of which Joyce and Tracey’s work seeks to establish, albeit unsuccessfully . . . [[Meyer gives details in the linked page]). Instead, the fundamental problem is getting the chemical building blocks to arrange themselves into the large information-bearing molecules (whether DNA or RNA) . . . .
For nearly sixty years origin-of-life researchers have attempted to use pre-biotic simulation experiments to find a plausible pathway by which life might have arisen from simpler non-living chemicals, thereby providing support for chemical evolutionary theory. While these experiments have occasionally yielded interesting insights about the conditions under which certain reactions will or won’t produce the various small molecule constituents of larger bio-macromolecules, they have shed no light on how the information in these larger macromolecules (particularly in DNA and RNA) could have arisen. Nor should this be surprising in light of what we have long known about the chemical structure of DNA and RNA. As I show in Signature in the Cell, the chemical structures of DNA and RNA allow them to store information precisely because chemical affinities between their smaller molecular subunits do not determine the specific arrangements of the bases in the DNA and RNA molecules. Instead, the same type of chemical bond (an N-glycosidic bond) forms between the backbone and each one of the four bases, allowing any one of the bases to attach at any site along the backbone, in turn allowing an innumerable variety of different sequences. This chemical indeterminacy is precisely what permits DNA and RNA to function as information carriers. It also dooms attempts to account for the origin of the information—the precise sequencing of the bases—in these molecules as the result of deterministic chemical interactions . . . .
[[W]e now have a wealth of experience showing that what I call specified or functional information (especially if encoded in digital form) does not arise from purely physical or chemical antecedents [[--> i.e. by blind, undirected forces of chance and necessity]. Indeed, the ribozyme engineering and pre-biotic simulation experiments that Professor Falk commends to my attention actually lend additional inductive support to this generalization. On the other hand, we do know of a cause—a type of cause—that has demonstrated the power to produce functionally-specified information. That cause is intelligence or conscious rational deliberation. As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process. Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence. This conclusion is not based upon what we don’t know. It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information . . . .
[[In conclusion,] it needs to be noted that the [[now commonly asserted and imposed limiting rule on scientific knowledge, the] principle of methodological naturalism [[ that scientific explanations may only infer to "natural[[istic] causes"] is an arbitrary philosophical assumption, not a principle that can be established or justified by scientific observation itself. Others of us, having long ago seen the pattern in pre-biotic simulation experiments, to say nothing of the clear testimony of thousands of years of human experience, have decided to move on. We see in the information-rich structure of life a clear indicator of intelligent activity and have begun to investigate living systems accordingly. If, by Professor Falk’s definition, that makes us philosophers rather than scientists, then so be it. But I suspect that the shoe is now, instead, firmly on the other foot. [[Meyer, Stephen C: Response to Darrel Falk’s Review of Signature in the Cell, SITC web site, 2009. (Emphases and parentheses added.)]
Thus, in the context of a pivotal example -- the functionally specific, complex information stored in the well-known genetic code -- we see laid out the inductive logic and empirical basis for design theory as a legitimate (albeit obviously controversial) scientific investigation and conclusion.
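As a side numerical illustration (not drawn from the cited texts), the scale of the configuration space behind this point can be checked with simple arithmetic: since any of the four bases G/C/A/T may occupy each site along a DNA string, an n-base string has 4^n possible sequences. A minimal Python sketch, with 300 bases chosen purely as an arbitrary example length:

```python
# Each site in a DNA string can hold any of the 4 bases (G, C, A, T),
# so a string of n sites has 4**n distinct possible sequences.
def dna_sequence_space(n_bases: int) -> int:
    """Count the distinct sequences available to a string of n_bases sites."""
    return 4 ** n_bases

# Illustrative example: a short 300-base stretch (an arbitrary length).
n = 300
space = dna_sequence_space(n)
digits = len(str(space))  # number of decimal digits in the count
print(f"4^{n} has {digits} digits, i.e. about 10^{digits - 1} sequences")
# prints: 4^300 has 181 digits, i.e. about 10^180 sequences
```

The point being illustrated is only the size of the space of possibilities; which specific sequences are functional is of course a separate, empirical question.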
It is worth pausing to lay out (courtesy of the US NIH) a diagram of what is at stake here:
|Fig I.0: DNA as a stored code exhibiting functionally specific complex digital information (HT: NIH)|
In this context, to understand the kind of scientific reasoning involved and its history, it is also worth pausing to excerpt Newton's Rules of [[Inductive] Reasoning in [[Natural] Philosophy, which he used to introduce the Universal Law of Gravitation. In turn, this -- then controversial (action at a distance? why? . . . ) -- law was in effect generalised from the falling of apples on Earth, through the deduced rule that also explained the orbital force of the Moon, and thence Kepler's mathematically stated empirical laws of planetary motion.
So, Newton needed to render plausible how he projected universality:
Rule I [[--> adequacy and simplicity]
We are to admit no more causes of natural things than such as are both true [[--> it is probably best to take this liberally as meaning "potentially and plausibly true"] and sufficient to explain their appearances.
To this purpose the philosophers say that Nature does nothing in vain, and more is in vain when less will serve; for Nature is pleased with simplicity, and affects not the pomp of superfluous causes.
Rule II [[--> uniformity of causes: "like forces cause like effects"]
Therefore to the same natural effects we must, as far as possible, assign the same causes.
As to respiration in a man and in a beast; the descent of stones in Europe and in America; the light of our culinary fire and of the sun; the reflection of light in the earth, and in the planets.
Rule III [[--> confident universality]
The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
For since the qualities of bodies are only known to us by experiments, we are to hold for universal all such as universally agree with experiments; and such as are not liable to diminution can never be quite taken away. We are certainly not to relinquish the evidence of experiments for the sake of dreams and vain fictions of our own devising; nor are we to recede from the analogy of Nature, which is wont to be simple, and always consonant to [398/399] itself . . . .
Rule IV [[--> provisionality and primacy of induction]
In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.
This rule we must follow, that the arguments of induction may not be evaded by [[speculative] hypotheses.
This leads to the way that -- through a form of induction known as abductive inference to best explanation (cf. here for basics, and here for more complex details and issues) --
(a) certain explanatory hypotheses, models and theories [[E/M/T] can be identified as "best" current explanations of bodies of observed facts F1, F2, . . . Fn, and
(b) such E/M/T's can then be onwards correlated with bodies of accepted theories in science [[BOATs], leading to the onward development of science.
An informational graphic will help us clarify:
Since inductive reasoning -- roughly: arguments where givens or observed "facts" in evidence render a conclusion significantly more likely to be true [[cf. here and here] -- leading to provisionally inferred general patterns, is itself now deemed suspect in some quarters, it may help to note the following from Avi Sion, on what he descriptively calls the principle of universality:
We might . . . ask – can there be a world without any ‘uniformities’? A world of universal difference, with no two things the same in any respect whatever is unthinkable. Why? Because to so characterize the world would itself be an appeal to uniformity. A uniformly non-uniform world is a contradiction in terms.
Therefore, we must admit some uniformity to exist in the world.
The world need not be uniform throughout, for the principle of uniformity to apply. It suffices that some uniformity occurs.
Given this degree of uniformity, however small, we logically can and must talk about generalization and particularization. There happens to be some ‘uniformities’; therefore, we have to take them into consideration in our construction of knowledge. The principle of uniformity is thus not a wacky notion, as Hume seems to imply . . . .
The uniformity principle is not a generalization of generalization; it is not a statement guilty of circularity, as some critics contend. So what is it? Simply this: when we come upon some uniformity in our experience or thought, we may readily assume that uniformity to continue onward until and unless we find some evidence or reason that sets a limit to it. Why? Because in such case the assumption of uniformity already has a basis, whereas the contrary assumption of difference has not or not yet been found to have any. The generalization has some justification; whereas the particularization has none at all, it is an arbitrary assertion.
It cannot be argued that we may equally assume the contrary assumption (i.e. the proposed particularization) on the basis that in past events of induction other contrary assumptions have turned out to be true (i.e. for which experiences or reasons have indeed been adduced) – for the simple reason that such a generalization from diverse past inductions is formally excluded by the fact that we know of many cases [[of inferred generalisations; try: "we can make mistakes in inductive generalisation . . . "] that have not been found worthy of particularization to date . . . .
If we follow such sober inductive logic, devoid of irrational acts, we can be confident to have the best available conclusions in the present context of knowledge. We generalize when the facts allow it, and particularize when the facts necessitate it. We do not particularize out of context, or generalize against the evidence or when this would give rise to contradictions . . .[[Logical and Spiritual Reflections, BK I Hume's Problems with Induction, Ch 2 The principle of induction.]
In all of this inductive analysis, we are to be responsible and humble, as Newton also pointed out in his 1704 Opticks, Query 31, where he laid out the generic scientific method much as we are now taught it in school:
[[Scientific] Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths . . . [[.]
And although the arguing from Experiments and Observations by Induction be no Demonstration of general Conclusions; yet it is the best way of arguing which the Nature of Things admits of, and may be looked upon as so much the stronger, by how much the Induction is more general. And if no Exception occur from Phaenomena, the Conclusion may be pronounced generally.
But if at any time afterwards any Exception shall occur from Experiments, it may then begin to be pronounced with such Exceptions as occur.
By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. [[paragraph breaks added.]
These ideas, broadly speaking, have so triumphed that everyone gives them at least formal acknowledgement. By that light, we may summarise "science at its best" as:
(a) The unfettered — but ethically and intellectually responsible — observational evidence-led pursuit of the truth about our world (i.e. an accurate and reliable description and explanation of it), that is
(b) based on observation, experiment, measurement, and inductive inference of laws and explanatory models or theories, where also it
(c) progresses in light of the mutually respectful discussion among the informed, that works to build (and sometimes to reform or replace) a reasonable consensus, i.e. the paradigm.
Such, then, is the "ideal." (Cf. also Lakatos on science vs. pseudoscience.)
But in our day, under the claim that this is a mere methodological constraint, science has been improperly subjected to the rule that its explanations (especially on origins) must be naturalistic, i.e. must in the end trace to blind forces of chance and necessity acting on matter and energy across space and time. That is, it is often imposed, assumed or implied that intelligence must be derivative of such material phenomena.
(With this, Newton -- as his General Scholium to Principia and his Query 31 to Opticks indicate -- would sharply beg to differ. And that implied, or even openly asserted, "must" raises the ironic question as to whether such "Lewontinian" evolutionary materialist scientism, even while dressed in a lab coat and proudly flying the banners of science and progress, is drifting into being an ideological imposition on science. This issue will be pivotal for our exploration of origins science.)
Philip Johnson therefore summarises and critically responds to the underlying a priori materialistic impositions, attitudes and views here, in light of many worldview-level, logical and epistemological concerns. (NB: Epistemology is the philosophical study of how we know, how confidently, and how reliably.)
It is thus worth pausing to watch or listen, as a first step for this course; not least, to help clear the air for fresh thinking:
But, all of this imposition of methodological naturalism also embeds a crucial scientific mistake, and that is a pivotal focus for this critical survey course.
Such can be seen intuitively, as the Roman thinker and statesman Cicero did c. 50 BC when he wrote as follows -- anticipating our modern sense of the extreme implausibility of complex, specific, functional information coming about through blind chance and mechanical forces such as gravity:
. . . Is it possible for any man to behold these things, and yet imagine that certain solid and individual bodies move by their natural force and gravitation, and that a world so beautifully adorned was made by their fortuitous concourse? He who believes this may as well believe that if a great quantity of the one-and-twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form the Annals of Ennius. I doubt whether fortune could make a single verse of them. How, therefore, can these people assert that the world was made by the fortuitous concourse of atoms, which have no color, no quality—which the Greeks call [poiotes], no sense? [Cicero, THE NATURE OF THE GODS BK II Ch XXXVII, C1 BC, as trans Yonge (Harper & Bros., 1877), pp. 289 - 90.]
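Cicero's letters-on-the-ground image translates directly into a modern probability estimate. A hedged toy calculation follows: the 21-symbol alphabet echoes Cicero's "one-and-twenty letters," but the 35-letter verse length is an illustrative assumption of ours, not a figure from the text:

```python
from fractions import Fraction

def chance_of_exact_text(length: int, alphabet_size: int = 21) -> Fraction:
    """Probability that independent uniform draws from the alphabet
    spell out one specific text of the given length."""
    return Fraction(1, alphabet_size ** length)

# A single Latin hexameter verse runs on the order of 35 letters
# (an illustrative figure), so one verse of Ennius by pure chance:
p = chance_of_exact_text(35)
digits = len(str(p.denominator))
print(f"~1 in 21^35, i.e. roughly 1 in 10^{digits - 1}")
```

Exact rational arithmetic (`Fraction`) is used so the tiny probability is not rounded to zero; the qualitative conclusion -- "I doubt whether fortune could make a single verse of them" -- is what the arithmetic quantifies.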
Moreover, as Cicero intuitively understood as a matter of common sense, it is a well-known, easily shown, reliably observable fact that (d) both natural and artificial causes leave observable and distinct empirical traces that we may therefore study and distinguish using scientific methods:
(a) NATURE: The Old Man of the Mountain, New Hampshire, USA; before its collapse in 2003. (Source: Wikipedia)
(b) ART: The profile of George Washington, Mt. Rushmore, South Dakota, USA; while under construction. (Source: Wikipedia)
Fig. I.1 (i) a: A known case of Nature vs. Art: chance circumstances and blind mechanical forces, vs. purpose. (NB: the images also show how art may imitate but be distinct from nature.)
Fig. I.1 (i) b: Dr Don Johnson et al. present a video summary of how we routinely tell nature from art, based on the rule of functionally specific, complex information. (Cf. Dr Robert Marks here on evolutionary informatics.)
Fig. I.1 (ii): Stonehenge (art -- a complex and specific circular arrangement of transported large stones aligned with the sunrise at Summer Solstice) vs Giant's Causeway (nature -- column-joined basalt rock from a lava flow, complex but unspecified).
Similarly, we may read concerning Stonehenge, from Caroline Alexander in the June 2008 National Geographic:
Blurring past on the A 303 thoroughfare that cuts heedlessly almost across the monument's very entrance, Stonehenge appears as a cluster of insignificant protrusions on the big, otherwise featureless plain; and yet, even from this profane and glancing vantage, the great-shouldered silhouette is so unmistakably prehistoric that the effect is momentarily of a time-warp cracking onto a lost world . . . its massive lintels are bound to their uprights by mortise- and- tenon joints taken straight from carpentry, an eloquent indication of just how radically new this hybrid monument must have been . . . The people who built Stonehenge had discovered something hitherto unknown, hit upon some truth, turned a corner -- there is no doubt that the carefully placed stones are fraught with meaning.
But what in fact do they mean? Despite countless theories offered over centuries, no one knows. Stonehenge is the most famous relic of prehistory in Europe and one of the best known, most contemplated monuments in the world -- and we have no clear idea what the people who built it used it for. ["If the Stones Could Speak: Searching for the Meaning of Stonehenge," National Geographic, June 2008, p. 36.]
That is, even without knowing the builders' purpose, we confidently class Stonehenge as art rather than nature; for a credible sign of design marks an entity that:
(a) will not credibly be the product of blind chance and mechanical forces acting in its context,
(b) will be specifically functional [[it has to do something in particular that depends on having a correct configuration, e.g. a key in a lock or letters in a word], and
(c) will be sufficiently complex that the specific, functional configuration is not credibly the result of happenstance.
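The three criteria above can be caricatured as a toy decision procedure. The sketch below is only an illustration of the logic, not the design theorists' actual apparatus: the 40-bit threshold, the uniform-chance model, and the target list are all assumptions invented for the example.

```python
import math

def looks_designed(text: str, targets: set[str],
                   alphabet_size: int = 26, threshold_bits: float = 40.0) -> bool:
    """Toy 'design sign' check: flag text only if it is (b) specified --
    it matches an independently given functional target -- AND (c) complex --
    a uniform random draw of its length would hit a target too rarely.
    Criterion (a), no known chance/mechanical cause, is modelled crudely
    by the uniform-chance null hypothesis."""
    if text not in targets:                      # (b) not specified: no inference
        return False
    same_length = sum(1 for t in targets if len(t) == len(text))
    p_chance = same_length / alphabet_size ** len(text)
    information_bits = -math.log2(p_chance)      # improbability, in bits
    return information_bits > threshold_bits     # (c) complex enough to flag

targets = {"tobeornottobe"}                      # a hypothetical functional target
print(looks_designed("tobeornottobe", targets))  # specified and 13 letters: True
print(looks_designed("qzxvkwjmpltrh", targets))  # gibberish, not specified: False
print(looks_designed("cat", {"cat"}))            # specified but too simple: False
```

Note how the short word "cat" is rejected even though it is specified: at about 14 bits of improbability it falls below the (arbitrary) complexity threshold, mirroring criterion (c).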
The mysterious Voynich Manuscript further underscores this ability to detect design even in absence of knowledge of the particular purpose or meaning:
Fig. I.1 (iii): Page 64 of the mysterious Voynich Manuscript, showing unknown glyphs of unknown meaning (if any) in a string data structure that has statistical patterns reminiscent of natural languages and "word" repetition patterns that may reflect certain East Asian languages. The plant images seem to be by and large composite, but are in effect two-dimensional visual representations whose organisation reflects patterns of plant life.
The case of the mysterious manuscript allows us to contrast it with the overwhelmingly likely result of random spatters of ink and paint on similar pages: a mess, not a complex, specific and evidently purposeful organised arrangement of parts forming an evidently functional whole. This instantly allows us to see that functionally specific, purposeful, complex organisation of parts into a whole -- an arrangement that would not at all be likely to arise from chance distributions of the same or similar components -- is a strong empirical indicator of intelligent configuration. Even in cases like this, where we do not know who, or why, or how, or when, or even what the meaning or precise function of the complex, apparently purposeful arrangement is.
A commonly heard counter-point is that in the natural world, we have populations of organisms that through chance variation and differential reproductive success, can adapt and innovate components, step by step across time, creating the appearance of design.
However, this begs a big question: the very system by which self-replication of cells (and thus reproduction of life forms) works is itself based on a precise, highly specific and complex, detailed functional arrangement of parts. So, how did this come about?
That is why, in the often overlooked Ch II of his Natural Theology, Paley raised the thought exercise of considering the implications of a time-keeping, self-replicating watch:
In short, especially because we now know that the way living cells replicate themselves and carry out their functions is crucially dependent on stored information and an intricate arrangement of molecular nano-machines -- machines that read, maintain, transcribe, and implement instructions that are digitally coded in data strings that use the four-state DNA monomer possibilities, G/C/A/T (with G/C/A/U for the transcribed RNA code that makes proteins) -- we have many additional reasons to see that self-replication (and, onwards, reproduction) is itself a case of the apparently purposefully and specifically organised complex functional organisation and associated information [[FSCO/I] that in our experience strongly points to design.
It is worth pausing to excerpt the above linked Wikipedia article on information, as a case of testimony against known ideological interest; so that we may be clear about an absolutely central concept for our discussions and reflections in this course:
Information in its most restricted technical sense is an ordered sequence of symbols that can be interpreted as a message. Information can be recorded as signs, or transmitted as signals. Information is any kind of event that affects the state of a dynamic system. Conceptually, information is the message (utterance or expression) being conveyed. This concept has numerous other meanings in different contexts. Moreover, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, representation, and especially entropy . . . .
Information is any type of pattern that influences the formation or transformation of other patterns. In this sense, there is no need for a conscious mind to perceive, much less appreciate, the pattern. Consider, for example, DNA. The sequence of nucleotides is a pattern that influences the formation and development of an organism without any need for a conscious mind.
Systems theory at times seems to refer to information in this sense, assuming information does not necessarily involve any conscious mind, and patterns circulating (due to feedback) in the system can be called information. In other words, it can be said that information in this sense is something potentially perceived as representation, though not created or presented for that purpose . . .
With these ideas in mind, we can now see that when Paley writes of an imaginary watch that:
in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself -- the thing is conceivable; that it contained within it a mechanism, a system of parts -- a mold, for instance, or a complex adjustment of lathes, baffles, and other tools -- evidently and separately calculated for this purpose . . .
. . . he is speaking, as a thought exercise, of an information-rich arrangement of components that in effect carries out a step by step procedure that allows the "parent" watch to replicate itself, so creating a "child" watch. He probably had in mind the sort of automata that were then celebrated, which depend on cam bars to control step by step -- algorithmic -- procedures; and/or possibly punched paper tape or card machines like the Jacquard loom then used in the cloth-making industry. Such devices would process stored information (cams, in analogue form; cards or tape, in digital form) and would control effector machines that could in principle carry out the self-replication he envisioned. He then highlighted that the addition of such a capacity would point to further evidence of design.
So also, it is not good enough to brush Paley's remarks aside because the above quote is found in a work of natural theology; for, even theologians can sometimes think logically and correctly on evidence, or indeed, can carry out properly scientific analysis -- which could even happen in a book on "natural theology." (NB: Newton's Principia, which presented his laws of motion and gravitation, contains a "General Scholium," which is largely on natural theology and related philosophy.)
Nor, will it do to suggest that an intelligent cause for functionally specific complex organisation is not a "scientific" explanation; not if science is concerned to discover the truth about our world and how things happen in it. And, if one suggests that science "must" only explain by the "natural," not the "supernatural," let us observe carefully: this is a rhetorically loaded, distractive strawman that objectors have substituted for what is actually being discussed -- distinguishing (a) natural causes acting by chance and physical necessity and (b) intelligent causes acting by art through deliberate and purposeful choice, on (c) observationally well-warranted, empirically reliable signs.
In short, it is plainly not in order to beg the questions at stake by attaching dismissive, distractive, rhetorically loaded labels; an unfortunately all too common resort of advocates of what we may descriptively label, Lewontinian evolutionary materialism (today's form of what was already an ancient view in Plato's day).
And we must note, too, our commonplace experience of the source of FSCO/I -- e.g. this very paragraph. The widely observed pattern is that, once we can see the cause directly, organised, functionally specific complex organisation and associated information [FSCO/I] generally traces to intelligent and purposeful action. The whole Internet, the groaning shelves in the major libraries of our world, and the technological world all around us stand in joint, eloquent testimony to that.
Thus, in a reasonable context, it is at least seriously arguable that functionally specified highly complex organisation is a recognisable, observable and empirically reliable sign of intentionally directed configuration.
That is, of design.
This, we may elaborate slightly, on a remark by leading ID researcher, William Dembski:
. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)
Given some regrettably contentious debates, however, we now need to pause, elaborate and explain that common-sense inference with some primary documentation and step-by-step points that among other things respond to common rebuttal or dismissal attempts:
B: A key concept: complex (and often functionally) specified information
Pardon a bit of a stiff read so early in our discussion, but it is necessary to now "quote from the horse's mouth," not least so that some regrettably common and misleading strawman caricatures will be exposed for what they are; rhetorical tricks, not serious responses on the merits of fact and solid reasoning. [[NB: cf. here for a corrective list of such cases, here for a FAQ, and here, too, for the Creationists' similar complaint.]
As soon as the cite -- I almost wrote "extraction" -- is over, we will then explain it point by point, so please take heart.
Here, then, is Dr William Dembski on the Complex Specified Information [CSI] concept pioneered by leading Origin of Life researcher Leslie Orgel and others from the 1970s [cf. here and below], and which Dembski has built upon:
p. 148:“The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.
I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites:
Wouters, p. 148: "globally in terms of the viability of whole organisms,"
Behe, p. 148: "minimal function of biochemical systems,"
Dawkins, pp. 148 - 9: "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction."
On p. 149, he roughly cites Orgel's famous remark from 1973, which, exactly cited, reads:
In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . .
And, p. 149, he highlights Paul Davies in The Fifth Miracle: "Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity."] . . .”
This is a bit complex, but it is "from the horse's mouth" from a foundational book and anticipates many misconceptions and opposed talking points that have been used to try to caricature, drown out and dismiss its message. However, since -- as already noted -- it may be a bit overwhelming for the average reader, it needs some boiling down. (Sorry, you will likely need to go through this slowly a few times, it is vital.)
p. 144: [[Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[effectively the target hot zone in the field of possibilities] subsumes E [[effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
Before we even take up details, we need to pause to underscore the idea that when a set of matching components must be arranged so they can work together to carry out a task or function, this strongly constrains both the choice of individual parts and how they must be arranged to fit together.
A jigsaw puzzle is a good case in point.
So is a car engine -- as anyone who has had to hunt down a specific, hard to find part will know.
So are the statements in a computer program -- there was once a NASA rocket that veered off course on launch and had to be destroyed by triggering the self-destruct because of -- I think it was -- a misplaced comma.
The letters and words in this paragraph are like that too.
That's why (at first, simple level) we can usually quite easily tell the difference between:
A: An orderly, periodic, meaninglessly repetitive sequence: FFFFFFFFFF . . .
B: Aperiodic, evidently random, equally meaningless text: y8ivgdfdihgdftrs . . .
C: Aperiodic, but recognisably meaningfully organised sequences of characters: such as this sequence of letters . . .
In short, to be meaningful or functional, a correct set of core components has to be matched and properly arranged; and while there may be some room for variation, it is not true that just any part, popped in any old way, will fit.
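As a quick illustrative check -- not part of the original argument, just a sketch using a general-purpose compressor as a rough proxy for orderliness, with sample strings of my own devising -- the three kinds of string behave very differently under compression: the periodic string collapses to almost nothing, while the random and meaningful strings resist compression far more:

```python
import zlib

# Three strings of the kinds contrasted above (illustrative examples):
orderly = "FFFFFFFF" * 8                                                     # A: periodic repetition
random_ = "y8ivgdfdihgdftrsq2kx0vnb71mwzjc4e5ohs9plt86adurfg3yqhnkio2bvcxe"  # B: gibberish
english = "to be meaningful or functional a correct set of parts must match" # C: meaningful text

for label, s in [("orderly", orderly), ("random", random_), ("english", english)]:
    packed = len(zlib.compress(s.encode(), 9))
    print(f"{label:8s} raw = {len(s)} bytes, compressed = {packed} bytes")
```

The orderly string compresses to a handful of bytes; the other two do not. (Compressibility alone cannot, of course, distinguish the random from the meaningful string; that is exactly why the specification side of the analysis matters.)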
The unidirectional, flow-through lungs we commonly see in birds provide a biological example of this effect (and of the implied challenges to incremental evolution based on small random changes that must provide functional advantages in ecological niches in order to be fixed in a viable population). In these animals, two sets of inflatable sacs are used to pump and pull air through the lungs, which is different from the more familiar bellows type lung such as we have.
As Michael Denton observed in his epochal 1985 Evolution, a Theory in Crisis:
[[T]he structure of the lung in birds and the overall functioning of the respiratory system is quite unique. No lung in any other vertebrate species is known which in any way approaches the avian system. Moreover, it is identical in all essential details in birds as diverse as humming birds, ostriches and hawks . . . . Just how such an utterly different respiratory system could have evolved gradually from the standard vertebrate design is fantastically difficult to envisage, especially bearing in mind that the maintenance of respiratory function is absolutely vital to the life of an organism to the extent that the slightest malfunction leads to death within minutes. Just as the feather cannot function as an organ of flight until the hooks and barbules are coadapted to fit together perfectly, so the avian lung cannot function as an organ of respiration until the parabronchi system which permeates it and the air sac system which guarantees the parabronchi their air supply are both highly developed and able to function together in a perfectly integrated manner . . . [[Evolution, a Theory in Crisis, 1985, pp. 210 - 12.]

In short, we see here a case of an island of irreducibly complex function, on an organ that is literally vital, and that irreducible complexity would arguably block incremental evolution: intermediates between a bellows lung and a bird's flow-through lung would most likely be lethally defective -- and would at the very least be arguably disadvantageous -- and so would be selected against by the very same natural selection that is so often appealed to. For, without the right components -- properly arranged and integrated with the nervous control system, the blood circulatory system and the muscular system -- the bird would most likely die within minutes. In short, the way functionally specific complex organisation leads to islands of function in wider configuration spaces is highly relevant to major biological systems, not just technological ones.
As a direct result, in our general experience and observation, if the functional result is complex and specific enough, the most likely cause is intelligent choice, or design.
This has a consequence. For, this need for choosing and correctly arranging then hooking up correct, matching parts in a specific pattern implicitly rules out the vast majority of possibilities and leads to the concept of islands of function in a vast sea of possible but meaningless and/or non-functional configurations. And, arguably to design -- the commonly observed cause of FSCO/I -- as the best explanation for such cases.
So also, if you would dispute the point that such islands of function dependent on specific clusters of combinations of particular parts exist in seas of non-function, as a typical and even reliably observable pattern, it is necessary to support that claim by observed example. That is, show a case where by blind chance and equally blind mechanical necessity, complex functional organisation emerges from non-functional arrangements, and grows in complexity and degree of successful operation from one step to the next; with particular reference to the rise of new major body plans in life forms. Variations and adaptations within existing body plans do not answer to this. That is, the challenge is to get to shorelines of islands of function in seas of non-function, or else to show that there is a vast continent of function that can be incrementally accessed through a branching tree of life.
On fair comment, despite the various lines of evidence and the many headlined icons of evolution that are put forth to make Darwinian evolutionary mechanisms seem plausible, this challenge has not been met after over 150 years of trying. Consequently, it is equally fair comment to observe that such functionally specific, complex organisation and associated information have only one empirically observed, adequate cause: purposeful, intelligently directed configuration, i.e. design. Therefore, design theorists argue that the world of life points on such empirically reliable signs to design as a key causal factor in the origins of life as we see and experience it. But, in turn, that has to be shown, not simply asserted.
So, in more detailed steps of thought:
i: When we see as an actual event a piece of complex, specified information, E -- e.g. the string of letters forming the text of the above paragraphs (NB: this is a case of definition by key examples and close family resemblance) -- it is usually one instance of a set of other similar possible cases in T, a target zone or more generally a specific (identifiable) zone of interest. Where in turn, we may see that the component parts, p1, p2, . . . pn, may be arranged in a much wider set of possible states or ways, W.
(This gives more precise definition to the idea of an island of meaningful, functional configurations of a string of symbols, sitting in the midst of a vast ocean of possible arrangements of values for the symbols in the string p1-p2-p3- . . . -pn, by far and away mostly gibberish, like: ufiwdhgj. Islands of function in oceans of non-function.)
ii: Next, a bit of background math: if we multiply 2 by itself 500 times, we see that 2^500 = 3.27 * 10^150 in scientific notation; where that is a way to write a large number that is about 3 followed by 150 zeros. (Also, the caret symbol is used to represent raising the 10 to a power, here 150. This is fairly common in computing circles. Sometimes, we might see this as 3.27 e 150 or the like.)
iii: So, if the specifying description of T is such that information cases like E that fit in target zone T within the much wider set of possibilities W, require at least 500 bits of functional, meaningful information to fulfill the specification, the odds of the particular, observed case E being picked up by pure chance from the set of all strings of bits of the same length, W, are about 1 in 10^150. Where,
for 10^80 atoms in the observed cosmos, changing quantum state every Planck time [~ 5* 10^-44 s, rounded down to 10^-45 s], for its credible thermodynamic lifespan [10^25 s, or about 50 mn times the 13.7 BY usually held to have elapsed since the big bang], the number of possible quantum states is ~ 10^150. (NB: there are about 10^20, a hundred billion billion, Planck times in the fastest known nuclear interactions. The fastest chemical interactions would take up about 10^30 Planck times.)
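The round figures just cited can be checked directly. The atom count, tick rate and lifespan below are the rough order-of-magnitude assumptions quoted in the text, not precise measurements:

```python
from math import log10

# 500 bits of configuration space:
W = 2 ** 500
print(f"2^500 = {W / 10**150:.2f} * 10^150")   # ~3.27 * 10^150

# Rough upper bound on quantum-state events in the observed cosmos,
# using the text's round figures (assumptions, not measurements):
atoms       = 10 ** 80   # atoms in the observed cosmos
ticks_per_s = 10 ** 45   # Planck-time states per second (rounded)
lifespan_s  = 10 ** 25   # credible thermodynamic lifespan, seconds
events = atoms * ticks_per_s * lifespan_s
print(f"max events ~ 10^{round(log10(events))}")
```

That is, the whole observed cosmos, working flat out at the Planck-time rate for its thermodynamic lifespan, only musters about as many states as a single 500-bit string has configurations.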
iv: Now, the description of T is not the same as E, i.e. we can say what sort of thing E is, without directly quoting the bit-string that specifies E.
v: The odds of the specifically describable string E being picked by pure chance are about 1 in 10^150, as noted, but of course, I (an intelligent designer) used knowledge, skill and Open Office Writer's properties feature to create it in about three minutes.[[To clarify by example, think of E as being a case of -- a specification, T: "a string of 72 ASCII characters in English that fits this context of discussion" -- say: "this is a text string of seventy-two ascii characters in english languag" -- or the like; and yes, I had to truncate to make it fit. E is one instance of such 72-character strings, the Target Zone T. Many such strings are possible, but the constraints that the strings be (a) in English and (b) responsive to the context of this discussion sharply constrict the list of acceptable strings. Certainly, by comparison to the set of over 10^150 possible strings of 72 ASCII characters, W; by far and away most of which will be gibberish. Practical tests support the common-sense conclusion that if we see such a string, E, it is best explained as being the product of an intelligent writer. Spaces of about 10^50 possibilities [[24 ASCII characters] have been successfully searched by random walks and trial and error, but spaces of 10^150 or more are far harder than that. For comparison, the DNA in the "simplest" observed cases of metabolising and self-replicating cells is from more than 100,000 to 1 million bases, or more than 9.98 * 10^60,205 possibilities. To get to complex new body plans for multicellular life forms requires explaining the further origin of more than 10 million functional DNA bases (more than 8.19 * 10^6,020,599 possibilities), dozens of times over. 
Scientists have not observed -- apart from intelligent direction -- a smooth, step by step trial and error progress from simpler but functional systems to the observed cases; and there is no credible step by step, progressive, chance-driven trial and error path from, say, "See Spot run" to its equivalent of a book; or from, say, a Hello World program to a word processor or a system controller. The best, abundantly empirically supported explanation for functionally specific, complex information and organisation is design. (NB: For more on inference to the best current explanation and its role in scientific knowledge, cf. here and here. Note also concerns and limitations as discussed here.)]
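For those who wish to check the orders of magnitude just cited, here is a quick sketch; the 72-character ASCII case and the DNA base counts are the figures used in the text:

```python
from math import log10

# Strings of 72 ASCII characters (128 possible codes per position):
ascii_72 = 72 * log10(128)             # exponent of 10
print(f"128^72 ~ 10^{ascii_72:.0f}")   # over 10^150 possibilities

# DNA: 4 bases per position; configuration-space sizes for the
# base counts cited in the text, computed via logarithms:
for bases in (100_000, 10_000_000):
    e = bases * log10(4)
    mantissa = 10 ** (e - int(e))
    print(f"4^{bases} ~ {mantissa:.2f} * 10^{int(e)}")
```

The loop reproduces the 9.98 * 10^60,205 and 8.19 * 10^6,020,599 figures quoted above.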
vi: At this point, it is common for some to suggest that Dawkins' "Mt Improbable" can be climbed by the easy back-slope, step by step to the peak, as chance variations that give an increase in performance are rewarded with advantages that allow them to become the next stage of progress. And, of course, the "methinks it is like a weasel" example shows how a string of 28 random characters can, after maybe 40 - 60 generations, become the target phrase. For instance, in his best-selling The Blind Watchmaker (1986), pp. 48 ff., Dawkins published such computer simulation "runs."
vii: What is not so commonplace is an acknowledgement of the implications of the stunning admission Dawkins had to make even as he presented the Weasel phrase "example" of the power of so-called "cumulative selection," even when the caveats are cited:
I don't know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. [[NB: cf. Wikipedia on the Infinite Monkeys theorem here, to see how unfortunately misleading this example is.] Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence 'Methinks it is like a weasel', and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence? . . . .
It . . . begins by choosing a random sequence of 28 letters ... it duplicates it repeatedly, but with a certain chance of random error – 'mutation' – in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL . . . . What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed . . . .
Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. [[TBW, Ch 3, as cited by Wikipedia, various emphases, highlights and colours added.]
viii: In short, here cumulative selection "works" by rewarding non-functional phrases that happen to be closer to the already known target. This is the very opposite of natural selection on already present difference in function. Dawkins' weasel is not a good model of what evolution is supposed to do.
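To make the point concrete, here is a minimal sketch of a Weasel-style "cumulative selection" run. It is a reconstruction in the spirit of Dawkins' description, not his original code; the offspring count, mutation rate and the "elitism" of keeping the parent in the brood are illustrative assumptions. Note how the score function must consult the already known target at every step:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # 26 capitals plus space, as in the text

def score(phrase):
    # The oracle at issue: mere nearness to the *already known* target phrase.
    return sum(a == b for a, b in zip(phrase, TARGET))

def weasel(offspring=100, mutation_rate=0.05, seed=1):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        # Copy the parent with occasional random errors ('mutations');
        # keeping the parent itself ensures the score never regresses.
        brood = [parent] + [
            "".join(random.choice(ALPHABET) if random.random() < mutation_rate else c
                    for c in parent)
            for _ in range(offspring)
        ]
        parent = max(brood, key=score)   # reward nearness to target, not function
    return generation

print("reached target in", weasel(), "generations")
```

The run converges quickly precisely because every generation is graded against the distant target -- the very feature Dawkins concedes real evolution cannot have.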
ix: At most, it illustrates that once we are already on an island of function, chance variation and differences in reproductive success may lead to specialisation to fit particular niches. Which is accepted by all, including modern Young Earth Creationists. And, more sophisticated genetic algorithms have very similar failings. For, (a) they implicitly start within an island of function, that (b) has a predominantly smoothly rising slope that gently leads to peaks of performance so that "hill-climbing" on "warmer/colder" signals will usually get you pointed the right way.
x: In short, GAs not only start on the shores of an island of function, but the adaptation targets are also implicitly pre-loaded into the program [[even in cases where they are allowed to wiggle about a bit], as are the "hill-climbing algorithm" means to climb up to them. This point has been highlighted by famed mathematician Gregory Chaitin, in a recent paper, Life as Evolving Software (Sept. 7, 2011):
. . . we present an information-theoretic analysis of Darwin’s theory of evolution, modeled as a hill-climbing algorithm on a fitness landscape. Our space of possible organisms consists of computer programs, which are subjected to random mutations. We study the random walk of increasing fitness made by a single mutating organism. [[p. 1]
xi: Plainly, this more sophisticated approach is a model of optimising adaptation by generic hill-climbing, within an island of function; i.e. this is at best a model of micro-evolution within a body plan, not origin of such complex, integrated body plans.
xii: So, while engineers -- classic intelligent designers! -- may well find such algorithms quite useful in some cases of optimisation and system design, they fail the red-herring- strawman test when they are presented as models of microbe to man evolution.
xiii: For, they do not answer to the real challenge posed by the design theorists: how to get to an island of complex function -- i.e. to a new body plan that for first life would require something like 100,000 base pairs of DNA and associated molecular machinery, and for other body plans from trees to bees, bats, birds, snakes, worms and us, at least 10 million bases, dozens of times over -- without intelligent direction.
xiv: Instead, we can present a key fact, one that Weasel actually inadvertently demonstrates. That is: in EVERY instance of such a case of CSI, E from such a zone of interest or island of function, T, where we directly know the cause by experience or observation, it originates by similar intelligent design. And, given the long odds involved to get such an E by pure chance -- you cannot have a hill-climbing success amplifier until you first have functional success! -- that is no surprise at all.
(The Internet and the major libraries of the world, together, have billions of successful tests of this claim. On years of experience with suggested counter examples, they are consistently dubious or outright errors, as a rule being illustrations of the very point they were meant to oppose. E.g. the drawings of canals on Mars from 100 years ago, if they were of real canals on Mars would be evidence of a Martian civilisation. Alas, they are inaccurate, and instead are drawings that were intelligently designed to show what the astronomers of that time thought they saw on Mars.)
xv: Why should this be so? Let us consider: in the 10^17 or so seconds on its conventional timeline, the 10^57 or so atoms of our solar system (our practical "world") will have gone through maybe as many as some 10^117 Planck-time quantum states. (We note, it takes about 10^30 such for the fastest chemical reactions, and many more for the organic chemistry type reactions relevant to so much of cell based life.) But 10^150 possibilities is 10^33 times as much as that, so our solar system could not search out more than a negligible fraction of 10^150 possibilities. Where, we can see that a string of 500 bits has 2^500 = 3.27*10^150 possible configurations. For just 500 bits [[~ 72 ASCII characters], on the gamut of our solar system, there is just too much haystack to reasonably expect to find the proverbial lost needle.
xvi: To understand this better, let us work back from how it takes ~ 10^30 Planck time states for the fastest chemical reactions, and use this as a yardstick, i.e. in 10^17 s, our solar system's 10^57 atoms would undergo ~ 10^87 "chemical time" states, about as fast as anything involving atoms could happen. That is 1 in 10^63 of 10^150. So, let's do an illustrative haystack calculation:
Let us take a straw as weighing about a gram and having comparable density to water, so that a haystack weighing 10^63 g [= 10^57 tonnes] would take up as many cubic metres. The stack, assuming a cubical shape, would be 10^19 m across. Now, 1 light year = 9.46 * 10^15 m, or about 1/1,000 of that distance across. If we were to superpose such a notional 1,000 light years on the side haystack on the zone of space centred on the sun, and leave in all stars, planets, comets, rocks, etc, and take a random sample equal in size to one straw, by absolutely overwhelming odds, we would get straw, not star or planet etc. That is, such a sample would be overwhelmingly likely to reflect the bulk of the distribution, not special, isolated zones in it.
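The straw-sample arithmetic above can be verified in a few lines; the straw mass, water-like density and light year figure are the round values used in the text:

```python
from math import log10

# Illustrative haystack, using the text's round figures:
stack_g    = 10.0 ** 63          # haystack mass in grams (one straw ~ 1 g)
stack_m3   = stack_g * 1e-6      # water-like density, ~1 g/cm^3 -> 10^57 cubic metres
side_m     = stack_m3 ** (1 / 3) # side of a cubical stack
light_year = 9.46e15             # metres per light year
print(f"cube side ~ 10^{log10(side_m):.0f} m ~ {side_m / light_year:,.0f} light years")
```

That is, the notional haystack is a cube roughly 1,000 light years on a side, dwarfing the solar system it is superposed on; hence a one-straw sample overwhelmingly returns straw.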
xvii: In the case of biology, since 1953 we have known that in the heart of the cell lies the coded information storing molecule DNA. It is worth excerpting the remark made by Sir Francis Crick, one of the discoverers, to his son Michael in a March 19, 1953 letter:
"Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another)." [[Emphases added. From about 1961 on, that code has been identified, and is now routinely used in scientific work.]

xviii: So, whether we are interested in the origin of life as Dr Eigen was, or in the origin/evolution of new complex structures requiring the equivalent of 500 or more bits of information, we need to look at the CSI information generating hurdle.
xix: Later on (2005), Dembski provided a slightly more complex formula, that we can quote and simplify, showing that it boils down to a "bits from a zone of interest [[in a wider field of possibilities] beyond a reasonable threshold of complexity" metric:
χ = – log2[10^120 ·ϕS(T)·P(T|H)].
--> χ is "chi" and ϕ is "phi"

xx: To simplify and build a more "practical" mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally:
Ip = - log p, in bits if the base is 2. That is where the now familiar unit, the bit, comes from. Where we may observe from say -- as just one of many examples of a standard result -- Principles of Comm Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2:
Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [[--> My nb: i.e. the a posteriori probability in my online discussion here is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by
I_k = (def) log_2 1/p_k (13.2-1)
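A tiny worked example of the Shannon/Hartley log measure, showing why the log form makes information from independent events add naturally:

```python
from math import log2

def info_bits(p):
    """I = -log2(p): information, in bits, of an event of probability p."""
    return -log2(p)

# A fair coin toss carries 1 bit; one of 128 equiprobable ASCII codes, 7 bits;
# and, because logs turn products into sums, the joint event carries 1 + 7 bits:
print(info_bits(1 / 2))            # 1.0
print(info_bits(1 / 128))          # 7.0
print(info_bits((1 / 2) * (1 / 128)))  # 8.0
```

This additivity is exactly what lets the Dembski expression below be "boiled down" term by term with log(p*q*r) = log(p) + log(q) + log(r).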
xxi: So, since 10^120 ~ 2^398, we may "boil down" the Dembski metric using some algebra -- i.e. substituting and simplifying the three terms in order -- as log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):
Chi = – log2(2^398 * D2 * p), in bits, and where also D2 = ϕS(T)
Chi = Ip – (398 + K2), where now: log2(D2) = K2

That is, chi is a metric of bits from a zone of interest, beyond a threshold of "sufficient complexity to not plausibly be the result of chance," (398 + K2). So,
(a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)] and

(b) as we can define and introduce a dummy variable for specificity, S, where

(c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi = Ip*S – 500, in bits beyond a "complex enough" threshold
- NB: If S = 0, this locks us at Chi = - 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive.
- E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will -- unsurprisingly -- be positive.
- Following the logic of the per aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0, i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest.
- S goes to 1 when we have objective grounds -- to be explained case by case -- to assign that value.
- That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . . ; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list.
- A string at random is a list with one member, but if we pick it as a password, it is now a zone with one member. (Where also, a lottery, is a sort of inverse password game where we pay for the privilege; and where the complexity has to be carefully managed to make it winnable. )
- An obvious example of such a zone T is code symbol strings of a given length that work in a programme or communicate meaningful statements in a language based on its grammar, vocabulary etc. This paragraph is a case in point, which can be contrasted with typical random strings ( . . . 68gsdesnmyw . . . ) or repetitive ones ( . . . ftftftft . . . ); where we can also see how such a meaningful string can enfold random and repetitive sub-strings.
- Arguably -- and of course this is hotly disputed -- DNA protein and regulatory codes are another. Design theorists argue that the only observed adequate cause for such is a process of intelligently directed configuration, i.e. of design, so we are justified in taking such a case as a reliable sign of such a cause having been at work. (Thus, the sign then counts as evidence pointing to a perhaps otherwise unknown designer having been at work.)
- So also, to overthrow the design inference, a valid counter example would be needed, a case where blind mechanical necessity and/or blind chance produces such functionally specific, complex information. (Points xiv - xvi above outline why that will be hard indeed to come up with. There are literally billions of cases where FSCI is observed to come from design.)
xxii: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was designed. The metric may be directly applied to biological cases:
Using Durston’s Fits values -- functionally specific bits -- from his Table 1, to quantify I, so also accepting functionality on specific sequences as showing specificity giving S = 1, we may apply the simplified Chi_500 metric of bits beyond the threshold:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
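As a check on the arithmetic, the simplified Chi_500 metric is just the Fits value, times the dummy specificity variable S, minus the 500-bit threshold:

```python
def chi_500(fits, s=1):
    """Simplified metric: Chi = Ip*S - 500, in bits beyond the solar-system
    threshold, with Ip measured in Durston's functional bits (Fits) and
    S the 1/0 specificity dummy variable."""
    return fits * s - 500

# The three cited cases (Fits values as quoted above):
for name, aa, fits in [("RecA", 242, 832), ("SecY", 342, 688), ("Corona S2", 445, 1285)]:
    print(f"{name}: {aa} AA, {fits} fits -> Chi_500 = {chi_500(fits)} bits beyond")

# With S = 0 (no objective specification), the metric locks at -500:
print(chi_500(832, s=0))
```

Note how the S = 0 case reproduces the "locked at Chi = -500" behaviour noted in the bullets above.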
xxiii: And, this raises the controversial possibility that biological examples such as DNA -- which in a living cell is much more complex than 500 bits -- may be designed to carry out particular functions in the cell and the wider organism.
xxiv: We may then discuss:
Q1: How then do search algorithms -- such as genetic ones -- so often succeed?
A1: Generally, by intelligently directed injection of active information; that is, information that enables searching guided by an understanding of the search space or the general or specific location of a target. (Also, cf. here. A so-called fitness function which more or less smoothly and reliably points uphill to superior performance, mapped unto a configuration space, implies just such guiding information and allows warmer/colder signals to guide hill-climbing. This, or the equivalent, appears in many guises in the field of so-called evolutionary computing. As a rule of thumb, if you see a "blind" search that seemingly delivers an informational free lunch, look for an inadvertent or overlooked injection of active information. [[Cf. here, here & here.])

In a simple example, the children's party game "treasure hunt" would be next to impossible without guidance: warmer/colder . . . hot . . . red hot. (Something that gives some sort of warmer/colder message on receiving a query is an oracle.) The effect of such sets of successive warmer/colder oracular messages, or similar devices, is to dramatically reduce the scope of search in a space of possibilities. Intelligently guided, constrained search, in short, can be quite effective. But this is designed, insight guided search, not blind search.

From such, we can actually quantify the amount of active information injected, by comparing the reduction in degree of difficulty relative to a truly blind random search as a yardstick. And, we will see the remaining importance of the universal or solar system level probability or plausibility bound [[cf. Dembski and Abel, also discussion at ENV] -- which in this course will for practical purposes be 500 - 1,000 bits of information, as we saw above -- i.e. these give us thresholds where the search is hard enough that design is a more reasonable approach or explanation. Of course, we need not do so explicitly; we may just look at the amount of active information involved.
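A toy "treasure hunt" along these lines: a higher/lower oracle turns a blind search over about a million possibilities (expected ~2^19 blind tries) into roughly twenty guided queries. The setup and numbers are illustrative, not drawn from Dembski and Marks' papers:

```python
from math import log2

def oracle_search(target, n):
    """Binary search over 0..n-1, guided by a warmer/colder
    ('higher/lower') oracle; returns the number of queries used."""
    lo, hi, queries = 0, n - 1, 0
    while True:
        guess = (lo + hi) // 2
        queries += 1
        if guess == target:
            return queries
        if guess < target:      # oracle answers: "warmer is higher"
            lo = guess + 1
        else:                   # oracle answers: "warmer is lower"
            hi = guess - 1

N = 2 ** 20                     # about a million possibilities
q = oracle_search(target=777_777, n=N)
print(f"endogenous information: {log2(N):.0f} bits; guided search used {q} queries")
```

The gap between the 20-bit endogenous difficulty and the handful of oracle queries is, in effect, the active information the oracle injects; remove the oracle and the search reverts to blind sampling of the whole space.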
Q2: But, once we have a fitness function, all that is needed is to start anywhere and then proceed up the slope of the hill to a peak, no need to consider all of those outlying possibilities all over the place. So, you are making a mountain out of a mole-hill: why all the fuss and feathers over "active information," "oracles" and "guided, constrained search"?
A2: Fitness functions, of course, are a means of guided search, by providing an oracle that points -- generally -- uphill. In addition, they are exactly an example of constrained search: there is function present everywhere in the zone of interest, and it follows a generally well-behaved, uphill-pointing pattern. In short, from the start you are constraining the search to an island of function, T, in which neighbouring or nearby locations: Ei, Ej, Ek, etc . . . -- which can be chosen by tossing out a ring of "nearby" random tries -- are apt to go uphill, or to get you to another local slope pointing uphill. Also, if you are on the shoreline of function, tosses that have no function will eliminate themselves by being obviously downhill; which means it is going to be hard to island-hop from one fairly isolated zone of function to the next. In short, a theory that may explain micro-evolutionary change within an island or cluster of nearby islands is not simply to be extrapolated into one that accounts for major differences that have to bridge large gaps in configuration and function. This is not going to be materially different if the islands of function and their slopes and peaks grow or shrink a bit across time, or even move bodily as glorified sand-pile barrier islands are wont to, so long as such drifting of islands of function is gradual. Catastrophic disappearance of such islands, of course, would reflect something like a mass extinction event due to an asteroid impact or the like. Mass extinctions simply do not create new functional body plans; they sweep the life forms exhibiting existing body plans away, wiping the table almost wholly clean, if we are to believe the reports. Where also, the observable islands-of-function effect starts at the level of the many isolated protein families, estimated to be as rare as 1 in 10^64 to 1 in 10^77 or so of the space of amino acid sequences.
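The "islands of function" picture can itself be put in toy-model form. In the sketch below (all parameters are illustrative assumptions), a 30-bit space has an "island" consisting of the roughly 1-in-2^10 strings that share a fixed prefix; off the island the fitness landscape is flat at zero, so a hill-climber gets no warmer/colder cue at all.

```python
import random

rng = random.Random(1)
N = 30
PREFIX = "1011011101"                  # island membership: ~1 in 2^10 of the space
PEAK = PREFIX + "1" * (N - len(PREFIX))

def fitness(s):
    """Zero everywhere off the island: the 'sea' gives no uphill signal."""
    if not s.startswith(PREFIX):
        return 0
    return sum(a == b for a, b in zip(s, PEAK))

def hill_climb(start, steps=3000):
    cur, f = start, fitness(start)
    for _ in range(steps):
        i = rng.randrange(N)
        trial = cur[:i] + ("1" if cur[i] == "0" else "0") + cur[i + 1:]
        ft = fitness(trial)
        if ft > f:                     # accept only uphill moves
            cur, f = trial, ft
    return f

on_island = PREFIX + "0" * (N - len(PREFIX))   # starts on the shoreline
off_island = "0" * N                           # starts in the sea of non-function
peak_f = hill_climb(on_island)    # climbs steadily to the peak
stall_f = hill_climb(off_island)  # stalls: there is nothing to climb
print(peak_f, stall_f)
```

Starting on the island, single-bit variation plus uphill selection reaches the peak quickly; starting in the sea, the same procedure never moves, because no single flip yields any function to select for. That is the within-island vs between-island asymmetry argued above, in miniature.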
As ID researcher Douglas Axe noted in a 2004 technical paper: "one in 10^64 signature-consistent sequences forms a working domain . . . the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences." So, what has to be reckoned with is that, for a situation complex enough to be relevant to FSCO/I [[i.e. 500 - 1,000 or more structured yes/no questions needed to specify configurations, En . . . ], the configuration space of possibilities, W, is as a rule dominated by seas of non-functional gibberish configurations; so that the envisioned easy climb up Mt Improbable is preceded by the harder, prior problem of first finding a shoreline of Island Improbable.
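For orientation, the quoted rarities can be converted into the bit measures this unit uses; this is simple arithmetic on the figures already cited, not new data (bits = log2 of the odds):

```python
import math

def rarity_in_bits(one_in: float) -> float:
    """Express a rarity of 1 in `one_in` as bits: log2(one_in)."""
    return math.log2(one_in)

print(round(rarity_in_bits(1e64)))    # the 1-in-10^64 figure -> about 213 bits
print(round(rarity_in_bits(1e77)))    # the 1-in-10^77 figure -> about 256 bits
# Conversely, the 500-bit bound corresponds to odds of roughly 1 in 3.3 x 10^150:
print(f"1 in {2.0 ** 500:.2e}")
```

On these figures a single domain-sized fold, at ~256 bits, sits below the 500-bit threshold taken alone; the argument in the text concerns the many such folds -- and the 100,000+ base genomes -- that must be found jointly.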
Q3: Nonsense! The Tree of Life diagram we all saw in our Biology classes proves that there is a smooth path from the last universal common ancestor [LUCA] to the different body plans and forms, from microbes to Mozart. Where did you get such nonsense from?
A3: Indeed, the tree of life was the only diagram in Darwin's Origin of Species. However, it should be noted that it was a speculative diagram, not one based on a well-documented, observed pattern of gradual, incremental improvements. Darwin hoped that in future decades investigations of fossils across the world would flesh it out; and that this happened is indeed the impression given in too many Biology textbooks and popular headlines about newly found "missing links." But, in fact, the typical tree of life imagery:
Fig. G.11c, anticipated: A typical, popular level tree of life model/illustration. (Source.)
. . . is too often presented in a misleading way. First, notice the skipping over of the basic problem that without a root, neither trunks nor branches and twigs are possible. And, getting to a first, self-replicating unicellular life form -- the first universal common ancestor, FUCA -- that uses proteins, DNA, etc., through the undirected physics and chemistry of Darwin's warm little electrified pond full of a prebiotic soup or the like, continues to be a major and unsolved problem for evolutionary materialist theorising. Similarly, once we reckon with claims about "convergent evolution" of eyes, flight, whale/bat echolocation "sonar" systems, etc., we begin to see that "everything branches, save when it doesn't." Indeed, we have to reckon with a case where, on examining the genome of a kangaroo (the tammar wallaby), it was discovered that "In fact there are great chunks of the [[human] genome sitting right there in the kangaroo genome." The kangaroos are marsupials, not placental mammals, and the fork between the two lineages is held to be some 150 million years old. So, Carl Wieland of Creation Ministries International was fully within his rights to say: "unlike chimps, kangaroos are not supposed to be our ‘close relatives’ . . . . Evolutionists have long proclaimed that apes and people share a high percentage of DNA. Hence their surprise at these findings that ‘Skippy’ has a genetic makeup similar to ours." Next, as soon as one looks at molecular similarities -- technically, homologies (and yes, this is an argument from similarity, i.e. analogy, in the end) -- instead of those of gross anatomy, we run into many, mutually conflicting "trees." Being allegedly 95 - 98+% chimp in genetics is one thing; being what, ~ 80% kangaroo or ~ 50% banana or the like, is quite another.
That is, we need to look seriously at the obvious alternative from the world of software design: code reuse and adaptation from a software library for the genome. Worse, in fact the consistent record from the field (which is now "almost unmanageably rich," with over 250,000 fossil species, millions of specimens in museums and billions in the known fossil beds) is that we do NOT observe any dominant pattern of origin of body plans by smooth incremental variations of successive fossils. Instead, as Stephen Jay Gould famously observed, there are systematic gaps, right from the major categories on down. Indeed, if one looks carefully at the tree illustration above, one will see where the example life forms are: on twigs at the ends of branches, not the trunk or where the main branches start. No prizes for guessing why. That is why we should carefully note the following remark made in 2007 by W. Ford Doolittle and Eric Bapteste:
Darwin claimed that a unique inclusively hierarchical pattern of relationships between all organisms based on their similarities and differences [the Tree of Life (TOL)] was a fact of nature, for which evolution, and in particular a branching process of descent with modification, was the explanation. However, there is no independent evidence that the natural order is an inclusive hierarchy, and incorporation of prokaryotes into the TOL is especially problematic. The only data sets from which we might construct a universal hierarchy including prokaryotes, the sequences of genes, often disagree and can seldom be proven to agree. Hierarchical structure can always be imposed on or extracted from such data sets by algorithms designed to do so, but at its base the universal TOL rests on an unproven assumption about pattern that, given what we know about process, is unlikely to be broadly true. This is not to say that similarities and differences between organisms are not to be accounted for by evolutionary mechanisms, but descent with modification is only one of these mechanisms, and a single tree-like pattern is not the necessary (or expected) result of their collective operation . . . [[Abstract, "Pattern pluralism and the Tree of Life hypothesis," PNAS February 13, 2007 vol. 104 no. 7 2043-2049.]

Q4: But, the evidence shows that natural selection is a capable designer and can create specified complexity. Isn't that what Wicken said to begin with in 1979 when he said that "Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order’ . . ."?
A4: We need to be clear about what natural selection is and does. First, you need a reproducing population, which has inheritable chance variations [[ICV], and some sort of pressure on it from the environment, leading to gradual changes in the populations because of differences in reproductive success [[DRS] . . . i.e. natural selection [[NS] . . . among varieties; achieving descent with modification [[DWM]. Thus, different varieties will have different degrees of success in reproduction: ICV + DRS/NS --> DWM. However, there is a subtlety: while there is a tendency to summarise this process as "natural selection," this is not accurate. For the NS component does not actually ADD anything; it is a shorthand way of saying that less "favoured" varieties (Darwin spoke in terms of "races") die off, leaving no descendants. "Selection" is not the real candidate designer. What is being appealed to is that chance variations create new varieties: this is the actual supposed source of innovation -- the real candidate designer -- not the dying-off part. That puts us right back at the problem of finding the shoreline of Island Improbable by crossing a "sea of non-functional configurations" in which -- as there is no function -- there is no basis to choose from: where the evidence suggests a simple cell would have ~ 100,000 to 1 million bases in its genome, but new body plans -- dozens needed for the Cambrian life forms -- require more like ~ 10 - 100+ million bases. So, we cannot simply extrapolate a theory that may relate to incremental changes within an island of function to the wider situation of the origin of such functions. Macroevolution is not simply accumulated microevolution, not in a world of complex, configuration-specific function. (NB: The suggested "edge" of evolution by such mechanisms is often held to be about the level of a taxonomic family, like the cats or the dogs and wolves.)
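The ICV + DRS/NS --> DWM summary can be put in toy-model form. In the sketch below (all parameters are illustrative assumptions: a 20-bit "genome," a smooth in-island fitness gradient), note that -- as the text argues -- only the mutation step introduces anything new; the selection step merely culls.

```python
import random

rng = random.Random(2)

def evolve(pop, fitness, generations, mut_rate=0.02):
    """ICV + DRS/NS -> DWM. Mutation (ICV) is the only source of novelty;
    selection (DRS/NS) adds nothing new -- it just removes the less fit."""
    for _ in range(generations):
        # Inheritable chance variation: each bit of each offspring may flip.
        offspring = [[b ^ (rng.random() < mut_rate) for b in s] for s in pop]
        # Differential reproductive success: the less fit half leaves no descendants.
        offspring.sort(key=fitness, reverse=True)
        survivors = offspring[: len(pop) // 2]
        pop = survivors + [s[:] for s in survivors]
    return pop

n = 20
peak = [1] * n                                  # smooth uphill gradient, in-island
fitness = lambda s: sum(a == b for a, b in zip(s, peak))
pop = [[0] * n for _ in range(50)]              # start at the island's shoreline
best = max(map(fitness, evolve(pop, fitness, generations=200)))
print(best)
```

Mean fitness climbs steadily -- but only because a smooth, everywhere-defined fitness gradient was supplied. Make the fitness flat at zero off an island, as in the earlier discussion, and the sort step has nothing to sort on, so nothing accumulates.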
Q5: The notion of "islands of function" is Creationist nonsense, and so is that of "active information." Why are you trying to inject religion and "God of the gaps" into science?
A5: Unfortunately, this is not a caricature: there is an unfortunate tendency of Darwinist objectors to design theory to appeal to prejudice against theistic worldviews, and to suggest questionable motives, in ways that cloud issues and poison or polarise discussion. But, I am sure that if I were to point out that such Darwinists often have their own anti-theistic ideological agendas and have sought to question-beggingly redefine science as in effect applied atheism or the like, that would often be regarded as out of place. Let us instead stick to the actual merits. Such as: since intelligent designers are an observed fact of life, to infer that design is a credible or best causal explanation in light of tested, reliable signs that are characteristic of design, such as FSCO/I, is not an appeal to gaps. Similarly, to point to ART-ificial causes that leave characteristic traces, by contrast with those of chance and/or mechanical necessity, is not to appeal to "the supernatural," but to the action of intelligence, on signs that are tested and found to reliably point to it. Nor is design theory to be equated to Creationism, which can be seen as an attempt to interpret origins evidence in light of what are viewed as accurate records from the Creator. The design inference instead works back from inductive study of signs of chance, necessity and art, to cases where we did not observe the deep past but see traces closely similar to those for which, on observation, the only adequate cause is design. So also, once we see that complex function dependent on many parts that have to be properly arranged and coupled together sharply constrains the set of functional as opposed to non-functional configurations, the image of "islands of function" is not an unreasonable way to describe the challenge. Where also, we can summarise a specification as a structured list of YES/NO questions that give us a sufficient description of the working configuration.
Which in turn gives us a way to understand Kolmogorov-Chaitin complexity or descriptive complexity of a bit-string x, in simple terms: "the length of the shortest program that computes x and halts." This can be turned into a description of zones of interest T that are specified in large spaces of possible configurations, W. If there is a "simple" and relatively short description, D, that allows us to specify T without in effect needing to list and state the configs that are in T, E1, E2, . . En, then T is specific. Where also, if T is such that D describes a configuration-dependent function, T is functionally specific, e.g. strings of ASCII characters in this page form English sentences, and address the theme of origins science in light of intelligent design issues. In the -- huge! -- space of possible ASCII strings of comparable length to this page (or even this paragraph), such clusters of sentences are a vanishingly minute fraction relative to the bulk that will be gibberish. So also, in a world where we often use maps or follow warmer/colder cues to find targets, and where if we were to blindly select a search procedure and match it at random to a space of possibilities, we would be at least as likely to worsen as to improve odds of success relative to a simple blind at-random search of the original space of possibilities, active information that gives us an enhanced chance of success in getting to an island of function is in fact a viable concept.
Therefore, on balance, we have at least one reasonably possible general empirical sign of intelligent design, namely: functionally specific, complex organisation and associated information [[FSCO/I]. Given such a possible sign, we can now understand the key error in methodological naturalism. For, the “natural causes only” rule has the effect of implicitly imposing a philosophical -- specifically, materialistic -- conclusion on origins science studies before potentially observable facts can speak scientifically.
So, the focus of the regrettably contentious debate now emerges:
C: The ideological debate emerges: a priori evolutionary materialism and the [[re-]definition of science
(NB: “Truth,” here, is being understood more or less as Aristotle described it 2,300 years ago in Metaphysics 1011b: “. . . to say of what is that it is, and of what is not that it is not, is true.”)
science: a branch of knowledge conducted on objective principles [["objective: external to the mind; actually existing; real"] involving the systematized observation of and experiment with phenomena, esp. concerned with the material and functions of the physical universe. [[Concise Oxford Dictionary, (Oxford: Clarendon Press) 1990 -- and yes, they used the "z." (Emphasis and definition of objectivity from the same source added.)]
scientific method: principles and procedures for the systematic pursuit of knowledge [[= "the body of truth, information and principles acquired by mankind"] involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses. [[Webster's 7th Collegiate Dictionary, (Springfield, Mass: G & C Merriam), 1965. (Definition of "Knowledge" in the same dictionary inserted, and emphasis added.)]
So, the attempted imposition of the "rule" of methodological naturalism as an absolute criterion of science is in fact a very recent development, and is in part motivated by conflicts over origins science.
Plainly, however, issues of truth-seeking about our world and degree of warrant for conclusions cannot be settled by stipulating such an artificial constraint.
Moreover, a key danger of putting materialistic philosophical blinkers on science is that it can easily lead on to the practical establishment of materialistic ideology under false colours of “truth” or the closest practical approximation we can get to it.
Where that happens, those who object may then easily find themselves tagged and dismissed as pseudo-scientific (or even fraudulent) opponents of progress, knowledge, right and truth; which can then lead on to very unfair or even unjust treatment at the hands of those who wield power. Therefore, if religious censorship of science (as in part happened to Galileo etc.) was dangerous and unacceptable, materialist censorship must also be equally wrong.
[[From: “Billions and Billions of Demons,” NYRB, January 9, 1997. Bold emphasis and notes added.
F/N: The key part of this quote comes after some fairly unfortunate remarks where Mr Lewontin gives the "typical" example -- yes, we can spot a subtext -- of an ill-informed woman who dismissed the Moon landings on the grounds that she could not pick up Dallas on her TV, much less the Moon. This is little more than a subtle appeal to the ill-tempered sneer at those who dissent from the evolutionary materialist "consensus," that they are ignorant, stupid, insane or wicked. For telling counter-instance, Wernher von Braun, the designer of the rocket that took NASA to the Moon, was an evangelical Christian and a Creationist. [[Cf also here, here, here, here, here.]
Similarly, when Lewontin cites "eminent Kant scholar Lewis Beck" as declaring that "anyone who could believe in God could believe in anything," drawing as bottom-line, the inference that "[[t]o appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen," this is a sadly sophomoric distortion. One that fails to understand that, on the Judaeo-Christian theistic view, for miracles to stand out as signs pointing beyond the ordinary, there must first be an ordinary consistently orderly world, one created by the God of order who "sustains all things by his powerful word." Also, for us to be morally accountable to God -- a major theme in theism, the consequences of our actions must be reasonably predictable, i.e. we must live in a consistent, predictably orderly cosmos, one that would be amenable to science.
And, historically, it was specifically that theistic confidence in an orderly cosmos governed by a wise and orderly Creator that gave modern science much of its starting impetus from about 1200 to 1700.
For instance that is why Newton (a biblical theist), in the General Scholium to his famous work Principia, confidently said:
"[[t]his most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent and powerful Being . . . It is allowed by all that the Supreme God exists necessarily; and by the same necessity he exists always, and every where. [[--> i.e. he accepts the cosmological argument to God] . . . We know him only by his most wise and excellent contrivances of things, and final cause [[ --> i.e from his designs] . . . Blind metaphysical necessity, which is certainly the same always and every where, could produce no variety of things. [[--> i.e. necessity does not produce contingency]. All that diversity of natural things which we find suited to different times and places could arise from nothing but the ideas and will of a Being necessarily existing. [[--> That is, he implicitly rejects chance, Plato's third alternative and explicitly infers to the Designer of the Cosmos.]"
In such a context of order stamped in at creation and sustained through God's power, for good reason, God may then act into the world in ways that go beyond the ordinary, i.e. miracles are possible but will inevitably be rare and in a context that points to such a higher purpose.
For instance, the chief miracle claim of Christian thought, the resurrection of Jesus with 500+ witnesses is presented in the NT as decisive evidence for the truth of the gospel and authentication of God's plan of redemption.
So, since these contextual remarks have been repeatedly cited by objectors as though they prove the above cite is an out of context distortion that improperly makes Lewontin seem irrational in his claims, they have to be mentioned, and addressed, as some seem to believe that such a disreputable "context" justifies the assertions and attitudes above!)]
Mr Martin Mahner of the Center for Inquiry-Europe says much the same in his recent Science & Education article, "The Role of Metaphysical Naturalism in Science":
This paper defends the view that metaphysical naturalism is a constitutive ontological principle of science in that the general empirical methods of science, such as observation, measurement and experiment, and thus the very production of empirical evidence, presuppose a no-supernature principle . . . . Metaphysical or ontological naturalism (henceforth: ON) [["roughly" and "simply"] is the view that all that exists is our lawful spatiotemporal world. Its negation is of course supernaturalism: the view that our lawful spatiotemporal world is not all that exists because there is another non-spatiotemporal world transcending the natural one, whose inhabitants—usually considered to be intentional beings—are not subject to natural laws . . . . Both scientists and science educators keep being challenged by creationists of all shades, who try hard to reintroduce supernaturalist explanations into biology and into all the areas of science that concern the origin of the world in general and of human beings in particular. A major aspect of this debate is the role of ON in science . . . .
ON is not part of a deductive argument in the sense that if we collected all the statements or theories of science and used them as premises, then ON would logically follow. After all, scientific theories do not explicitly talk about anything metaphysical such as the presence or absence of supernatural entities: they simply refer to natural entities and processes only. Therefore, ON rather is a tacit metaphysical supposition of science, an ontological postulate. It is part of a metascientific framework or, if preferred, of the metaparadigm of science that guides the construction and evaluation of theories, and that helps to explain why science works and succeeds in studying and explaining the world. Now this can be interpreted in a weak and a strong sense. In the weak sense, ON is only part of the metaphysical background assumptions of contemporary science as a result of historical contingency; so much so that we could replace ON by its antithesis any time, and science would still work fine. This is the view of the creationists, and, curiously, even of some philosophers of science (e.g., Monton 2009). In the strong sense, ON is essential to science; that is, if it were removed from the metaphysics of science, what we would get would no longer be a science. Conversely, inasmuch as early science accepted supernatural entities as explainers, it was not proper science yet. It is of course this strong sense that I have in mind when I say that science presupposes ON.
The degree to which this article ties the course of reasoning to a strawman contrast, natural vs supernatural, is inadvertently revealing.
By sharp contrast, ever since Plato's day, it has been quite clear that the natural and the artificial can leave reliable empirical traces that can be detected and distinguished using appropriate observational and logical methods of analysis. So, it is very important to notice how this false contrast plays so pivotal a rhetorical and ideological role in polarising origins science; lest that utterly unwelcome Divine Foot steps in the doorway.
Mr Jerry Coyne, a University of Chicago professor in its Department of Ecology and Evolution (and an anti-design advocate, former Christian believer and New Atheist), has pointedly summed up where this view of science points for our civilisation (not just the USA), in a recent, regrettably strident USA Today article:
Religion in America is on the defensive.
Atheist books such as The God Delusion and The End of Faith have, by exposing the dangers of faith and the lack of evidence for the God of Abraham, become best-sellers. Science nibbles at religion from the other end, relentlessly consuming divine explanations and replacing them with material ones. Evolution took a huge bite a while back, and recent work on the brain has shown no evidence for souls, spirits, or any part of our personality or behavior distinct from the lump of jelly in our head. We now know that the universe did not require a creator. Science is even studying the origin of morality. So religious claims retreat into the ever-shrinking gaps not yet filled by science . . . .
Science and faith are fundamentally incompatible, and for precisely the same reason that irrationality and rationality are incompatible. They are different forms of inquiry, with only one, science, equipped to find real truth. And while they may have a dialogue, it's not a constructive one. Science helps religion only by disproving its claims, while religion has nothing to add to science . . . . any progress — not just scientific progress — is easier when we're not yoked to religious dogma. Of course, using reason and evidence won't magically make us all agree, but how much clearer our spectacles would be without the fog of superstition!
[[Column: "Science and religion aren't friends," USA Today, updated October 11, 2010.]
These articles thus reveal a pattern of ideological prejudice and anti-theistic bias that in effect seeks to redefine science and censor its methods through what boils down to an imposed, censoring, a priori evolutionary materialist atheism.

(NB: It may be useful to compare Casey Luskin's survey of the sides in the emerging controversy here, and to examine Vox Day's rebuttal to the wider New Atheist claims, in his The Irrational Atheist, here, and in print elsewhere. This work will, among other points, highlight that "progress" is not to be confused with improvement, with many ghosts from the last century as key witnesses. Similarly, it is helpful to look at Dan Peterson's remarks here on key contributions of the theistic, Judaeo-Christian worldview and civilisation to the foundation of Modern Science, and their significance for the design controversy. As for the deeper, "lack of evidence for God" theme so characteristic of New Atheists, one might find it helpful at introductory level to read here on the underlying selective hyper-skepticism, here on the philosophical debate on evidence and argument pointing to God, here on the problem of evil [[also raised by Mr Coyne in his article], and here on the specific evidence and warrant relating to Biblically anchored Christian faith.)
Then, such advocates -- having locked out evidence and otherwise reasonable possible explanations before the facts are allowed to speak -- now wish to turn around and showcase the confidently announced and headlined results of such ideological domination of scientific, science education and public outreach institutions as "proof" of the correctness of their ideology.
On fair comment: that unfortunate circumstance is little more than a grand exercise in ideological imposition and question-begging by a new "magisterium" dressed up in lab coats.
This ideologising of origins science therefore lends telling force and context to the following declaration by the US National Science Teachers Association (NSTA) Board of Directors in July 2000 -- a stance also reflected in the 2008 version of a well known, long-running US National Academy of Sciences (NAS) booklet:
[[S]cience, along with its methods, explanations and generalizations, must be the sole focus of instruction in science classes to the exclusion of all non-scientific or pseudoscientific methods, explanations, generalizations and products . . . .
Science, by definition, is limited to naturalistic methods and explanations and, as such, is precluded from using supernatural elements in the production of scientific knowledge. [[NSTA, Board of Directors, July 2000. Emphases added.]
University of California law professor Philip Johnson's November 1997 response to Lewontin -- that for scientific materialists "the materialism comes first; the science comes second" -- is therefore quite relevant.
Worse, Lewontin and others apparently do not realise that the claim, assumption or inference that “science [[is] the only begetter of truth” is not a claim within science but instead a philosophical claim about how we get warranted, credibly true belief, i.e. knowledge. So, they have contradicted themselves: appealing to non-scientific knowledge claims to try to deny the possibility of knowledge beyond science!
D: The significance of complex, functionally specific information/ organisation
In a classic 1979 comment -- the one partially cited in Q4 above -- well known origin of life theorist J S Wicken wrote that organisation "is functional complexity and carries information," being "non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order’."
Similarly, as early as 1973, Leslie Orgel, reflecting on Origin of Life, noted:
. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . .
Thus, the concept of complex specified information -- especially in the form of functionally specific complex organisation and associated information [FSCO/I] -- is NOT a creation of design thinkers like William Dembski. Instead, it comes from the natural progress and conceptual challenges faced by origin of life researchers by the end of the 1970's. [HT, Mung, fr. pp. 190 & 196:] These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [--> this is of course equivalent to the string of yes/no questions required to specify the relevant "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).] One can see intuitively that many instructions are needed to specify a complex structure. [--> so if the q's to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [--> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course, that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. 
Paley here), a notorious conundrum for advocates of evolutionary materialism; one that has led to the mutual ruin -- documented by Shapiro and Orgel -- of the metabolism-first and genes-first schools of thought, cf. here. Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes, and Menuge et al. would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 - 5 in the just linked. Finally, Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W -- for biological systems, functional islands. That puts up serious questions for the origin of dozens of body plans, reasonably requiring some 10 - 100+ million bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken's remarks of a few years later, as already cited, now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ]
Indeed, by 1982, the famous, Nobel-equivalent prize-winning astrophysicist (and life-long agnostic) Sir Fred Hoyle went on quite plain public record in an Omni Lecture:
Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true. [[Evolution from Space (The Omni Lecture [--> Jan 12th, 1982]), Enslow Publishers, 1982, p. 28.]
So, we first see that by the turn of the 1980's, scientists concerned with origin of life and related cosmology recognised that the information-rich organisation of life forms was distinct from simple order and required accurate description and appropriate explanation. To meet those challenges, they identified something special about living forms, CSI and/or FSCO/I. As they did so, they noted that the associated "wiring diagram" based functionality is information-rich, and traces to what Hoyle already was willing to call "intelligent design," and Wicken termed "design or selection." By this last, of course, Wicken plainly hoped to include natural selection.
But the key challenge soon surfaces: what happens if the space to be searched and selected from is so large that islands of functional organisation are hopelessly isolated relative to blind search resources?
For, under such "infinite monkey" circumstances, searches based on random walks from arbitrary initial configurations will be maximally unlikely to find such isolated islands of function. As the crowd-sourced Wikipedia summarises (in testimony against its ideological interest compelled by the known facts):
Even if the observable universe were filled with monkeys typing from now until the heat death of the universe, their total probability to produce a single instance of Hamlet would still be less than one in 10^183,800. As Kittel and Kroemer put it, “The probability of Hamlet is therefore zero in any operational sense of an event…”, and the statement that the monkeys must eventually succeed “gives a misleading conclusion about very, very large numbers.” This is from their textbook on thermodynamics, the field whose statistical foundations motivated the first known expositions of typing monkeys.
So, once we are dealing with something that is functionally specific and sufficiently complex, trial-and-error, blind selection on a random walk is increasingly implausible as an explanation, compared to the routinely observed source of such complex, functional organisation: design. Indeed, beyond a certain point, the odds of trial and error on a random walk succeeding fall to a "practical" zero.
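The scale of that "practical" zero can be illustrated by redoing the Hamlet arithmetic in log space. A brief sketch (the ~130,000-character length and 26-letter alphabet are the commonly used rough figures, assumed here for illustration):

```python
import math

# Rough figures commonly used for this estimate (assumptions):
chars = 130_000     # approximate character count of Hamlet
alphabet = 26       # letters only, ignoring case and punctuation

# log10 of the chance that one random typing matches exactly:
log10_p = -chars * math.log10(alphabet)
print(f"P(single random typing = Hamlet) ~ 10^{log10_p:.0f}")
# of order 10^-183,946 -- consistent with Wikipedia's bound of
# "less than one in 10^183,800", and a practical zero by any
# operational standard.
```

Even granting every particle in the observable universe as a "monkey" making trials at every Planck time since the big bang, the number of trials falls unimaginably short of 10^183,946, which is the force of the Kittel and Kroemer remark.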
The key to the functional complexity and specificity issue -- and to identifying that "certain point" -- lies in Wicken's comparison to wiring diagrams for electrical circuits. For instance, we may compare (a) an outline flow map of a petroleum refinery with (b) the similar network of the living cell's metabolic reaction pathways:
(a) Flow diagram for an Oil Refinery, showing the nodes, interfaces and arcs “wiring diagram” architecture (Source (full size): Wikipedia, GNU)
(b) Outline of the metabolic reactions network for the living cell (Source (full size): Raymond Cheong, fair use. More detailed chart here. Online Biochem text here. Cf. also quite similar node, interface and arc patterns in gene regulatory networks for a sea urchin embryo here. (Explanation, here.) )
That means that once we specify (i) the nodes, (ii) their interfaces to the arcs, and (iii) the set of arcs so joined to each node, we have specified the “wiring diagram” for the network that forms the basis for its function, especially where such function is not a matter of simple physical resemblance, but results from interactions and cause-effect chains among the nodes.
(This is a familiar exercise for anyone who has had to design, lay out, etch and then populate and solder together an electronics circuit board. [[A computer motherboard is a familiar example of such a circuit board.])
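The (i)-(iii) specification above is, in computing terms, just an adjacency structure, and its description length can be counted directly. A hypothetical sketch (the node names and tiny network are invented for illustration, loosely echoing the refinery diagram):

```python
import math

# A toy "wiring diagram": each node lists the nodes its arcs feed
# (all names invented for illustration):
wiring = {
    "crude_in":  ["distiller"],
    "distiller": ["reformer", "cracker"],
    "reformer":  ["blender"],
    "cracker":   ["blender"],
    "blender":   [],
}

# Crude description length: each arc endpoint selects one of the
# N nodes, i.e. about log2(N) bits per selection.
n_nodes = len(wiring)
n_arcs = sum(len(dests) for dests in wiring.values())
bits = n_arcs * math.log2(n_nodes)
print(f"{n_nodes} nodes, {n_arcs} arcs, ~{bits:.1f} bits to specify the wiring")
```

Scale the same count up to a metabolic map with thousands of nodes and arcs and the bit count climbs steeply, which is why the cell's reaction network, unlike a simple outline shape, is information-rich.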
(a) A generic dolphin-like shape can be specified with relatively few nodes and arcs. (Source: Wiki, GNU)
By contrast (and as has been raised as an objection), when we look at other cases, like say a fossil cast mould, the close resemblance to the presumed "natural" organism is readily accounted for by observed, non-sculptural forces and factors acting in the environment: rapid burial by water or volcanic ash, a cementitious rock matrix, solidification of a reasonably accurate mould before decay can act, etc.
(NB: We must observe carefully. While the preserved shape may indeed be functionally specific and complex, fossilisation does not account for the actual origin of the shape of the life-form discovered in the fossil bed.)
So, we note that possible signs of design should always be assessed in the context of a methodical explanatory analysis that first addresses the action of known and reasonably possible forces of chance and mechanical necessity. That is, we must always use an "explanatory filter" that first eliminates chance and mechanical necessity as plausible causes of the feature of interest, before turning to the possibility of design.
(NB: This is why a recent attempted objection, that the complex pattern of iron filings on a sheet of paper with a magnet under it is complex and specified but not designed, fails: the observed effect is directly explained by chance and mechanical necessity, so the question of design does not arise: intelligently directed contingency is not an issue where blind forces of chance and necessity -- on our observation -- are adequate. As well, the part that is specific is simple, and the part that is complex is random. The same is true for the shape of a snowflake: what is complex -- the elaborate details -- is due to random local atmospheric conditions, and what is simply specified -- the six-sided shape -- is due to forces of necessity based on the structure of the water molecule. The same holds if we were, say, to drop a double-handful of dice on a table: the simple law of necessity that drives the falling is distinct from the random complexity of the values the dice would take, at random. But, if we were to come back an hour later and see the same set of dice now all set to read 1 or some other recognisable pattern, we would instantly believe that they had been set to read like that by intelligent, deliberate action.)
In the case of the fossil mould or the like, we need to again underscore the key subtlety: by the very nature of such a mould, the mould is not the true source of the shape information it captures and transmits (with some loss of detail).
This points straight back to the question of where the complex, functional biological information that shaped the body plan of the fossilised creature originally came from. In turn, this leads to the question of the root source of the genetic and regulatory information and organisation in the underlying self-replicating living cells in the organism that gave its shape to the fossil mould. However, since that is in the remote, unobserved past, we need to first ground our thinking by identifying empirically reliable causal patterns that we may directly observe in the present. Once that is done, we may then reasonably argue that similar phenomena in the past are best explained on the reliable pattern.
There is a second subtlety, too (as Newton emphasised in his 1704 work on experimental scientific methods, Opticks, Query 31): as is true of all significant findings of science, the conclusions we may reach are provisional, i.e. subject to clarification and correction in light of further evidence and analysis.
As Denton describes such a cell:
[[Denton, Michael, Evolution: A Theory in Crisis, Adler, 1986, pp. 327 - 331. This work is a classic that is still well worth reading. Emphases added. (NB: The 2009 work by Stephen Meyer of Discovery Institute, Signature in the Cell, brings this classic argument up to date. The main thesis of the book is that: "The universe is comprised of matter, energy, and the information that gives order [[better: functional organisation] to matter and energy, thereby bringing life into being. In the cell, information is carried by DNA, which functions like a software program. The signature in the cell is that of the master programmer of life." Given the sharp response that it has provoked, the onward e-book response to attempted rebuttals, Signature of Controversy, would also be excellent, but sobering and sometimes saddening, reading.) ]
So (as the above video points out), we have to look at the evidence we have today, and then seek to explain the body of evidence based on what we know about (1) the forces and laws of nature, (2) chance circumstances, and (3) how intelligent designers act.
(Yes, how intelligent designers are observed to act. We are undeniably intelligent and designing, on pain of absurdity. Nor, can we explain away the capacity to design on our simply being embodied and/or having sufficiently complex brains. Just consider: the computer you may be viewing this on was the product of many teams of highly intelligent and knowledgeable designers of its hardware and software, and before them, generations of researchers in many related fields of science, engineering and mathematics.)
In the case of the living cell, the action of mRNA and tRNA in the Ribosome during protein synthesis is especially striking:
Fig. I.5 (a): Protein translation in the cell, showing mRNA acting as a discrete-state -- i.e. digitally -- coded tape that allows successive tRNA molecules carrying attached amino acids to chain a protein, step by step. (Source: Wiki, public domain.)
Fig. I.5(b): Vuk Nikolic's fairly detailed video animation of the protein translation process (Source: VN, Vimeo)
(Why? Not because we independently know that through direct observation, but because appealing to (3) on origins might “allow a Divine Foot in the door.” Or, more subtly, “the supernatural is not subject to repeatable observation and empirical testing, nor is it predictable -- persons make different decisions and act differently in similar situations; so we “can't” have the supernatural intruding into our scientific understanding of our origins.” Or, even: we pay scientists to look for the natural causes of natural events [["with great success" and "for centuries"] and pay teachers to teach science, not "another subject." However, to try to define just what words or phrases like “natural,” “supernatural” and “natural causes” mean on the assumption that methodological naturalism defines science, without improperly assuming or asserting philosophical materialism -- as Lewontin advocated and as Johnson rightly rebuked -- is quite a challenge. (Cf. remarks and exchanges here.))
Today, for the sake of argument only, let us make two assumptions:
1. First, let us assume that the design hypothesis is correct, i.e., that living things appear to be designed for a purpose because they were in fact designed for a purpose.
2. Second, let us assume [[presumably, by the "rule" of methodological naturalism] that the design hypothesis is not a scientific hypothesis, which means that ID proponents are not engaged in a scientific endeavor, or, as our opponents so often say, “ID is not science.”
From these assumptions, the following conclusion follows: If the design hypothesis is correct and at the same time the design hypothesis may not be advanced as a valid scientific hypothesis, then the structure of science prohibits it from discovering the truth about the origin of living things . . . .
No one can know with absolute certainty that the design hypothesis is false. It follows from the absence of absolute knowledge, that each person should be willing to accept at least the possibility that the design hypothesis is correct, however remote that possibility might seem to him. Once a person makes that concession, as every honest person must, the game is up. The question is no longer whether ID is science or non-science. The question is whether the search for the truth of the matter about the natural world should be structurally biased against a possibly true hypothesis. [["What if it's true?" Uncommon Descent, Aug. 6, 2010. (Emphasis added.)]
For, we did not -- and cannot -- directly observe the remote past, so origins science theories are in the end attempted “historical” reconstructions of what we think the past may have been like. Such reconstructions are based on investigating which of the possible explanations seems "best" to us on balance in light of the evidence. However, to censor out a class of possible explanations ahead of time through imposing materialism plainly undermines the integrity of this abductive method.
A reasonable way to begin is with an expanded view of the generic scientific method of investigation as we look at different aspects of observable objects or events:
It is important not to miss the key point of the result you see illustrated in figure 17.3: evolution is an observation, not a conclusion. Because the dating of the samples is independent of what the samples are like, successive change through time is a data statement. While the statement that evolution is the result of natural selection is a theory advanced by Darwin, the statement that macroevolution has occurred is a factual observation.
Hunter aptly rebuts:
A sequence of fossils is an observation of macroevolution? It would be difficult to imagine a more misleading statement than this. And it is not as though this was an unintended mistake that just happened to elude the 100+ reviewers. Johnson and Losos went out of their way to make and elaborate this message, and the army of evolutionist reviewers all nodded their heads. [[Emphasis added.]
For, first, events claimed to have happened 50 - 35 million years in the deep past are simply not open to direct observation; as, we were not there to see for ourselves, nor do we have a generally acceptable and credible record of the true facts from those who were.
Notwithstanding, the claim is being made that the dating is "independent" of the reconstructions and artistic photo-paintings made based on fossils recovered in certain layers of rocks. [[This is not quite true, as Part II below (on cosmology and timelines) will show, and as this notorious case demonstrates; i.e. to a sometimes worrying extent, dates of rocks and fossils are mutually dependent and premised on the generally accepted picture of evolution across time, even when radioactivity methods are used.]
Then, on the strength of this "independence," the young student looking for instruction in Biology is told by professors, publishers and reviewers, that macroevolution -- presumably meant to imply the grand microbes-to-man evolutionary claim (not just the illustrated claim of a sequence of fossil species) -- is an observed "fact." So, a tower of assumption-laden inferences is now projected onto the past, from the present observations of fossilised animal remains in layers of rock and, probably, radioactive dating estimates on associated volcanic rocks. Finally, it is presented to us as if it were a direct factual observation of the deep past; and, the subtext is plain: only fools dispute facts.
For, the key danger of such tactics is that they can actually close our minds to the true facts when we meet them.
This is because, if we confuse a speculative and possibly error-prone tower of inferences for a "fact," we will then be inclined to think the real facts that contradict such errors are false, simply because they cut across what we have been led to perceive as "fact." This danger becomes even more evident when we meet the sort of attitude expressed in a Wikipedia discussion on the claimed "fact" of evolution:
. . . When scientists say "evolution is a fact" they are using one of two meanings of the word "fact". One meaning is empirical, and when this is what scientists mean, then "evolution" is used to mean observed changes in allele frequencies or traits of a population over successive generations.
Another way "fact" is used is to refer to a certain kind of theory, one that has been so powerful and productive for such a long time that it is universally accepted by scientists. When scientists say evolution is a fact in this sense, they mean it is a fact that all living organisms have descended from a common ancestor (or ancestral gene pool) even though this cannot be directly observed. [["Evolution as theory and fact," coloured emphasis added. Acc: Aug. 7, 2010.]
The first use simply describes observed minor variations in life forms that are sometimes called micro-evolution, which is indeed a non-controversial fact. The second usage, however, is plainly loaded with precisely the sort of confusion of towers of inference for actual observed objective facts that can potentially close our minds to the truth. In addition, it appeals to the naked authority of the particular school of thought and philosophy that dominates a given day. Indeed, by improperly using the term "universally," it actually tries to disenfranchise qualified but dissenting scientists.
Therefore, we must always be very careful indeed to distinguish actual credible facts of direct observation from inferences built on them, and we must always be open to the possibility that what we think are facts -- especially on matters that we cannot directly observe -- may just possibly embed an error or two. That immediately means that a critical survey like this can play an important role in building a balanced understanding of current thought on origins.
In addition, the use of such tactics in the name of education also means that in this critical survey course we need to address not only issues in Biology, but also in linked Geology, Geophysics and Geochemistry; as well as in related Astrophysics and Cosmology. For, we need to develop sufficient familiarity with the scientific, knowledge-based and worldview-level issues associated with claims like that to critically assess them for ourselves and draw our own conclusions as informed citizens, as educators and as students.
F: Focus of the IOSE course
Therefore, this independent origins science course – bearing in mind various perspectives and controversies -- will seek to clarify and discuss the decisive scientific facts, theories, ideas, issues and alternatives on origins, regarding:
Such questions need solid, fair answers of responsible length. However, a brief initial “bullet point” step-by-step summary is also helpful:
Fig. I.7 (a): Is the observed Cosmos fine-tuned for life? (Source: Adapted from 2001principle.net, with corrections.)
Fig. I.7(b): Dr Michael Strauss discussing the significance of cosmological fine tuning. (NB: Cf. a more detailed discussion of the fine tuning and possible worldview level implications here.)
Fig. I.7(c): A critical discussion of Hawking-style (and broadly similar) multiverse speculations, by Spitzer and Gordon.
Fig. I.7(d): An excerpt on the privileged planet coincidences [[ Privileged Planet at Amazon]
Would you not say to yourself, `Some super-calculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule?' Of course you would. . . . A common sense interpretation of the facts suggests that a superintellect has monkeyed with the physics, as well as chemistry and biology, and that there are no blind forces worth speaking about in nature. I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [["The Universe: Past and Present Reflections." Engineering and Science, November, 1981. pp. 8–12]
Fig. I.8: Yockey's representation of the DNA processing system as a Shannon-style information communication system that transfers genetic messages from DNA to proteins, with possibility of noise. [[NB: This shows how DNA is not just "analogous" to a code but is a part of a digitally coded information system; complete with possibilities for noise.] (Source: Yockey, Hubert, Information Theory, Evolution, and the Origin of Life, Cambridge University Press, 2005. Fair use. )
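Yockey's communication-system picture can be sketched in code: the source message is a string of DNA triplets, the decoding table is the genetic code, and noise is a single-symbol change in transit. The four-codon table below is a genuine subset of the standard genetic code; the message and the mutated base are assumptions for illustration only:

```python
# A small, genuine subset of the standard genetic code (DNA codons):
CODON_TABLE = {"ATG": "Met", "TTT": "Phe", "GTT": "Val", "GAA": "Glu"}

def translate(dna: str) -> list:
    """Decode the message three bases at a time, as the ribosome does."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

message = "ATGTTTGAA"            # an illustrative three-codon message
print(translate(message))        # ['Met', 'Phe', 'Glu']

# Noise on the channel: changing the 4th base (T -> G) yields a
# different received protein -- a missense mutation.
noisy = message[:3] + "G" + message[4:]
print(translate(noisy))          # ['Met', 'Val', 'Glu']
```

The point of the diagram (and of this toy) is structural: DNA-to-protein transfer has a source, an encoder/decoder pair governed by a code table, and a noise-prone channel -- the defining elements of a Shannon communication system.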
p --> For further instance, NewScientist was compelled to observe in a recent article on Mycoplasma pneumoniae, one of the simplest known bacteria, that “[[t]he inner workings of a supposedly simple bacterial cell have turned out to be much more sophisticated than expected.”
[[NB: New Scientist comments: “This "CT scan" shows the "bodily" positions of molecular motors vital to a Mycoplasma pneumoniae cell [[thus, arguably, illustrating a case of irreducible complexity]. They include ribosomes (yellow) for making proteins, chaperonins for folding them (red), RNA polymerases for "reading" genes and pyruvate dehydrogenase enzymes for producing energy.” Luis Serrano of the European Molecular Biology Laboratory (EMBL) observed that "At all three levels [[as investigated: (i) “the RNA molecules, or transcripts, produced from its DNA,” (ii) “the metabolic reactions that occurred in it,” and (iii) “every multi-protein complex the bacterium produced”], we found M. pneumoniae was more complex than we expected."]
s --> But more than that, if all phenomena in the cosmos are shaped and controlled in the end by blind chance and necessity, however mediated, then the credibility of reasoning irretrievably breaks down.
[[Thus, they hold that t]he Gods exist not by nature, but by art, and by the laws of states, which are different in different places, according to the agreement of those who make them; and that the honourable is one thing by nature and another thing by law, and that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [[Relativism, too, is not new; complete with its radical amorality rooted in a worldview that has no foundational IS that can ground OUGHT. (Cf. here for Locke's views and sources on a very different base for grounding liberty as opposed to license and resulting anarchistic "every man does what is right in his own eyes" chaos leading to tyranny. )] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. They are told by them that the highest right is might [[ Evolutionary materialism leads to the promotion of amorality], and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions [[Evolutionary materialism-motivated amorality "naturally" leads to continual contentions and power struggles; cf. dramatisation here], these philosophers inviting them to lead a true life according to nature, that is, to live in real dominion over others [[such amoral factions, if they gain power, "naturally" tend towards ruthless tyranny], and not in legal subjection to them.
Nature is not cruel, only pitilessly indifferent. This lesson is one of the hardest for humans to learn. We cannot accept that things might be neither good nor evil, neither cruel nor kind, but simply callous: indifferent to all suffering, lacking all purpose.
We humans have purpose on the brain. We find it difficult to look at anything without wondering what it is “for,” what the motive for it or the purpose behind it might be. The desire to see purpose everywhere is natural in an animal that lives surrounded by machines, works of art, tools and other designed artifacts – an animal whose waking thoughts are dominated by its own goals and aims . . . .
Somewhere between windscreen wipers and tin openers on the one hand, and rocks and the universe on the other, lie living creatures. Living bodies and their organs are objects that, unlike rocks, seem to have purpose written all over them . . . . The true process that has endowed wings, eyes, beaks, nesting instincts and everything else about life with the strong illusion of purposeful design is now well understood.
It is Darwinian natural selection . . . . The true utility function of life, that which is being maximized in the natural world, is DNA survival. But DNA is not floating free; it is locked up in living bodies, and it has to make the most of the levers of power at its disposal. Genetic sequences that find themselves in cheetah bodies maximize their survival by causing those bodies to kill gazelles. Sequences that find themselves in gazelle bodies increase their chance of survival by promoting opposite ends. But the same utility function -- the survival of DNA -- explains the “purpose” of both the cheetah [--> i.e. predator] and the gazelle [--> i.e. prey] . . . .
The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are being slowly devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst and disease. It must be so. If there is ever a time of plenty, this very fact will automatically lead to an increase in population until the natural state of starvation and misery is restored.
In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won’t find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil and no good, nothing but pitiless indifference . . . . DNA neither cares nor knows. DNA just is. And we dance to its music. [[ “God’s Utility Function,” Sci. Am. Aug 1995, pp. 80 - 85.]
x --> In critiquing Provine's remarks from a Judaeo-Christian perspective, Kyle Butt brings out a significant implication: [[NB: This article raises the issue of the problem of evil, here emphasising the problem of natural evil; probably the strongest argument in the atheists' arsenal, but one that only works by implicitly assuming that good and evil, thus moral obligation, are real; while ducking the implication that the only valid worldview in a world in which OUGHT is real, is one that has a foundational IS that adequately grounds ought. And materialism -- scientific atheism -- today has no such IS. So, the objecting atheist actually has no grounds to stand on to make the argument; his argument, in the end, is self-defeating, and so the proper response is to insist that such an atheist face that issue before proceeding further. (Cf. here for a preliminary discussion of the problem of evil from a Christian perspective.)]