Thursday, June 10, 2010

Introduction and Summary




FOCUS: The scientific study of our origins, and of the origins of our world, is both highly important and quite controversial. The recent imposition of a priori materialism on origins science, through the implications of methodological naturalism, is a key aspect of that controversy. The recent rise of the design inference on empirically reliable signs of intentionally directed configuration (i.e. design) is another. So, in this summary for the IOSE course, these pivotal issues are documented and explored in some detail. Then, a step by step summary of the main topics of the full course is presented, for those who want to see the overall structure of the course in a nutshell. Some points for discussion give a flavour of what is to follow in the detailed units, and serve as a stimulus for one's own independent thought.
___________________


TOPICS:

INTRODUCTION & SUMMARY

A: Methodological Naturalism (& Evolutionary Materialism) vs. empirical signs of Nature vs. Art

--> William Dembski on what Design theorists do to make a scientific inference to design


--> Stephen C. Meyer's summary of the positive design theory case, on DNA in the cell as an example of functionally specific, complex coded information 

--> On Newton's rules of reasoning (vs. a priori naturalistic impositions)

--> Distinguishing nature and art on empirical signs

B: A key concept: complex (and often functionally) specified information

-->  On islands of function


--> Dembski, in NFL, defines CSI

--> Dawkins' Weasel, and kin

--> Of needles, haystacks & blind search

--> A simplified form of the 2005 Dembski Chi metric for CSI

--> Q & A on active info, guided search, Kolmogorov complexity & specification, universal probability/plausibility bounds, etc.


C: The ideological debate emerges: a priori evolutionary materialism and the [[re-]definition of science

--> Lewontin's a priori evolutionary materialism

--> Is Lewontin JUSTIFIED in such a priori materialism?

--> The US National Academy of Sciences and NSTA materialist redefinitions of science

--> Philip Johnson's rebuttal to a priori materialism in origins science

D: The significance of complex, functionally specific information/ organisation

--> Denton's description of the cell as a world of supreme technology

E: Scientific Methods and Explanatory Filters

--> Is evolution a "fact"?

F: Focus of the IOSE course


G: A step-by-step overview of the IOSE course

--> cosmology

--> life on earth

--> mind, man, conscience and community

--> Plato's warning

REMARKS

FOR DISCUSSION

NEXT:
Significance of Origins Science ideas and issues

INTRODUCTION AND SUMMARY: The scientific study of our origins helps us probe the roots of our existence. This gives it great importance. 
 
So, ever since scientists began to investigate origins in the 1700s and 1800s, science has been a key part of how we try to learn the truth about ourselves, how we came to be, and our place in our world, “from hydrogen to humans.” This work is based on scientific methods: carefully observed evidence, reasoned analysis and informed discussion, projecting from the observed patterns of the present to try to plausibly reconstruct our roots in the deep past, i.e. beyond historical records. 

Origins science is therefore a highly important -- albeit sometimes controversial -- field of study and research, especially on the live-wire topic of evolution. So, let us survey, starting with:

A: Methodological Naturalism (& Evolutionary Materialism) vs. empirical signs of Nature vs. Art:

In recent decades, some educators, public policy advocates -- and, most importantly, some scientists -- through adopting methodological naturalism, have thought and taught that science can only work properly if it is understood and defined in terms of a search for “natural causes” or "material causes." In the words of Harvard Biology professor Richard Lewontin (to be further discussed below):
. . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated . . . [["Billions and billions of demons," NYRB, Jan 1997.]

That is, before the facts are allowed to speak for themselves, such evolutionary materialist thinkers hold that the only acceptable origins science theories are those designed to fit the view that undirected, blindly mechanical forces of nature and chance circumstances -- acting on matter and energy in one form or another -- triggered purposeless changes and developments across time, and are adequate to explain the world of life. In many cases, they may even assert that anything questioning such a view or its assumptions "is not science."


Which, to such minds, is close to saying: nonsense.

But, it should be clear that some very big assumptions are being made; assumptions that -- on their face -- could easily bias or even warp attempts to sincerely find out what really happened in the deep past of origins.  


And, is it not reasonable that science should seek to discover and provide good observational evidence and objectively unbiased explanations about what really happens in our world -- and (so far as that is possible) about what really happened in the remote past of origins, without a priori ideological blinkers? 

This is in effect what ID thinkers and scientists have argued in recent years. 

In simple terms, noted ID scientist William Dembski argues:
We know from experience that intelligent agents build intricate machines that need all their parts to function [[--> i.e. he is specifically discussing "irreducibly complex" objects, structures or processes for which there is a core group of parts all of which must be present and properly arranged for the entity to function (cf. here, here and here)], things like mousetraps and motors. And we know how they do it -- by looking to a future goal and then purposefully assembling a set of parts until they’re a working whole. Intelligent agents, in fact, are the one and only type of thing we have ever seen doing this sort of thing from scratch. In other words, our common experience provides positive evidence of only one kind of cause able to assemble such machines. It’s not electricity. It’s not magnetism. It’s not natural selection working on random variation. It’s not any purely mindless process. It’s intelligence  . . . . 
When we attribute intelligent design to complex biological machines that need all of their parts to work, we’re doing what historical scientists do generally. Think of it as a three-step process: (1) locate a type of cause active in the present that routinely produces the thing in question; (2) make a thorough search to determine if it is the only known cause of this type of thing; and (3) if it is, offer it as the best explanation for the thing in question. 

[[William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, pp. 20-21, 53 (InterVarsity Press, 2010). HT, CL of ENV & DI.]
Philosopher of science Stephen Meyer argues the same point in more detail in his response to a hostile review of his key 2009 design theory book, Signature in the Cell:
The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question . . . .  In order to [[scientifically refute this inductive conclusion]  Falk would need to show that some undirected material cause has [[empirically] demonstrated the power to produce functional biological information apart from the guidance or activity a designing mind. Neither Falk, nor anyone working in origin-of-life biology, has succeeded in doing this . . . . 
He then gives even more details, with particular reference to the origin of cell-based life:
The central problem facing origin-of-life researchers is neither the synthesis of pre-biotic building blocks (which Sutherland’s work addresses) or even the synthesis of a self-replicating RNA molecule (the plausibility of which Joyce and Tracey’s work seeks to establish, albeit unsuccessfully . . . [[Meyer gives details in the linked page]). Instead, the fundamental problem is getting the chemical building blocks to arrange themselves into the large information-bearing molecules (whether DNA or RNA) . . . . 
For nearly sixty years origin-of-life researchers have attempted to use pre-biotic simulation experiments to find a plausible pathway by which life might have arisen from simpler non-living chemicals, thereby providing support for chemical evolutionary theory.  While these experiments have occasionally yielded interesting insights about the conditions under which certain reactions will or won’t produce the various small molecule constituents of larger bio-macromolecules, they have shed no light on how the information in these larger macromolecules (particularly in DNA and RNA) could have arisen.  Nor should this be surprising in light of what we have long known about the chemical structure of DNA and RNA.  As I show in Signature in the Cell, the chemical structures of DNA and RNA allow them to store information precisely because chemical affinities between their smaller molecular subunits do not determine the specific arrangements of the bases in the DNA and RNA molecules.  Instead, the same type of chemical bond (an N-glycosidic bond) forms between the backbone and each one of the four bases, allowing any one of the bases to attach at any site along the backbone, in turn allowing an innumerable variety of different sequences.  This chemical indeterminacy is precisely what permits DNA and RNA to function as information carriers.  It also dooms attempts to account for the origin of the information—the precise sequencing of the bases—in these molecules as the result of deterministic chemical interactions . . . . 
[[W]e now have a wealth of experience showing that what I call specified or functional information (especially if encoded in digital form) does not arise from purely physical or chemical antecedents [[--> i.e. by blind, undirected forces of chance and necessity].  Indeed, the ribozyme engineering and pre-biotic simulation experiments that Professor Falk commends to my attention actually lend additional inductive support to this generalization.  On the other hand, we do know of a cause—a type of cause—that has demonstrated the power to produce functionally-specified information.  That cause is intelligence or conscious rational deliberation.  As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process.  Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence.  This conclusion is not based upon what we don’t know.  It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information . . . . 
[[In conclusion,] it needs to be noted that the [[now commonly asserted and imposed limiting rule on scientific knowledge, the] principle of methodological naturalism [[ that scientific explanations may only infer to "natural[[istic] causes"] is an arbitrary philosophical assumption, not a principle that can be established or justified by scientific observation itself.  Others of us, having long ago seen the pattern in pre-biotic simulation experiments, to say nothing of the clear testimony of thousands of years of human experience, have decided to move on.  We see in the information-rich structure of life a clear indicator of intelligent activity and have begun to investigate living systems accordingly. If, by Professor Falk’s definition, that makes us philosophers rather than scientists, then so be it.  But I suspect that the shoe is now, instead, firmly on the other foot. [[Meyer, Stephen C: Response to Darrel Falk’s Review of Signature in the Cell, SITC web site, 2009. (Emphases and parentheses added.)]

Thus, in the context of a pivotal example -- the functionally specific, complex information stored in the well-known genetic code -- we see laid out the inductive logic and empirical basis for design theory as a legitimate (albeit obviously controversial) scientific investigation and conclusion.

It is worth pausing to lay out (courtesy the US NIH) a diagram of what is at stake here:


Fig I.0: DNA as a stored code exhibiting functionally specific complex digital information (HT: NIH)
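The scale of the configuration space behind such a stored code can be illustrated with a short calculation. The sketch below is ours, not from any cited source: it simply assumes (per the four-state G/C/A/T alphabet discussed above) that an n-base DNA chain has 4^n possible sequences, and compares that to Dembski's commonly cited universal probability bound of 10^150 (roughly 2^500, i.e. about 250 bases' worth of configurations):

```python
# Illustrative sketch: size of the configuration space of a DNA string.
# Assumption (from the discussion above): each position independently takes
# one of the four bases G, C, A, T, so an n-base chain has 4**n sequences.

def dna_config_space(n_bases: int) -> int:
    """Number of distinct sequences for a chain of n_bases DNA monomers."""
    return 4 ** n_bases

# Dembski's commonly cited universal probability bound.
UPB = 10 ** 150

if __name__ == "__main__":
    for n in (100, 250, 500):
        space = dna_config_space(n)
        verdict = "exceeds" if space > UPB else "is within"
        print(f"{n:>4} bases: about 10^{len(str(space)) - 1} "
              f"configurations, which {verdict} the 10^150 bound")
```

Even a modest 250-base string already has more configurations than the bound, which is the point the following discussion of "islands of function" and blind search builds on.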


In this context -- to understand the kind of scientific reasoning involved and its history -- it is also worth pausing to excerpt Newton's Rules of [[Inductive] Reasoning in [[Natural] Philosophy, which he used to introduce the Universal Law of Gravitation. This -- then controversial (action at a distance? why? . . . ) -- law was in effect generalised from the falling of apples on Earth, through the deduced rule that also explained the orbital force of the Moon, and thence Kepler's mathematically stated empirical laws of planetary motion. 

So, Newton needed to render plausible how he projected universality:
Rule I [[--> adequacy and simplicity]
We are to admit no more causes of natural things than such as are both true [[--> it is probably best to take this liberally as meaning "potentially and plausibly true"] and sufficient to explain their appearances.
To this purpose the philosophers say that Nature does nothing in vain, and more is in vain when less will serve; for Nature is pleased with simplicity, and affects not the pomp of superfluous causes.
Rule II [[--> uniformity of causes: "like forces cause like effects"]
Therefore to the same natural effects we must, as far as possible, assign the same causes.
As to respiration in a man and in a beast; the descent of stones in Europe and in America; the light of our culinary fire and of the sun; the reflection of light in the earth, and in the planets.
Rule III [[--> confident universality]
The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.

For since the qualities of bodies are only known to us by experiments, we are to hold for universal all such as universally agree with experiments; and such as are not liable to diminution can never be quite taken away. We are certainly not to relinquish the evidence of experiments for the sake of dreams and vain fictions of our own devising; nor are we to recede from the analogy of Nature, which is wont to be simple, and always consonant to [398/399] itself . . . .  
Rule IV [[--> provisionality and primacy of induction]

In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.

This rule we must follow, that the arguments of induction may not be evaded by [[speculative] hypotheses. 
In effect, Newton advocated for provisional, empirically tested, reliable and adequate inductive principles resting on "simple" summaries or explanatory constructs. These were to be as accurate to reality as we experience it as we can get them; i.e. a scientific theory seeks to be true to our world, provisional though it must be. Such principles rest on induction from patterns of observed phenomena and -- through Rule II, "like causes like" -- were to be confidently projected to cases we do not observe directly, subject to correction on further observations, not to impositions of speculative metaphysical notions. Also, this requires that we show from observation that a proposed cause can actually produce the relevant effects. (Thus, Sir Charles Lyell proposed to explain geological phenomena in the deep past "by reference to causes NOW IN OPERATION.")

This leads to the way that -- through a form of induction known as abductive inference to best explanation (cf. here for basics, and here for more complex details and issues) --
(a) certain explanatory hypotheses, models and theories [[E/M/T] can be  identified as "best" current explanations of bodies of observed facts F1, F2, . . . Fn, and

(b) such E/M/T's can then be onwards correlated with bodies of accepted theories in science [[BOATs], leading to the onward development of science.
An informational graphic will help us clarify:



Since inductive reasoning -- roughly: arguments where givens or observed "facts" in evidence render a conclusion significantly more likely to be true [[cf. here and here] -- with its provisionally inferred general patterns, is itself now deemed suspect in some quarters, it may help to note the following from Avi Sion, on what he descriptively calls the principle of universality:
We might . . . ask – can there be a world without any ‘uniformities’? A world of universal difference, with no two things the same in any respect whatever is unthinkable. Why? Because to so characterize the world would itself be an appeal to uniformity. A uniformly non-uniform world is a contradiction in terms.
Therefore, we must admit some uniformity to exist in the world.
The world need not be uniform throughout, for the principle of uniformity to apply. It suffices that some uniformity occurs.
Given this degree of uniformity, however small, we logically can and must talk about generalization and particularization. There happens to be some ‘uniformities’; therefore, we have to take them into consideration in our construction of knowledge. The principle of uniformity is thus not a wacky notion, as Hume seems to imply . . . .
The uniformity principle is not a generalization of generalization; it is not a statement guilty of circularity, as some critics contend. So what is it? Simply this: when we come upon some uniformity in our experience or thought, we may readily assume that uniformity to continue onward until and unless we find some evidence or reason that sets a limit to it. Why? Because in such case the assumption of uniformity already has a basis, whereas the contrary assumption of difference has not or not yet been found to have any. The generalization has some justification; whereas the particularization has none at all, it is an arbitrary assertion.
It cannot be argued that we may equally assume the contrary assumption (i.e. the proposed particularization) on the basis that in past events of induction other contrary assumptions have turned out to be true (i.e. for which experiences or reasons have indeed been adduced) – for the simple reason that such a generalization from diverse past inductions is formally excluded by the fact that we know of many cases [[of inferred generalisations; try: "we can make mistakes in inductive generalisation . . . "] that have not been found worthy of particularization to date . . . .
If we follow such sober inductive logic, devoid of irrational acts, we can be confident to have the best available conclusions in the present context of knowledge. We generalize when the facts allow it, and particularize when the facts necessitate it. We do not particularize out of context, or generalize against the evidence or when this would give rise to contradictions . . .[[Logical and Spiritual Reflections, BK I Hume's Problems with Induction, Ch 2 The principle of induction.]

In all of this inductive analysis, we are to be responsible and humble, as Newton also pointed out in his 1704 Opticks, Query 31, where he laid out the generic scientific method much as we are now taught it in school:
[[Scientific] Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths . . . [[.]
And although the arguing from Experiments and Observations by Induction be no Demonstration of general Conclusions; yet it is the best way of arguing which the Nature of Things admits of, and may be looked upon as so much the stronger, by how much the Induction is more general. And if no Exception occur from Phaenomena, the Conclusion may be pronounced generally.
But if at any time afterwards any Exception shall occur from Experiments, it may then begin to be pronounced with such Exceptions as occur. 
By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. [[paragraph breaks added.]

These ideas, broadly speaking, have so triumphed that everyone gives them at least formal acknowledgement. By that light, we may summarise "science at its best" as:
(a) The unfettered — but ethically and intellectually responsible — observational evidence-led pursuit of the truth about our world (i.e. an accurate and reliable description and explanation of it), that is
(b) based on observation, experiment, measurement, and inductive inference of laws and explanatory models or theories, where also it
(c) progresses in light of the mutually respectful discussion among the informed, that works to build (and sometimes to reform or replace) a reasonable consensus, i.e. the paradigm.
Such, then, is the "ideal." (Cf. also Lakatos on science vs. pseudoscience.)

But in our day, through the claim that it is a mere methodological constraint, science has been improperly subjected to the requirement that its explanations (especially on origins) must be naturalistic, i.e. must in the end trace to blind forces of chance and necessity acting on matter and energy across space and time. That is, it is often imposed, assumed or implied that intelligence must be derivative of such material phenomena.

(With this, Newton -- as his General Scholium to Principia and his Query 31 to Opticks indicate -- would sharply beg to differ. And that implied, or even openly asserted, "must" raises the ironic question of whether such "Lewontinian" evolutionary materialist scientism, even while dressed in a lab coat and proudly flying the banners of science and progress, is drifting into being an ideological imposition on science. This issue will be pivotal for our exploration of origins science.)

Philip Johnson therefore summarises and critically responds to the underlying a priori materialistic impositions, attitudes and views here, in light of many worldview-level, logical and epistemological concerns. (NB: Epistemology is the philosophical study of how we know, how confidently and how reliably.)

It is thus worth pausing to watch or listen, as a first step for this course;  not least, to help clear the air for fresh thinking:



But, all of this imposition of methodological naturalism also embeds a crucial scientific mistake, and that is a pivotal focus for this critical survey course. 
 
For, as an observed pattern in the present, we routinely encounter: (a) natural causes tracing to forces of chance and/or mechanical necessity, and (b) ART-ificial (i.e. intelligent) causes tracing to the action of purposeful agents. [[Where, we can and do routinely recognise intelligence based on our common experience of our own capabilities: "capacities to reason, solve problems, think abstractly, comprehend ideas and language, and learn." ]  And, (c) for certain phenomena -- especially functionally specific, complex organisation and associated information [[FSCO/I] -- the only empirically observed adequate causes are intelligent ones.
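This trichotomy of causes -- necessity, chance, design -- is the heart of the "explanatory filter" approach discussed throughout this course, and it can be summarised as a simple decision procedure. The function below is an illustrative sketch only (the predicate names are ours, not a published API): it maps the investigator's three empirical judgements -- contingency, complexity, specification -- onto the three verdicts:

```python
# Illustrative sketch of the design-inference "explanatory filter" described
# above. The three boolean inputs stand for the empirical judgements the
# filter asks an investigator to make; the names are ours, for illustration.

def explanatory_filter(contingent: bool, complex_: bool, specified: bool) -> str:
    """Return the filter's best-explanation verdict for an observed feature."""
    if not contingent:
        # Law-like regularity: a dropped heavy object reliably falls.
        return "necessity"
    if not (complex_ and specified):
        # Contingent but either simple or unspecified: chance suffices,
        # e.g. the particular values shown by fairly tossed dice.
        return "chance"
    # Contingent, complex AND functionally specified (FSCO/I): on the
    # argument above, the only observed adequate cause is intelligence.
    return "design"

if __name__ == "__main__":
    print(explanatory_filter(False, False, False))  # a falling stone
    print(explanatory_filter(True, False, False))   # tumbling dice
    print(explanatory_filter(True, True, True))     # coded text
```

Note the deliberate ordering: necessity is tested first, then chance, so that design is inferred only as a residual best explanation when both are judged inadequate.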

Such can be seen intuitively, as the Roman thinker and statesman Cicero did c. 50 BC when he wrote as follows -- anticipating our modern sense of the extreme implausibility of complex, specific, functional information coming about  through blind chance and mechanical forces such as gravity:
. . . Is it possible for any man to behold these things, and yet imagine that certain solid and individual bodies move by their natural force and gravitation, and that a world so beautifully adorned was made by their fortuitous concourse? He who believes this may as well believe that if a great quantity of the one-and-twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form the Annals of Ennius. I doubt whether fortune could make a single verse of them. How, therefore, can these people assert that the world was made by the fortuitous concourse of atoms, which have no color, no quality—which the Greeks call [poiotes], no sense? [Cicero, THE NATURE OF THE GODS BK II Ch XXXVII, C1 BC, as trans Yonge (Harper & Bros., 1877), pp. 289 - 90.]
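Cicero's intuition can be given rough numbers. Assuming (purely for illustration; the length is our choice) his "one-and-twenty letters" and a single verse of about forty characters, the chance of a blind scattering hitting one specific verse is 1 in 21^40:

```python
# Rough illustration of Cicero's point: the odds that randomly scattered
# letters from a 21-letter alphabet spell out one specific 40-letter verse.
# The verse length of 40 is our assumption, chosen only for illustration.

ALPHABET_SIZE = 21   # Cicero's "one-and-twenty letters"
VERSE_LENGTH = 40    # assumed length of a single verse, in letters

configurations = ALPHABET_SIZE ** VERSE_LENGTH
print(f"Possible {VERSE_LENGTH}-letter strings: "
      f"about 10^{len(str(configurations)) - 1}")
# Only one of these is the target verse, so the chance of a random
# scattering producing it is roughly 1 in 10^52 -- hence Cicero's
# "I doubt whether fortune could make a single verse of them."
```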

Moreover, as Cicero intuitively understood on common sense, it is a well known, easily shown, reliably observable fact that (d) both natural and artificial causes leave observable and distinct empirical traces that we may therefore study and distinguish using scientific methods:



(a) NATURE: The Old Man of the Mountain, New Hampshire, USA; before its collapse in 2003. (Source: Wikipedia)

(b) ART: The profile of George Washington, Mt. Rushmore, South Dakota, USA; while under construction. (Source: Wikipedia)

Fig. I.1 (i) a: A known case of Nature vs. Art: chance circumstances and blind mechanical forces, vs. purpose. (NB: the images also show how art may imitate but be distinct from nature.)





Fig. I.1 (i) b: Dr Don Johnson et al. present a video summary of how we routinely tell nature from art, based on the rule of functionally specific, complex information. (Cf. Dr Robert Marks here on evolutionary informatics.)

As the two facial profiles in Fig. I.1 (i) a show: on our experience and observation, art is consistently the product of intent -- design. By contrast, natural occurrences are driven by (i) blind mechanical forces of "necessity" (e.g. a dropped heavy object falls) and/or by (ii) chance (e.g. having fallen, fair dice tumble and settle to particular values effectively at random). 


Fig. I.1 (ii): Stonehenge (art -- a complex and specific circular arrangement of transported large stones aligned with the sunrise at  Summer Solstice) vs Giant's Causeway (nature -- column-joined basalt rock from a lava flow, complex but unspecified).

Similarly, we may read concerning Stonehenge, from Caroline Alexander in the June 2008 National Geographic:
Blurring past on the A 303 thoroughfare that cuts heedlessly almost across the monument's very entrance, Stonehenge appears as a cluster of insignificant protrusions on the big, otherwise featureless plain; and yet, even from this profane and glancing vantage, the great-shouldered silhouette is so unmistakably prehistoric that the effect is momentarily of a time-warp cracking onto a lost world . . . its massive lintels are bound to their uprights by mortise-and-tenon joints taken straight from carpentry, an eloquent indication of just how radically new this hybrid monument must have been . . . The people who built Stonehenge had discovered something hitherto unknown, hit upon some truth, turned a corner -- there is no doubt that the carefully placed stones are fraught with meaning.
But what in fact do they mean? Despite countless theories offered over centuries, no one knows. Stonehenge is the most famous relic of prehistory in Europe and one of the best known, most contemplated monuments in the world -- and we have no clear idea what the people who built it used it for. ["If the Stones Could Speak: Searching for the Meaning of Stonehenge," National Geographic, June 2008, p. 36.]
 So, we can see -- in a source not noted for its Design Theory-friendly leanings -- that even in a case where we do not know the purpose or meaning of an evidently designed object, the recognisably (or even demonstrably) artificial: 
(a) will not be credibly the product of observed blind chance and mechanical forces acting in its context, 

(b) will be specifically functional [[it has to do something in particular that depends on having a correct configuration. e.g. a key and a lock or letters in a word], and 

(c) will be sufficiently complex that the specific, functional configuration is not credibly the result of happenstance.

The mysterious Voynich Manuscript further underscores this ability to detect design even in absence of knowledge of the particular purpose or meaning:




Fig. I.1 (iii): Page 64 of the mysterious Voynich Manuscript, showing unknown glyphs of unknown meaning (if any) in a string data structure that has statistical patterns reminiscent of natural languages, and "word" repetition patterns that may reflect certain East Asian languages. The plant images seem by and large to be composites, but are in effect two-dimensional visual representations whose organisation reflects patterns of plant life. 

The case of the mysterious manuscript allows us to contrast the overwhelmingly likely result of random spatters of ink and paint on similar pages: a mess, not a complex, specific and evidently purposeful organised arrangement of parts forming an evidently functional whole. This instantly lets us see how an evidently functionally specific, purposeful and complex organisation of parts into a whole -- one that would not at all be likely to arise from chance distributions of the same or similar components -- is a strong empirical indicator of intelligent configuration. This holds even in cases like this one, where we do not know who, or why, or how, or when, or even what the meaning or precise function of the complex, apparently purposeful arrangement is.

A commonly heard counter-point is that in the natural world we have populations of organisms that, through chance variation and differential reproductive success, can adapt and innovate components step by step across time, creating the appearance of design. 


However, this begs a big question: the very system by which self-replication of cells (and thus reproduction of life forms) works is itself based on a precise, highly specific, complex and detailed functional arrangement of parts. So, how did this come about?

That is why, in the often overlooked Ch II of his Natural Theology, Paley raised the thought exercise of considering the implications of a time-keeping, self-replicating watch:


Suppose, in the next place, that the person who found the watch should after some time discover that, in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself -- the thing is conceivable; that it contained within it a mechanism, a system of parts -- a mold, for instance, or a complex adjustment of lathes, baffles, and other tools -- evidently and separately calculated for this purpose . . . .

The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done -- for referring the construction of the watch to design and to supreme art . . . . He would reflect, that though the watch before him were, in some sense, the maker of the watch, which, was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair -- the author of its contrivance, the cause of the relation of its parts to their use.

In short, especially because we now know that the way living cells replicate themselves and carry out their functions is crucially dependent on stored information and an intricate arrangement of molecular nano-machines -- machines that read, maintain, transcribe, and implement instructions that are digitally coded in data strings that use the four-state DNA monomer possibilities, G/C/A/T (with G/C/A/U for the transcribed RNA code that makes proteins) -- we have many additional reasons to see that self-replication (and, onwards, reproduction) is itself a case of the apparently purposeful, functionally specific, complex organisation and associated information [FSCO/I] that in our experience strongly points to design.  

It is worth pausing to excerpt the above linked Wikipedia article on information, as a case of testimony against known ideological interest; so that we may be clear about an absolutely central concept for our discussions and reflections in this course:
Information in its most restricted technical sense is an ordered sequence of symbols that can be interpreted as a message. Information can be recorded as signs, or transmitted as signals. Information is any kind of event that affects the state of a dynamic system. Conceptually, information is the message (utterance or expression) being conveyed. This concept has numerous other meanings in different contexts. [1] Moreover, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, representation, and especially entropy . . . .

Information is any type of pattern that influences the formation or transformation of other patterns. In this sense, there is no need for a conscious mind to perceive, much less appreciate, the pattern. Consider, for example, DNA. The sequence of nucleotides is a pattern that influences the formation and development of an organism without any need for a conscious mind.

Systems theory at times seems to refer to information in this sense, assuming information does not necessarily involve any conscious mind, and patterns circulating (due to feedback) in the system can be called information. In other words, it can be said that information in this sense is something potentially perceived as representation, though not created or presented for that purpose . . .
With these ideas in mind, we can now see that when Paley writes of an imaginary watch that:
in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself -- the thing is conceivable; that it contained within it a mechanism, a system of parts -- a mold, for instance, or a complex adjustment of lathes, baffles, and other tools -- evidently and separately calculated for this purpose . . . 
 . . . he is speaking, as a thought exercise, of an information-rich arrangement of components that in effect carries out a step by step procedure that allows the "parent" watch to replicate itself, so creating a "child" watch. He probably had in mind the sort of automata that were then celebrated, which depend on cam bars to control step by step -- algorithmic -- procedures; and/or possibly punched paper tape or card machines like the Jacquard loom then used in the cloth-making industry. Such devices would process stored information (cams, in analogue form; cards or tape, in digital form) and would control effector machines that could in principle carry out the self-replication he envisioned. He then highlighted that the addition of such a capacity would point to further evidence of design.

So also, it is not good enough to brush Paley's remarks aside because the above quote is found in a work of natural theology; for, even theologians can sometimes think logically and correctly on evidence, or indeed, can carry out properly scientific analysis -- which could even happen in a book on "natural theology." (NB: Newton's Principia, which presented his laws of motion and gravitation, contains a "General Scholium," which is largely on natural theology and related philosophy.)  

Nor will it do to suggest that an intelligent cause for functionally specific complex organisation is not a "scientific" explanation; not if science is concerned to discover the truth about our world and how things happen in it. And, if one suggests that science "must" only explain by the "natural," not the "supernatural," let us observe carefully: this is a rhetorically loaded, distractive strawman that objectors have substituted for what is actually being discussed -- distinguishing (a) natural causes acting by chance and physical necessity and (b) intelligent causes acting by art through deliberate and purposeful choice, on (c) observationally well-warranted, empirically reliable signs.

In short, it is plainly not in order to beg the questions at stake by attaching dismissive, distractive, rhetorically loaded labels; an unfortunately all too common resort of advocates of what we may descriptively label, Lewontinian evolutionary materialism (today's form of what was already an ancient view in Plato's day).  

And we must note, too, our commonplace experience of the source of FSCO/I, e.g. this very paragraph: the widely observed pattern is that -- once we can see the cause directly -- organised, functionally specific complex organisation and associated information [FSCO/I] generally traces to intelligent and purposeful action. The whole Internet, the groaning shelves in the major libraries of our world, and the technological world all around us stand in joint, eloquent testimony to that.

Thus, in a reasonable context, it is at least seriously arguable that functionally specified highly complex organisation is a recognisable, observable and empirically reliable sign of intentionally directed configuration.

That is, of design.

This, we may elaborate slightly, on a remark by leading ID researcher, William Dembski:
. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)
Further along in the same book, No Free Lunch (2002), Dembski expands on what complex specified information (CSI) means -- where "specified" is often "cashed out" as function in the biological context, hence: functionally specific complex information (FSCI). As a result, we may quite often recognise designs on a common-sense basis when we see a characteristic sign: evidently purposeful and complex configuration that does not appear to be reasonably explained on happenstance.

Given some regrettably contentious debates, however, we now need to pause, elaborate and explain that common-sense inference with some primary documentation and step- by- step points that among other things respond to common rebuttal or dismissal attempts:


B: A key concept: complex (and often functionally) specified information


Pardon a bit of a stiff read so early in our discussion, but it is necessary to now "quote from the horse's mouth," not least so that some regrettably common and misleading strawman caricatures will be exposed for what they are: rhetorical tricks, not serious responses on the merits of fact and solid reasoning. [[NB: cf. here for a corrective list of such cases, here for a FAQ, and here, too, for the Creationists' similar complaint.] 

As soon as the cite -- I almost wrote "extraction" -- is over, we will then explain it point by point, so please take heart. 

Here, then, is Dr William Dembski on the Complex Specified Information (CSI) concept pioneered by leading Origin of Life researcher Leslie Orgel and others from the 1970's [cf. below], and which Dembski has built upon:
p. 148:“The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.
I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites: 
Wouters, p. 148: "globally in terms of the viability of whole organisms,"
Behe, p. 148: "minimal function of biochemical systems,"
Dawkins, pp. 148 - 9: "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by ran-| dom chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction."
On p. 149, he roughly cites Orgel's famous remark from 1973, which exactly cited reads:
 In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . .
And, p. 149, he  highlights Paul Davis in The Fifth Miracle: "Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity."] . . .”

p. 144: [[Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[effectively the target hot zone in the field of possibilities] subsumes E [[effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
This is a bit complex, but it is "from the horse's mouth" from a foundational book and anticipates many misconceptions and opposed talking points that have been used to try to caricature, drown out and dismiss its message. However, since -- as already noted -- it may be a bit overwhelming for the average reader, it needs some boiling down. (Sorry, you will likely need to go through this slowly a few times, it is vital.)

Before we even take up details, we need to pause to underscore the idea that when a set of matching components must be arranged so they can work together to carry out a task or function, this strongly constrains both the choice of individual parts and how they must be arranged to fit together.

A jigsaw puzzle is a good case in point. 

So is a car engine -- as anyone who has had to hunt down a specific, hard to find part will know. 

So are the statements in a computer program -- there was once a NASA rocket that veered off course on launch and had to be destroyed by triggering the self-destruct because of -- I think it was -- a misplaced comma.  

The letters and words in this paragraph are like that too.  

That's why (at first, simple level) we can usually quite easily tell the difference between:
A: An orderly, periodic, meaninglessly repetitive sequence: FFFFFFFFFF . . . 


B: Aperiodic, evidently random, equally meaningless text: y8ivgdfdihgdftrs . . . 


C: Aperiodic, but recognisably meaningfully organised sequences of characters: such as this sequence of letters . . .

In short, to be meaningful or functional, a correct set of core components has to match and must be properly arranged; and while there may be some room to vary, it is not true that just any part, inserted in just any way, can fit.  
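As an illustrative aside (not part of the design-inference argument itself), the three cases can be roughly distinguished by a general-purpose compressor: an orderly repetitive string squeezes down to almost nothing, while random gibberish barely compresses; meaningful text typically lands in between. The particular strings and the use of zlib here are our own illustrative assumptions:

```python
import random
import string
import zlib

def csize(s: str) -> int:
    # Bytes needed for the zlib-compressed form of s, at maximum effort
    return len(zlib.compress(s.encode(), 9))

N = 130
a = "F" * N                                                            # A: orderly, periodic
random.seed(0)
b = "".join(random.choice(string.ascii_lowercase) for _ in range(N))   # B: gibberish
c = ("functionally specific complex organisation is, in our experience, "
     "a reliable sign of design, as this very sentence illustrates for us")[:N]  # C: meaningful

# The repetitive string compresses far below the other two
print(csize(a) < csize(b) and csize(a) < csize(c))   # → True
```

Note the caveat: compressibility tracks order versus randomness (the Kolmogorov-complexity angle flagged in the topics list), while "specification" in Dembski's sense is cashed out by function or meaning, which a compressor does not see.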

The unidirectional, flow-through lungs we commonly see in birds provide a biological example of this effect (and of the implied challenges to incremental evolution based on small random changes that must provide functional advantages in ecological niches in order to be fixed in a viable population). In these animals, two sets of inflatable sacs are used to pump and pull air through the lungs, which differs from the more familiar bellows-type lung such as we have.


As Michael Denton observed in his epochal 1985 Evolution, a Theory in Crisis:
[[T]he structure of the lung in birds and the overall functioning of the respiratory system is quite unique. No lung in any other vertebrate species is known which in any way approaches the avian system. Moreover, it is identical in all essential details in birds as diverse as humming birds, ostriches and hawks . . . .

Just how such an utterly different respiratory system could have evolved gradually from the standard vertebrate design is fantastically difficult to envisage, especially bearing in mind that the maintenance of respiratory function is absolutely vital to the life of an organism to the extent that the slightest malfunction leads to death within minutes. Just as the feather cannot function as an organ of flight until the hooks and barbules are coadapted to fit together perfectly, so the avian lung cannot function as an organ of respiration until the parabronchi system which permeates it and the air sac system which guarantees the parabronchi their air supply are both highly developed and able to function together in a perfectly integrated manner . . . [[Evolution, a Theory in Crisis, 1985, pp. 210 - 12.]
In short, we see here a case of an island of irreducibly complex function, on an organ that is literally vital, and that irreducible complexity would arguably block incremental evolution: intermediates between a bellows lung and a bird's flow-through lung would most likely be lethally defective -- and would at the very least be arguably disadvantageous -- and so would be selected against by the very same natural selection that is so often appealed to. For, without the right components -- properly arranged and integrated with the nervous control system, the blood circulatory system and the muscular systems -- the bird would most likely die within minutes. In short, the way functionally specific complex organisation leads to islands of function in wider configuration spaces is highly relevant to major biological systems, not just technological ones.


As a direct result, in our general experience, and observation, if the functional result is complex and specific enough, the most likely cause is intelligent choice, or design.  

This has a consequence. For, this need for choosing and correctly arranging then hooking up correct, matching parts in a specific pattern implicitly rules out the vast majority of possibilities and leads to the concept of islands of function in a vast sea of possible but meaningless and/or non-functional configurations. And, arguably to design -- the commonly observed cause of FSCO/I -- as the best explanation for such cases. 

So also, if you would dispute the point that such islands of function dependent on specific clusters of combinations of particular parts exist in seas of non-function, as a typical and even reliably observable pattern, it is necessary to support that claim by observed example. That is, show a case where by blind chance and equally blind mechanical necessity, complex functional organisation emerges from non-functional arrangements, and grows in complexity and degree of successful operation from one step to the next; with particular reference to the rise of new major body plans in life forms. Variations and adaptations within existing body plans do not answer to this. That is, the challenge is to get to shorelines of islands of function in seas of non-function, or else to show that there is a vast continent of function that can be incrementally accessed through a branching tree of life.

On fair comment, despite the various lines of evidence and the many headlined icons of evolution that are put forth to make Darwinian evolutionary mechanisms seem plausible, this challenge has not been met after over 150 years of trying. Consequently, it is equally fair comment to observe that such functionally specific, complex organisation and associated information have only one empirically observed, adequate cause: purposeful, intelligently directed configuration, i.e. design. Therefore, design theorists argue that the world of life points on such empirically reliable signs to design as a key causal factor in the origins of life as we see and experience it. But, in turn, that has to be shown, not simply asserted.

So, in more detailed steps of thought:
i: When we see as an actual event a piece of complex, specified information, E -- e.g. the string of letters forming the text of the above paragraphs (NB: this is a case of definition by key examples and close family resemblance) --  it is usually one instance of a set of other similar possible cases in T, a target zone or more generally a specific (identifiable) zone of interest. Where in turn, we may see that the component parts, p1, p2, . . . pn, may be arranged in a much wider set of possible states or ways, W.
(This gives more precise definition to the idea of an island of meaningful, functional configurations of a string of symbols, sitting in the midst of a vast ocean of possible arrangements of values for the symbols in the string p1-p2-p3- . . . -pn, by far and away mostly gibberish, like: ufiwdhgj. Islands of function in oceans of non-function.)

ii: Next, a bit of background math: if we multiply 2 by itself 500 times, we see that 2^500 = 3.27 * 10^150 in scientific notation; where that is a way to write a large number that is about 3 followed by 150 zeros. (Also, the caret symbol is used to represent raising the 10 to a power, here 150. This is fairly common in computing circles. Sometimes, we might see this as 3.27 e 150 or the like.)
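The arithmetic here is easy to check; this short aside uses Python's exact big integers to confirm the figure quoted above:

```python
n = 2 ** 500                     # computed exactly, no rounding
digits = len(str(n))             # count of decimal digits
print(f"2^500 = {n / 10**150:.2f} * 10^150 ({digits} digits)")
# → 2^500 = 3.27 * 10^150 (151 digits)
```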

iii:  So, if the specifying description of T is such that information cases like E that fit in target zone T within the much wider set of possibilities W, require at least 500 bits of functional, meaningful information to fulfill the specification, the odds of the particular, observed case E being picked up by pure chance from the set of all strings of bits of the same length, W, are about 1 in 10^150. Where,
for 10^80 atoms in the observed cosmos, changing quantum state every Planck time [~ 5* 10^-44 s, rounded down to 10^-45 s],  for its credible thermodynamic lifespan [10^25 s, or about 50 mn times the 13.7 BY usually held to have elapsed since the big bang], the number of possible quantum states is ~ 10^150. (NB: there are about 10^20, a hundred billion billion, Planck times in the fastest known nuclear interactions. The fastest chemical interactions would take up about 10^30 Planck times.)
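The 10^150 state count in the note is simply the product of the three rounded estimates it lists, as a short sketch shows (the inputs are the text's own rounded values):

```python
atoms          = 10 ** 80   # atoms in the observed cosmos (estimate)
states_per_sec = 10 ** 45   # ~1 / Planck time, rounded from 1/(5 * 10^-44 s)
lifespan_s     = 10 ** 25   # thermodynamic lifespan used in the text, in seconds

print(atoms * states_per_sec * lifespan_s == 10 ** 150)   # → True
```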

iv: Now, the description of T is not the same as E, i.e. we can say what sort of thing E is, without directly quoting the bit-string that specifies E.
[[To clarify by example, think of E as being a case of -- a specification, T: "a string of 72 ASCII characters in English that fits this context of discussion" -- say: "this is a text string of seventy-two ascii characters in english languag" -- or the like; and yes, I had to truncate to make it fit. E is one instance of such 72-character strings, the Target Zone T. Many such strings are possible, but the constraints that the strings be (a) in English and (b) responsive to the context of this discussion sharply constrict the list of acceptable strings. Certainly, by comparison to the set of over 10^150 possible strings of 72 ASCII characters, W; by far and away most of which will be gibberish. Practical tests support the common-sense conclusion that if we see such a string, E, it is best explained as being the product of an intelligent writer. Spaces of about 10^50 possibilities [[24 ASCII characters] have been successfully searched by random walks and trial and error, but spaces of 10^150 or more are far harder than that. For comparison, the DNA in the "simplest" observed cases of metabolising and self-replicating cells is from more than 100,000 to 1 million bases, or more than 9.98 * 10^60,205 possibilities. To get to complex new body plans for multicellular life forms requires explaining the further origin of more than 10 million functional DNA bases (more than 8.19 * 10^6,020,599 possibilities), dozens of times over. Scientists have not observed a smooth, step by step trial and error progress from simpler but functional systems, to the observed cases that does not involve intelligent direction, and there is no credible step by step progressive chance-driven trial and error path from say "See Spot run," to the equivalent, a book; or from say a Hello World program, to a word processor or a system controller. The best, abundantly empirically supported explanation for functionally specific, complex information and organisation is design.  
(NB: For more on inference to the best current explanation and its role in scientific knowledge, cf. here and here. Note also, concerns and limitations as discussed here.)]
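The configuration-space sizes quoted in the clarifying note can be re-derived via logarithms, which sidesteps writing out numbers with millions of digits. The helper below is our own illustrative sketch; its outputs restate the counts already given in the text:

```python
import math

def space_size(states_per_position: int, length: int) -> str:
    """Express states_per_position^length in scientific notation via log10."""
    log10 = length * math.log10(states_per_position)
    exp = math.floor(log10)
    mantissa = 10 ** (log10 - exp)
    return f"{mantissa:.2f} * 10^{exp}"

print(space_size(128, 72))        # 72-character ASCII strings: over 10^150
print(space_size(4, 100_000))     # 100,000 DNA bases → 9.98 * 10^60205
print(space_size(4, 10_000_000))  # 10 million DNA bases → 8.19 * 10^6020599
```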
v: The odds of the specifically describable string E being picked by pure chance are about 1 in 10^150, as noted, but of course, I (an intelligent designer) used knowledge, skill and Open Office Writer's properties feature to create it in about three minutes. 

vi: At this point, it is common for some to suggest that Dawkins' "Mt Improbable" can be climbed by the easy back-slope, step by step to the peak, as chance variations that give an increase in performance are rewarded with advantages that allow them to become the next stage of progress. And, of course, the "methinks it is like a weasel" example shows how a string of 28 random characters can, after maybe 40 - 60 generations, become the target phrase. For instance, in his best-selling The Blind Watchmaker (1986), pp. 48 ff. Dawkins published the following computer simulation "run":

 1 WDL*MNLT*DTJBKWIRZREZLMQCO*P
 2 WDLTMNLT*DTJBSWIRZREZLMQCO*P
10 MDLDMNLS*ITJISWHRZREZ*MECS*P
20 MELDINLS*IT*ISWPRKE*Z*WECSEL
30 METHINGS*IT*ISWLIKE*B*WECSEL
40 METHINKS*IT*IS*LIKE*I*WEASEL
43 METHINKS*IT*IS*LIKE*A*WEASEL
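Dawkins did not publish his program's code or parameters, so the following is only a reconstruction of the cumulative-selection scheme he describes: each generation copies the current phrase with random "mutations" and keeps the copy closest to the fixed target. The mutation rate, offspring count, and retention of the parent are our assumptions:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(phrase: str) -> int:
    # Number of characters already matching the known target
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase: str, rate: float = 0.05) -> str:
    # Copy the phrase, randomising each character with probability `rate`
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in phrase)

def weasel(seed: int = 1, offspring: int = 100, max_gens: int = 10_000):
    random.seed(seed)
    phrase = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for gen in range(1, max_gens + 1):
        # Keep the parent in the pool so the best score never regresses
        phrase = max([phrase] + [mutate(phrase) for _ in range(offspring)],
                     key=score)
        if phrase == TARGET:
            return gen, phrase
    return max_gens, phrase

gens, final = weasel()
print(final, "reached in", gens, "generations")
```

Note that `score` compares every "progeny" phrase against the already-known target; that latched comparison to a distant ideal is exactly the feature Dawkins himself concedes makes the model unlike natural selection.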

vii: What is not so commonplace is to see an acknowledgement of the implications of the stunning admission Dawkins had to make even as he presented the Weasel phrase "example" of the power of so-called "cumulative selection," even when the caveats are cited:
I don't know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. [[NB: cf. Wikipedia on the Infinite Monkeys theorem here, to see how unfortunately misleading this example is.] Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence 'Methinks it is like a weasel', and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence? . . . .

It . . . begins by choosing a random sequence of 28 letters ... it duplicates it repeatedly, but with a certain chance of random error – 'mutation' – in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL . . . . What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed . . . .

Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. [[TBW, Ch 3, as cited by Wikipedia, various emphases, highlights and colours added.]
viii: In short, here cumulative selection "works" by rewarding non-functional phrases that happen to be closer to the already known target.  This is the very opposite of natural selection on already present difference in function. Dawkins' weasel is not a good model of what evolution is supposed to do.

ix: At most, it illustrates that once we are already on an island of function, chance variation and differences in reproductive success may lead to specialisation to fit particular niches. Which is accepted by all, including modern Young Earth Creationists. And, more sophisticated genetic algorithms have very similar failings. For, (a) they implicitly start within an island of function, that (b) has a predominantly smoothly rising slope that gently leads to peaks of performance so that "hill-climbing" on "warmer/colder" signals will usually get you pointed the right way.
x: In short, GA's do not only start on the shores of an island of function, but also the adaptation targets are implicitly pre-loaded into the program [[even in cases where they are allowed to wiggle about a bit] and so are the "hill-climbing algorithm" means to climb up to them. This point has been highlighted by famed mathematician Gregory Chaitin, in a recent paper, Life as Evolving Software (Sept. 7, 2011):
. . . we present an information-theoretic analysis of Darwin’s theory of evolution, modeled as a hill-climbing algorithm on a fitness landscape. Our space of possible organisms consists of computer programs, which are subjected to random mutations. We study the random walk of increasing fitness made by a single mutating organism. [[p.1]
xi: Plainly, this more sophisticated approach is a model of optimising adaptation by generic hill-climbing, within an island of function; i.e. this is at best a model of micro-evolution within a body plan, not origin of such complex, integrated body plans.
xii: So, while engineers -- classic intelligent designers! -- may well find such algorithms quite useful in some cases of optimisation and system design, they fail the red-herring-strawman test when they are presented as models of microbe to man evolution.
xiii: For, they do not answer to the real challenge posed by the design theorists: how to get to an island of complex function -- i.e. to a new body plan that for first life would require something like 100,000 base pairs of DNA and associated molecular machinery, and for other body plans from trees to bees, bats, birds, snakes, worms and us, at least 10 million bases, dozens of times over -- without intelligent direction.

xiv: Instead, we can present a key fact, one that Weasel actually inadvertently demonstrates. That is: in EVERY instance of such a case of CSI, E from such a zone of interest or island of function, T, where we directly know the cause by experience or observation, it originates by similar intelligent design.
And, given the long odds involved to get such an E by pure chance -- you cannot have a hill-climbing success amplifier until you first have functional success! -- that is no surprise at all.
(The Internet and the major libraries of the world, together, have billions of successful tests of this claim. On years of experience with suggested counter examples, they are consistently dubious or outright errors, as a rule being illustrations of the very point they were meant to oppose. E.g. the drawings of canals on Mars from 100 years ago, if they were of real canals on Mars would be evidence of a Martian civilisation. Alas, they are inaccurate, and instead are drawings that were intelligently designed to show what the astronomers of that time thought they saw on Mars.) 
xv: Why should this be so? Let us consider: in the 10^17 or so seconds on its conventional timeline, the 10^57 or so atoms of our solar system (our practical "world") will have gone through maybe as many as some 10^117 Planck-time quantum states. (We note, it takes about 10^30 such states for the fastest chemical reactions, and many more for the organic-chemistry type reactions relevant to so much of cell based life.) But 10^150 possibilities is 10^33 times as much as that, so our solar system could not search out more than a negligible fraction of 10^150 possibilities. Where, we can see that a string of 500 bits has 2^500 = 3.27*10^150 possible configurations. For just 500 bits [[~ 72 ASCII characters], on the gamut of our solar system, there is just too much haystack to reasonably expect to find the proverbial lost needle.
xvi: To understand this better, let us work back from how it takes ~ 10^30 Planck time states for the fastest chemical reactions, and use this as a yardstick, i.e. in 10^17 s, our solar system's 10^57 atoms would undergo ~ 10^87 "chemical time" states, about as fast as anything involving atoms could happen. That is 1 in 10^63 of 10^150. So, let's do an illustrative haystack calculation:
 Let us take a straw as weighing about a gram and having comparable density to water, so that a haystack weighing 10^63 g [= 10^57 tonnes] would take up about 10^57 cubic metres (one cubic metre per tonne, at the density of water). The stack, assuming a cubical shape, would be about 10^19 m across. Now, 1 light year = 9.46 * 10^15 m, or about 1/1,000 of that distance. If we were to superpose such a notional haystack, some 1,000 light years on a side, on the zone of space centred on the sun, and leave in all stars, planets, comets, rocks, etc., and take a random sample equal in size to one straw, then by absolutely overwhelming odds we would get straw, not star or planet etc. That is, such a sample would be overwhelmingly likely to reflect the bulk of the distribution, not special, isolated zones in it.
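The solar-system scale numbers in (xv)-(xvi) and the haystack figures just above can be checked with a few lines of arithmetic; all inputs are the rounded estimates used in the text:

```python
import math

# Solar-system "search resources" (rounded values from the text)
atoms   = 1e57        # atoms in the solar system
seconds = 1e17        # conventional timeline, in seconds
planck  = 5e-44       # Planck time, in seconds

quantum_states = atoms * (seconds / planck)
print(f"~10^{math.log10(quantum_states):.0f} Planck-time states")   # → ~10^117

# Haystack: 10^63 straws at ~1 g each, at roughly the density of water
volume_m3  = 1e63 / 1e6            # grams → tonnes → cubic metres (1 t ≈ 1 m^3)
side_m     = volume_m3 ** (1 / 3)  # side of a cubical stack
light_year = 9.46e15               # metres per light year
print(f"stack side ~ {side_m / light_year:.0f} light years")        # → ~1057
```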
xvii: In the case of biology, since 1953 we have known that in the heart of the cell lies the coded information storing molecule DNA. It is worth excerpting the remark made by Sir Francis Crick, one of the discoverers, to his son Michael in a March 19, 1953 letter:
"Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another)." [[Emphases added. From about 1961 on, that code has been identified, and is now routinely used in scientific work.]
xviii: So, whether we are interested in the origin of life as Dr Eigen was, or in the origin/evolution of new complex structures requiring the equivalent of 500 or more bits of information, we need to look at the CSI information generating hurdle.


xix: Later on (2005), Dembski provided a slightly more complex formula, that we can quote and simplify, showing that it boils down to a "bits from a zone of interest [[in a wider field of possibilities] beyond a reasonable threshold of complexity" metric:
χ = – log2[10^120 ·ϕS(T)·P(T|H)].
--> χ is "chi" and ϕ is "phi"
xx: To simplify and build a more "practical" mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally:
Ip = - log p, in bits if the base is 2. That is where the now familiar unit, the bit, comes from. Where we may observe from say -- as just one of many examples of a standard result -- Principles of Comm Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2:
Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [[--> My nb: i.e. the a posteriori probability in my online discussion here is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by
I_k = (def) log_2 (1/p_k)   (13.2-1)
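The textbook definition can be checked numerically. A minimal sketch (the four-message, equal-probability example is illustrative, not from the textbook):

```python
from math import log2

def info_bits(p: float) -> float:
    """Taub & Schilling eqn 13.2-1: I_k = log2(1/p_k), in bits."""
    return log2(1.0 / p)

# Four equiprobable messages: selecting any one conveys exactly 2 bits
print(info_bits(0.25))   # 2.0

# The log measure makes information add over independent selections:
# I(p*q) = I(p) + I(q), e.g. 3 bits = 1 bit + 2 bits
print(info_bits(0.5 * 0.25) == info_bits(0.5) + info_bits(0.25))   # True
```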
xxi: So, since 10^120 ~ 2^398, we may "boil down" the Dembski metric using some algebra -- i.e. substituting and simplifying the three terms in order -- as log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):
Chi = – log2(2^398 * D2 * p), in bits,  and where also D2 = ϕS(T)
Chi = Ip – (398 + K2), where now: log2 (D2) = K2
That is, chi is a metric of bits from a zone of interest, beyond a threshold of "sufficient complexity to not plausibly be the result of chance,"  (398 + K2).  So,
(a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)] and
(b) as we can define and introduce a dummy variable for specificity, S, where
(c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:
Chi =  Ip*S – 500, in bits beyond a "complex enough" threshold
  • NB: If S = 0, this locks us at Chi = - 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive. 
  • E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will -- unsurprisingly -- be positive. 
  • S goes to 1 when we have objective grounds -- to be explained case by case -- to assign that value. 
  • That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . . ; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list. 
  • A string at random is a list with one member, but if we pick it as a password, it is now a zone with one member. (Where also, a lottery is a sort of inverse password game in which we pay for the privilege; and where the complexity has to be carefully managed to make it winnable.)
  • An obvious example of such a zone T, is code symbol strings of a given length that work in a programme or communicate meaningful statements in a language based on its grammar, vocabulary etc. This paragraph is a case in point, which can be contrasted with typical random strings ( . . . 68gsdesnmyw . . . ) or repetitive ones ( . . . ftftftft . . . ); where we can also see by this case how such a case can enfold random and repetitive sub-strings. 
  • Arguably -- and of course this is hotly disputed -- DNA protein and regulatory codes are another. Design theorists argue that the only observed adequate cause for such is a process of intelligently directed configuration, i.e. of  design, so we are justified in taking such a case as a reliable sign of such a cause having been at work. (Thus, the sign then counts as evidence pointing to a perhaps otherwise unknown designer having been at work.)
  • So also, to overthrow the design inference, a valid counter example would be needed, a case where blind mechanical necessity and/or blind chance produces such functionally specific, complex information. (Points xiv - xvi above outline why that will be hard indeed to come up with. There are literally billions of cases where FSCI is observed to come from design.)
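As a minimal sketch, the simplified metric can be written out directly. Note that assigning the dummy variable S is exactly the case-by-case judgment call discussed in the bullets above, and is done by hand here; the coin and ASCII figures are the text's own examples:

```python
def chi_500(ip_bits: float, s: int) -> float:
    """Simplified metric from the text: Chi = Ip*S - 500, bits beyond threshold.

    ip_bits -- information measure Ip = -log2(p) for the observed config E
    s       -- dummy specificity variable: 1 if E sits in an independently
               describable zone of interest T, else 0 (assigned case by case)
    """
    return ip_bits * s - 500

# 501 coins tossed at random: 501 bits, but no independent specification
print(chi_500(501, 0))      # -500: complex but unspecified, no design inference

# ~72 ASCII characters (7 bits each) spelling an English message: S = 1
print(chi_500(72 * 7, 1))   # 4: just beyond the 500-bit threshold
```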

xxii: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was designed. The metric may be directly applied to biological cases:
Using Durston’s Fits values -- functionally specific bits -- from his Table 1 to quantify Ip, and accepting functionality on specific sequences as showing specificity (giving S = 1), we may apply the simplified Chi_500 metric of bits beyond the threshold:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
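Since S = 1 is granted (the contested step in the argument), the table above reduces to a subtraction against the 500-bit threshold. A sketch using the quoted Fits values:

```python
# Durston Fits values as quoted above; S = 1 is taken as given because
# function here is sequence-specific (the contested step in the argument)
THRESHOLD = 500   # bits, solar-system gamut

fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}

for name, f in fits.items():
    print("%s: %d bits beyond the threshold" % (name, f - THRESHOLD))
```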


xxiii: And, this raises the controversial possibility that biological examples such as DNA -- which in a living cell stores far more than 500 bits of information -- may be designed to carry out particular functions in the cell and the wider organism.

xxiv: We may then discuss:
Q1: How then do search algorithms -- such as genetic ones -- so often succeed?

A1: Generally, by intelligently directed injection of active information, i.e. information that enables searching guided by an understanding of the search space or of the general or specific location of a target. (Also, cf. here. A so-called fitness function that more or less smoothly and reliably points uphill to superior performance, mapped onto a configuration space, implies just such guiding information and allows warmer/colder signals to guide hill-climbing. This, or its equivalent, appears in many guises in the field of so-called evolutionary computing. As a rule of thumb, if you see a "blind" search that seemingly delivers an informational free lunch, look for an inadvertent or overlooked injection of active information. [[Cf. here, here & here.]) In a simple example, the children's party game "treasure hunt" would be next to impossible without guidance: warmer/colder . . . hot . . . red hot. (Something that gives some sort of warmer/colder message on receiving a query is an oracle.) The effect of such successive warmer/colder oracular messages or similar devices is to dramatically reduce the scope of search in a space of possibilities. Intelligently guided, constrained search, in short, can be quite effective; but this is designed, insight-guided search, not blind search. From such, we can actually quantify the amount of active information injected, by comparing the reduction in degree of difficulty relative to a truly blind random search as a yardstick. And, we will see the remaining importance of the universal or solar-system level probability or plausibility bound [[cf. Dembski and Abel, also discussion at ENV], which in this course will for practical purposes be 500 - 1,000 bits of information, as we saw above; i.e. these give us thresholds where the search is hard enough that design is a more reasonable approach or explanation. Of course, we need not do so explicitly; we may just look at the amount of active information involved.
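A toy illustration of the treasure-hunt point (a sketch in the spirit of the warmer/colder oracle, not the formal Dembski-Marks measure; the space size and the higher/lower oracle are illustrative assumptions): blind uniform sampling of a space of 2^20 cells needs on the order of 2^20 queries, while an oracle answering "at or below the midpoint?" permits bisection in 20 queries:

```python
import random
from math import log2

N = 2**20                      # size of the configuration space (assumed)
random.seed(1)
target = random.randrange(N)   # the hidden "treasure"

# Blind search (uniform sampling with replacement): expected ~N queries
blind_expected = N

# Guided search: a higher/lower oracle allows bisection
lo, hi, queries = 0, N - 1, 0
while lo < hi:
    mid = (lo + hi) // 2
    queries += 1
    if target <= mid:          # oracle: "at or below the midpoint"
        hi = mid
    else:
        lo = mid + 1

print(lo == target, queries)   # True 20
# Toy version of active information: log-reduction in search difficulty
print("active information ~ %.1f bits" % log2(blind_expected / queries))
```

The oracle is doing the real work here: every answer halves the remaining space, which is precisely an injection of information about where the target sits.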

Q2: But, once we have a fitness function, all that is needed is to start anywhere and then proceed up the slope of the hill to a peak, no need to consider all of those outlying possibilities all over the place. So, you are making a mountain out of a mole-hill: why all the fuss and feathers over "active information," "oracles" and "guided, constrained search"?

A2: Fitness functions, of course, are a means of guided search, by providing an oracle that points -- generally -- uphill. In addition, they are exactly an example of constrained search: there is function present everywhere in the zone of interest, and it follows a generally well-behaved, uphill-pointing pattern. In short, from the start you are constraining the search to an island of function, T, in which neighbouring or nearby locations -- Ei, Ej, Ek, etc., which can be chosen by tossing out a ring of "nearby" random tries -- are apt to go uphill, or get you to another local slope pointing uphill. Also, if you are on the shoreline of function, tosses that have no function will eliminate themselves by being obviously downhill, which means it is going to be hard to island-hop from one fairly isolated zone of function to the next. In short, a theory that may explain micro-evolutionary change within an island or cluster of nearby islands is not simply to be extrapolated to one that needs to account for major changes that have to bridge large differences in configuration and function. This is not going to be materially different if the islands of function and their slopes and peaks grow or shrink a bit across time, or even move bodily like glorified sand-pile barrier islands are wont to, so long as such island-of-function drifting is gradual. Catastrophic disappearance of such islands, of course, would reflect something like a mass extinction event due to an asteroid impact or the like. Mass extinctions simply do not create new functional body plans; they sweep the life forms exhibiting existing body plans away, wiping the table almost wholly clean, if we are to believe the reports. Where also, the observable islands-of-function effect starts at the level of the many isolated protein families, estimated to be as rare as 1 in 10^64 to 1 in 10^77 or so of the space of amino acid sequences.
As ID researcher Douglas Axe noted in a 2004 technical paper: "one in 10^64 signature-consistent sequences forms a working domain . . . the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences." So, what has to be reckoned with, is  that in general for a sufficiently complex situation to be relevant to FSCO/I [[500 - 1,000 or more structured yes/no questions, to specify configurations, En . . . ], the configuration space of possibilities, W, is as a rule dominated by seas of non-functional gibberish configurations, so that the envisioned easy climb up Mt Improbable is dominated by the prior problem of finding a shoreline of Island Improbable.
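The islands-of-function picture can be given a toy numeric form (the string length, island radius and starting point are illustrative assumptions, not biological data): the fraction of a 100-bit configuration space lying within Hamming distance 10 of a functional peak is minute, yet a greedy climb succeeds easily once a start on the island is granted:

```python
import random
from math import comb, log10

L, RADIUS = 100, 10   # bit-string length; "island" = within 10 flips of peak

# Fraction of the whole space occupied by the island around one peak
island = sum(comb(L, k) for k in range(RADIUS + 1))
fraction = island / 2**L
print("island fraction of the space ~ 10^%d" % round(log10(fraction)))  # 10^-17

random.seed(7)
peak = [random.randrange(2) for _ in range(L)]
s = peak[:]
for i in random.sample(range(L), 5):   # start ON the island, 5 flips from peak
    s[i] ^= 1

# Greedy hill climb: flip any bit that reduces distance to the peak
steps = 0
while s != peak:
    i = next(j for j in range(L) if s[j] != peak[j])
    s[i] ^= 1
    steps += 1
print("reached the peak in", steps, "uphill steps")   # 5
```

The climb is trivial; the hard part, on these assumptions, is that a blind draw lands inside the island only about one time in 10^17.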

Q3: Nonsense! The Tree of Life diagram we all saw in our Biology classes proves that there is a smooth path from the last universal common ancestor [LUCA] to the different body plans and forms, from microbes to Mozart. Where did you get such nonsense from?

A3: Indeed, the tree of life was the only diagram in Darwin's Origin of Species. However, it should be noted that it was a speculative diagram, not one based on a well-documented, observed pattern of gradual, incremental improvements. He hoped that in future decades, investigations of fossils over the world would flesh it out, and that is indeed the impression given in too many Biology textbooks and popular headlines about found "missing links." But, in fact, the typical tree of life imagery:
Fig. G.11c, anticipated: A typical, popular level tree of life model/illustration. (Source.)
 . . . is too often presented in a misleading way. First, notice the skipping over of the basic problem that without a root, neither trunks nor branches and twigs are possible. And, getting to a first, self-replicating unicellular life form -- the first universal common ancestor, FUCA -- that uses proteins, DNA, etc. through the undirected physics and chemistry of Darwin's warm little electrified pond full of a prebiotic soup or the like, continues to be a major and unsolved problem for evolutionary materialist theorising. Similarly, once we reckon with claims about "convergent evolution" of eyes, flight, whale/bat echolocation "sonar" systems, etc., we begin to see that "everything branches, save when it doesn't." Indeed, we have to reckon with a case where, on examining the genome of a kangaroo (the tammar wallaby), it was discovered that "In fact there are great chunks of the [[human] genome sitting right there in the kangaroo genome." The kangaroos are marsupials, not placental mammals, and the fork between the two is held to be 150 million years old. So, Carl Wieland of Creation Ministries International was fully within his rights to say: "unlike chimps, kangaroos are not supposed to be our ‘close relatives’ . . . . Evolutionists have long proclaimed that apes and people share a high percentage of DNA. Hence their surprise at these findings that ‘Skippy’ has a genetic makeup similar to ours." Next, as soon as one looks at molecular similarities -- technically, homologies (and yes, this is an argument from similarity, i.e. analogy in the end) -- instead of those of gross anatomy, we run into many, mutually conflicting "trees." Being allegedly 95 - 98+% chimp in genetics is one thing; being what, ~ 80% kangaroo or ~ 50% banana or the like, is quite another. That is, we need to look seriously at the obvious alternative from the world of software design: code reuse and adaptation from a software library for the genome.
Worse, the consistent record from the field (which is now "almost unmanageably rich," with over 250,000 fossil species, millions of specimens in museums and billions in the known fossil beds) is that we do NOT observe any dominant pattern of origin of body plans by smooth incremental variation across successive fossils. Instead, as Stephen Jay Gould famously observed, there are systematic gaps, right from the major categories on down. Indeed, if one looks carefully at the tree illustration above, one will see where the example life forms sit: on twigs at the ends of branches, not on the trunk or where the main branches start. No prizes for guessing why. That is why we should carefully note the following remark made in 2006 by W. Ford Doolittle and Eric Bapteste:
Darwin claimed that a unique inclusively hierarchical pattern of relationships between all organisms based on their similarities and differences [the Tree of Life (TOL)] was a fact of nature, for which evolution, and in particular a branching process of descent with modification, was the explanation. However, there is no independent evidence that the natural order is an inclusive hierarchy, and incorporation of prokaryotes into the TOL is especially problematic. The only data sets from which we might construct a universal hierarchy including prokaryotes, the sequences of genes, often disagree and can seldom be proven to agree. Hierarchical structure can always be imposed on or extracted from such data sets by algorithms designed to do so, but at its base the universal TOL rests on an unproven assumption about pattern that, given what we know about process, is unlikely to be broadly true. This is not to say that similarities and differences between organisms are not to be accounted for by evolutionary mechanisms, but descent with modification is only one of these mechanisms, and a single tree-like pattern is not the necessary (or expected) result of their collective operation . . . [[Abstract, "Pattern pluralism and the Tree of Life hypothesis," PNAS vol. 104 no. 7 2043-2049.]
Q4: But, the evidence shows that natural selection is a capable designer and can create specified complexity. Isn't that what Wicken said to begin with in 1979 when he said that "Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order’ . . ."?

A4: We need to be clear about what natural selection is and does. First, you need a reproducing population, which has inheritable chance variations [[ICV], and some sort of pressure on it from the environment, leading to gradual changes in the populations because of differences in reproductive success [[DRS] . . . i.e. natural selection [[NS] . . . among varieties; achieving descent with modification [[DWM]. Thus, different varieties will have different degrees of success in reproduction: ICV + DRS/NS --> DWM. However, there is a subtlety: while there is a tendency to summarise this process as "natural selection," this is not accurate. For the NS component does not actually ADD anything; it is a shorthand way of saying that less "favoured" varieties (Darwin spoke in terms of "races") die off, leaving no descendants. "Selection" is not the real candidate designer. What is being appealed to is that chance variations create new varieties; this is the actual supposed source of innovation -- the real candidate designer -- not the dying-off part. That puts us right back at the problem of finding the shoreline of Island Improbable, by crossing a "sea of non-functional configurations" in which, as there is no function, there is no basis to choose from: the evidence suggests a simple cell would have ~ 100,000 to 1 million bases in its genome, but new body plans -- dozens needed for the Cambrian life forms -- require more like ~ 10 - 100+ million. So, we cannot simply extrapolate a theory that may relate to incremental changes within an island of function to the wider situation of the origin of functions. Macroevolution is not simply accumulated microevolution, not in a world of complex, configuration-specific function. (NB: The suggested "edge" of evolution by such mechanisms is often held to be about the level of a taxonomic family, like the cats or the dogs and wolves.)

Q5: The notion of "islands of function" is Creationist nonsense, and so is that of "active information." Why are you trying to inject religion and "God of the gaps" into science?

A5: Unfortunately, this is not a caricature: there is an unfortunate tendency of Darwinist objectors to design theory to appeal to prejudice against theistic worldviews, and to suggest questionable motives, tactics that cloud issues and poison or polarise discussion. But, I am sure that if I were to point out that such Darwinists often have their own anti-theistic ideological agendas and have sought to question-beggingly redefine science as, in effect, applied atheism or the like, that would often be regarded as out of place. Let us instead stick to the actual merits. Such as, that since intelligent designers are an observed fact of life, to explain that design is a credible or best causal explanation in light of tested, reliable signs that are characteristic of design, such as FSCO/I, is not an appeal to gaps. Similarly, to point to ART-ificial causes that leave characteristic traces, by contrast with those of chance and/or mechanical necessity, is not to appeal to "the supernatural," but to the action of intelligence, on signs that are tested and found to reliably point to it. Nor is design theory to be equated to Creationism, which can be seen as an attempt to interpret origins evidence in light of what is viewed as an accurate record left by the Creator. The design inference works back from inductive study of signs of chance, necessity and art, to cases where we did not observe the deep past, but see traces closely similar to those for which the only adequate, observed cause is design. So also, once we see that complex function dependent on many parts that have to be properly arranged and coupled together sharply constrains the set of functional as opposed to non-functional configurations, the image of "islands of function" is not an unreasonable way to describe the challenge. Where also, we can summarise a specification as a structured list of YES/NO questions that give us a sufficient description of the working configuration.
Which in turn gives us a way to understand the Kolmogorov-Chaitin complexity or descriptive complexity of a bit-string x, in simple terms: "the length of the shortest program that computes x and halts." This can be turned into a description of zones of interest T that are specified in large spaces of possible configurations, W. If there is a "simple" and relatively short description, D, that allows us to specify T without in effect needing to list and state the configurations that are in T -- E1, E2, . . . En -- then T is specific. Where also, if T is such that D describes a configuration-dependent function, T is functionally specific; e.g. the strings of ASCII characters in this page form English sentences that address the theme of origins science in light of intelligent design issues. In the -- huge! -- space of possible ASCII strings of comparable length to this page (or even this paragraph), such clusters of sentences are a vanishingly minute fraction relative to the bulk that will be gibberish. So also, in a world where we often use maps or follow warmer/colder cues to find targets, and where, if we were to blindly select a search procedure and match it at random to a space of possibilities, we would be at least as likely to worsen as to improve the odds of success relative to a simple blind at-random search of the original space, active information that gives us an enhanced chance of success in getting to an island of function is in fact a viable concept.
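The descriptive-complexity idea can be crudely illustrated with a general-purpose compressor, where compressed length serves as an upper-bound proxy for the length of the shortest description (the sample strings below are illustrative): repetitive strings compress to almost nothing, English prose compresses partly, and random bytes hardly at all:

```python
import random
import zlib

def descr_len(data: bytes) -> int:
    """Crude upper-bound proxy for descriptive complexity: zlib'd length."""
    return len(zlib.compress(data, 9))

english = (b"an obvious example of such a zone is the set of code symbol "
           b"strings of a given length that work in a programme or that "
           b"communicate meaningful statements in a language based on its "
           b"grammar and vocabulary as this paragraph itself illustrates")
n = len(english)
repetitive = b"ft" * (n // 2)   # like the text's . . . ftftftft . . .
random.seed(42)
randomish = bytes(random.randrange(256) for _ in range(n))

for label, s in [("repetitive", repetitive), ("english", english),
                 ("random", randomish)]:
    print("%-10s raw %d -> compressed %d" % (label, len(s), descr_len(s)))
```

Compression only bounds descriptive complexity from above, but the ordering it exposes (repetitive < meaningful < random) matches the three cases contrasted in the paragraph above.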
Therefore, on balance we have at least one reasonably possible general empirical sign of intelligent design, namely: functionally specific, complex organisation and associated information [[FSCO/I]

Given such a possible sign, we can now understand the key error in methodological naturalism. For, the “natural causes only” rule has the effect of implicitly imposing a philosophical -- specifically, materialistic -- conclusion on origins science studies before potentially observable facts can speak scientifically.

So, the focus of the regrettably contentious debate now emerges:




C: The ideological debate emerges: a priori evolutionary materialism and the [[re-]definition of science
 
We can see that: science is widely respected precisely because it is seen as being based on freely -- and in significant part successfully -- seeking and discovering the truth about our world based on well-balanced logical (especially mathematical) analysis of observable facts. 
(NB: “Truth,” here, is being understood more or less as Aristotle described it 2,300 years ago in Metaphysics 1011b: “. . . to say of what is that it is, and of what is not that it is not, is true.”)
Now, too, if we consult reference-quality dictionaries from just a few decades ago, we will easily see that the traditional, historic view of science (as just summarised) was still the consensus:
science: a branch of knowledge conducted on objective principles [["objective: external to the mind; actually existing; real"] involving the systematized observation of and experiment with phenomena, esp. concerned with the material and functions of the physical universe. [[Concise Oxford Dictionary, (Oxford: Clarendon Press) 1990 -- and yes, they used the "z." (Emphasis and definition of objectivity from the same source added.)]

scientific method: principles and procedures for the systematic pursuit of knowledge [[= "the body of truth, information and principles acquired by mankind"] involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses. [[Webster's 7th Collegiate Dictionary, (Springfield, Mass: G & C Merriam), 1965. (Definition of "Knowledge" in the same dictionary inserted, and emphasis added.)]

So, the attempted imposition of the "rule" of methodological naturalism as an absolute criterion of science is in fact a very recent development, and is in part motivated by conflicts over origins science.  

Plainly, however, issues of truth-seeking about our world and degree of warrant for conclusions cannot be settled by stipulating such an artificial constraint. 

Moreover, a key danger of putting materialistic philosophical blinkers on science is that it can easily lead on to the practical establishment of materialistic ideology under false colours of “truth” or the closest practical approximation we can get to it. 

Where that happens, those who object may then easily find themselves tagged and dismissed as pseudo-scientific (or even fraudulent) opponents of progress, knowledge, right and truth; which can then lead on to very unfair or even unjust treatment at the hands of those who wield power. Therefore, if religious censorship of science (as in part happened to Galileo etc.) was dangerous and unacceptable, materialist censorship must be equally wrong.

Nor is this danger merely imaginary or a turn-about false accusation, as some would suggest.
 
For, we may read from Harvard Professor Richard Lewontin's 1997 New York Review of Books review of the late Cornell Professor Carl Sagan's The Demon-Haunted World, as follows:

. . . to put a correct view of the universe into people's heads we must first get an incorrect view out . . .   the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [[--> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . . To Sagan, as to all but a few other scientists, it is self-evident [[--> actually, science and its knowledge claims are plainly not immediately and necessarily true on pain of absurdity, to one who understands them; this is another logical error, begging the question, confused for real self-evidence; whereby a claim shows itself not just true but true on pain of patent absurdity if one tries to deny it . . . ] that the practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test [[--> i.e. an assertion that tellingly reveals a hostile mindset, not a warranted claim] . . . .  
 
It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. The eminent Kant scholar Lewis Beck used to say that anyone who could believe in God could believe in anything. To appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen. [[--> Perhaps the second saddest thing is that some actually believe that these last three sentences that express hostility to God and then back it up with a loaded strawman caricature of theism and theists JUSTIFY what has gone on before. As a first correction, accurate history -- as opposed to the commonly promoted rationalist myth of the longstanding war of religion against science -- documents (cf. here, here and here) that the Judaeo-Christian worldview nurtured and gave crucial impetus to the rise of modern science through its view that God as creator made and sustains an orderly world. Similarly, for miracles -- e.g. the resurrection of Jesus -- to stand out as signs pointing beyond the ordinary course of the world, there must first be such an ordinary course, one plainly amenable to scientific study. The saddest thing is that many are now so blinded and hostile that, having been corrected, they will STILL think that this justifies the above. But, nothing can excuse the imposition of a priori materialist censorship on science, which distorts its ability to seek the empirically warranted truth about our world.]

[[From: “Billions and Billions of Demons,” NYRB, January 9, 1997. Bold emphasis and notes added.]
___________

F/N: The key part of this quote comes after some fairly unfortunate remarks where Mr Lewontin gives the "typical" example -- yes, we can spot a subtext -- of an ill-informed woman who dismissed the Moon landings on the grounds that she could not pick up Dallas on her TV, much less the Moon. This is little more than a subtle appeal to the ill-tempered sneer at those who dissent from the evolutionary materialist "consensus": that they are ignorant, stupid, insane or wicked. As a telling counter-instance, Wernher von Braun, who led the development of the Saturn V rocket that took NASA astronauts to the Moon, was an evangelical Christian and a Creationist. [[Cf also here, here, here, here, here.] 
Similarly, when Lewontin cites "eminent Kant scholar Lewis Beck" as declaring that "anyone who could believe in God could believe in anything," drawing as bottom-line, the inference that "[[t]o appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen," this is a sadly sophomoric distortion. One that fails to understand that, on the Judaeo-Christian theistic view, for miracles to stand out as signs pointing beyond the ordinary, there must first be an ordinary consistently orderly world, one created by the God of order who "sustains all things by his powerful word." Also, for us to be morally accountable to God -- a major theme in theism, the consequences of our actions must be reasonably predictable, i.e. we must live in a consistent, predictably orderly cosmos, one that would be amenable to science.
And, historically, it was specifically that theistic confidence in an orderly cosmos governed by a wise and orderly Creator that gave modern science much of its starting impetus from about 1200 to 1700.
For instance that is why Newton (a biblical theist), in the General Scholium to his famous work Principia, confidently said: 
"[[t]his most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent and powerful Being . . . It is allowed by all that the Supreme God exists necessarily; and by the same necessity he exists always, and every where. [[--> i.e. he accepts the cosmological argument to God] . . . We know him only by his most wise and excellent contrivances of things, and final cause [[ --> i.e from his designs] . . . Blind metaphysical necessity, which is certainly the same always and every where, could produce no variety of things. [[--> i.e. necessity does not produce contingency].  All that diversity of natural things which we find suited to different times and places could arise from nothing but the ideas and will of a Being necessarily existing. [[--> That is, he implicitly rejects chance, Plato's third alternative and explicitly infers to the Designer of the Cosmos.]"    
In such a context of order stamped in at creation and sustained through God's power, for good reason, God may then act into the world in ways that go beyond the ordinary, i.e. miracles are possible but will inevitably be rare and in a context that points to such a higher purpose.
For instance, the chief miracle claim of Christian thought, the resurrection of Jesus with 500+ witnesses is presented in the NT as decisive evidence for the truth of the gospel and authentication of God's plan of redemption.
So, since these contextual remarks have been repeatedly cited by objectors as though they prove that the citation above is an out-of-context distortion that improperly makes Lewontin seem irrational in his claims, they have to be mentioned and addressed; for some seem to believe that such a disreputable "context" justifies the assertions and attitudes above!)]
Mr Lewontin and a great many other leading scientists and other influential people in our time clearly think that such evolutionary materialist scientism is the closest thing to the "obvious" truth about our world that we have or can get. This has now reached the point where some want to use adherence to this view as a criterion of being “scientific,” which to such minds is equivalent to “rational.” 

Mr Martin Mahner of the Center for Inquiry-Europe, says much the same in his recent Science and Education article, "The role of Metaphysical Naturalism in Science" [[2011]:
This paper defends the view that metaphysical naturalism is a constitutive ontological principle of science in that the general empirical methods of science, such as observation, measurement and experiment, and thus the very production of empirical evidence, presuppose a no-supernature principle . . . . Metaphysical or ontological naturalism (henceforth: ON) [["roughly" and "simply"] is the view that all that exists is our lawful spatiotemporal world. Its negation is of course supernaturalism: the view that our lawful spatiotemporal world is not all that exists because there is another non-spatiotemporal world transcending the natural one, whose inhabitants—usually considered to be intentional beings—are not subject to natural laws . . . . Both scientists and science educators keep being challenged by creationists of all shades, who try hard to reintroduce supernaturalist explanations into biology and into all the areas of science that concern the origin of the world in general and of human beings in particular. A major aspect of this debate is the role of ON in science . . . .

ON is not part of a deductive argument in the sense that if we collected all the statements or theories of science and used them as premises, then ON would logically follow. After all, scientific theories do not explicitly talk about anything metaphysical such as the presence or absence of supernatural entities: they simply refer to natural entities and processes only. Therefore, ON rather is a tacit metaphysical supposition of science, an ontological postulate. It is part of a metascientific framework or, if preferred, of the metaparadigm of science that guides the construction and evaluation of theories, and that helps to explain why science works and succeeds in studying and explaining the world. Now this can be interpreted in a weak and a strong sense. In the weak sense, ON is only part of the metaphysical background assumptions of contemporary science as a result of historical contingency; so much so that we could replace ON by its antithesis any time, and science would still work fine. This is the view of the creationists, and, curiously, even of some philosophers of science (e.g., Monton 2009). In the strong sense, ON is essential to science; that is, if it were removed from the metaphysics of science, what we would get would no longer be a science. Conversely, inasmuch as early science accepted supernatural entities as explainers, it was not proper science yet. It is of course this strong sense that I have in mind when I say that science presupposes ON.


The degree to which this article ties the course of reasoning to a strawman contrast, natural vs supernatural, is inadvertently revealing. 

By sharp contrast, ever since Plato's day, it has been quite clear that the natural and the artificial can leave reliable empirical traces that can be detected and distinguished using appropriate observational and logical methods of analysis. So, it is very important to notice how this false contrast plays so pivotal a rhetorical and ideological role in polarising origins science; lest that utterly unwelcome Divine Foot steps in the doorway.

Mr Jerry Coyne, a University of Chicago professor in its Department of Ecology and Evolution (and an anti-design advocate, former Christian believer and New Atheist), has pointedly summed up where this view of science points for our civilisation (not just the USA), in a recent, regrettably strident USA Today article:

Religion in America is on the defensive.

Atheist books such as The God Delusion and The End of Faith have, by exposing the dangers of faith and the lack of evidence for the God of Abraham, become best-sellers. Science nibbles at religion from the other end, relentlessly consuming divine explanations and replacing them with material ones. Evolution took a huge bite a while back, and recent work on the brain has shown no evidence for souls, spirits, or any part of our personality or behavior distinct from the lump of jelly in our head. We now know that the universe did not require a creator. Science is even studying the origin of morality. So religious claims retreat into the ever-shrinking gaps not yet filled by science . . . .

Science and faith are fundamentally incompatible, and for precisely the same reason that irrationality and rationality are incompatible. They are different forms of inquiry, with only one, science, equipped to find real truth. And while they may have a dialogue, it's not a constructive one. Science helps religion only by disproving its claims, while religion has nothing to add to science . . . . any progress — not just scientific progress — is easier when we're not yoked to religious dogma. Of course, using reason and evidence won't magically make us all agree, but how much clearer our spectacles would be without the fog of superstition!

[[Column: "Science and religion aren't friends," USA Today, updated October 11, 2010.
________
(NB: It may be useful to compare Casey Luskin's survey of the sides in the emerging controversy here, and to examine Vox Day's rebuttal to the wider New Atheist claims, in his The Irrational Atheist, here, and in print elsewhere. This work will, among other points, highlight that "progress" is not to be confused with improvement, with many ghosts from the last century as key witnesses. Similarly, it is helpful to look at Dan Peterson's remarks here on key contributions of the theistic, Judaeo-Christian worldview and civilisation to the foundation of Modern Science, and their significance for the design controversy. As for the deeper, "lack of evidence for God" theme so characteristic of New Atheists, one might find it helpful at introductory level to read here on the underlying selective hyper-skepticism, here on the philosophical debate on evidence and argument pointing to God, here on the problem of evil [[also raised by Mr Coyne in his article], and here on evidence on the specific evidence and warrant relating to Biblically anchored Christian faith.)]
These articles thus reveal a pattern of ideological prejudice and anti-theistic bias that in effect seeks to redefine science and censor its methods through what boils down to an imposed, censoring a priori: evolutionary materialist atheism.  

Then, such advocates -- having locked out evidence and otherwise reasonable possible explanations before the facts are allowed to speak -- now wish to turn around and showcase the confidently announced and headlined results of such ideological domination of scientific, science education and public outreach institutions as "proof" of the correctness of their ideology. 

On fair comment: that unfortunate circumstance is little more than a grand exercise in ideological imposition and question-begging by a new "magisterium" dressed up in lab coats.

This ideologising of origins science therefore lends telling force and context to the following declaration in the 2008 version of a well known, long-running US National Academy of Sciences (NAS) booklet:

In science, explanations must be based on naturally occurring phenomena. Natural causes are, in principle, reproducible and therefore can be checked independently by others. If explanations are based on purported forces that are outside of nature, scientists have no way of either confirming or disproving those explanations. Any scientific explanation has to be testable — there must be possible observational consequences that could support the idea but also ones that could refute it. Unless a proposed explanation is framed in a way that some observational evidence could potentially count against it, that explanation cannot be subjected to scientific testing. [[Science, Evolution and Creationism, 2008, p. 10 Emphases added.] 
 
The US National Science Teachers Association [[NSTA] as of July 2000, and over the signature of its Board of Directors, is even more explicit in making the same question-begging imposition of naturalism through its radical redefinition of the nature of science for educational purposes:

The principal product of science is knowledge in the form of naturalistic concepts and the laws and theories related to those concepts . . . .

[[S]cience, along with its methods, explanations and generalizations, must be the sole focus of instruction in science classes to the exclusion of all non-scientific or pseudoscientific methods, explanations, generalizations and products . . . .

Although no single universal step-by-step scientific method captures the complexity of doing science, a number of shared values and perspectives characterize a scientific approach to understanding nature. Among these are a demand for naturalistic explanations supported by empirical evidence that are, at least in principle, testable against the natural world. Other shared elements include observations, rational argument, inference, skepticism, peer review and replicability of work . . . .

Science, by definition, is limited to naturalistic methods and explanations and, as such, is precluded from using supernatural elements in the production of scientific knowledge. [[NSTA, Board of Directors, July 2000. Emphases added.]

By strongest contrast with such attempted dismissals by the NAS, NSTA etc., the design inference (a major focus of their ire) is actually a legitimate inductive argument based on well-supported empirical observation. For we routinely observe that intelligent agents act into our world, and when they do so they often leave characteristic signs of art-ificial -- or, intelligent -- action; such as functionally specific, complex information.
 
Thus: on empirical evidence and empirically reliable, well-tested signs, we may properly and reasonably contrast "natural" causes traceable to chance and/or mechanical necessity with "intelligent" or "artificial" -- as opposed to "supernatural" -- causes. 

So, on the strength of this very well-supported observation, and the common-sense principle that "like causes like," it is a well justified and properly scientific induction to infer from such observed signs to the action of such agents. This, regardless of possible onward worldview level implications and debates -- which it is no business of science to censor itself over. 

That is, the contrast Lewontin, Sagan and Coyne (among many others) have drawn between a materialistic world of scientifically warranted, "progressive" truth and an irrational clinging to superstitious belief in the “demonic” supernatural is not just an odd personal view. It is, instead, a fallacy that (regrettably) holds growing official support in leading scientific and educational institutions.

University of California law professor Philip Johnson's response to Lewontin in November 1997 is therefore quite relevant:

For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose."  

. . . .   The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

In short, the root problem is not the evidence as such, but the a priori imposition of ideological materialism on origins science. 

Worse, Lewontin and others apparently do not realise that the claim, assumption or inference that “science [[is] the only begetter of truth” is not a claim within science but instead a philosophical claim about how we get warranted, credibly true belief, i.e. knowledge. So, they have contradicted themselves: appealing to non-scientific knowledge claims to try to deny the possibility of knowledge beyond science! 
 
An obvious first corrective step, then, would be to critically survey origins science -- paying due attention to sound philosophical insights and techniques such as inference to best explanation -- thus allowing the evidence to speak for itself. That is therefore the purpose of this course, in both the general/high school level survey form (this unit) and the college/reference level form (following units).

D: The significance of complex, functionally specific information/ organisation

The observation-based principle that complex, functionally specific information/ organisation is arguably a reliable marker of intelligence, and the related point that we can therefore use this concept to scientifically study intelligent causes, will play a crucial role in that survey. For, routinely, we observe that such functionally specific complex information and related organisation come -- directly [[drawing a complex circuit diagram by hand] or indirectly [[a computer generated speech (or, perhaps: talking in one's sleep)] -- from intelligence. 

In a classic 1979 comment, well known origin of life theorist J S Wicken wrote:

Organized’ systems are to be carefully distinguished from ‘ordered’ systems.  Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

The idea-roots of the term "functionally specific complex information" [FSCI] are plain: "Organization, then, is functional[[ly specific] complexity and carries information."

Similarly, as early as 1973, Leslie Orgel, reflecting on Origin of Life, noted:
 . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . .
[HT, Mung, fr. p. 190 & 196:] These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [--> this is of course equivalent to the string of yes/no questions required to specify the relevant "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).]  One can see intuitively that many instructions are needed to specify a complex structure. [--> so if the q's to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions.  [--> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course, that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. Paley here), a notorious conundrum for advocates of evolutionary materialism; one, that has led to mutual ruin documented by Shapiro and Orgel between metabolism first and genes first schools of thought, cf here. Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes and Menuge et al would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 - 5 in the just linked. 
Finally, Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W -- for biological systems, islands of function. That puts up serious questions for the origin of dozens of body plans, which would reasonably require some 10 - 100+ million bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken's remarks of a few years later, already cited above, now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ]
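Orgel's "minimum number of instructions" intuition can be loosely illustrated with a few lines of Python. This is an informal sketch only -- compressed size is used as a crude stand-in for description length, not as the formal CSI metric:

```python
import random
import zlib

# Informal illustration of Orgel's point: a simple repeating "crystal"
# string needs very few instructions ("repeat AB 500 times"), while a
# random string must in effect be spelled out in full.
random.seed(0)

ordered = b"AB" * 500                                          # simple order
randomish = bytes(random.randrange(256) for _ in range(1000))  # random complexity

for label, s in [("ordered", ordered), ("random", randomish)]:
    print(label, len(s), "->", len(zlib.compress(s, 9)))
```

The ordered string compresses to a handful of bytes; the random one barely compresses at all. Functional text (e.g. English prose, or a genetic sequence) sits in between: it is far less compressible than simple order, yet, unlike random noise, it is specified by its function.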
Thus, the concept of complex specified information -- especially in the form of functionally specific complex organisation and associated information [FSCO/I] -- is NOT a creation of design thinkers like William Dembski. Instead, it comes from the natural progress and conceptual challenges faced by origin of life researchers by the end of the 1970's. 

 Indeed, by 1982, the famous Nobel-equivalent-prize-winning astrophysicist (and life-long agnostic) Sir Fred Hoyle went on quite plain public record in an Omni Lecture:
“Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true.” [[Evolution from Space (The Omni Lecture[ --> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]
So, we first see that by the turn of the 1980's, scientists concerned with origin of life and related cosmology recognised that the information-rich organisation of life forms was distinct from simple order and required accurate description and appropriate explanation. To meet those challenges, they identified something special about living forms, CSI and/or FSCO/I. As they did so, they noted that the associated "wiring diagram" based functionality is information-rich, and traces to what Hoyle already was willing to call "intelligent design," and Wicken termed "design or selection." By this last, of course, Wicken plainly hoped to include natural selection.

But the key challenge soon surfaces: what happens if the space to be searched and selected from is so large that islands of functional organisation are hopelessly isolated relative to blind search resources?

For, under such "infinite monkey" circumstances, searches based on random walks from arbitrary initial configurations will be maximally unlikely to find such isolated islands of function. As the crowd-sourced Wikipedia summarises (in testimony against its ideological interest, compelled by the known facts):


The text of Hamlet contains approximately 130,000 letters. Thus there is a probability of one in 3.4 × 10^183,946 to get the text right at the first trial. The average number of letters that needs to be typed until the text appears is also 3.4 × 10^183,946, or including punctuation, 4.4 × 10^360,783.

Even if the observable universe were filled with monkeys typing from now until the heat death of the universe, their total probability to produce a single instance of Hamlet would still be less than one in 10^183,800. As Kittel and Kroemer put it, “The probability of Hamlet is therefore zero in any operational sense of an event…”, and the statement that the monkeys must eventually succeed “gives a misleading conclusion about very, very large numbers.” This is from their textbook on thermodynamics, the field whose statistical foundations motivated the first known expositions of typing monkeys.[3]
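As a quick check on the quoted figures, the first number follows from straightforward logarithm arithmetic (letters only, ignoring case, spacing and punctuation, as in Wikipedia's simplification):

```python
import math

# Checking the cited "infinite monkey" arithmetic for Hamlet:
# 130,000 letters, each drawn from a 26-letter alphabet.
N = 130_000
log10_ways = N * math.log10(26)            # log10 of 26^130000
exponent = math.floor(log10_ways)
mantissa = 10 ** (log10_ways - exponent)
print(f"26^{N} ~ {mantissa:.1f} x 10^{exponent}")   # ~ 3.4 x 10^183946
```

Working with log10 rather than computing 26^130000 directly keeps the arithmetic trivial, and reproduces the cited one-in-3.4 × 10^183,946 figure.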


So, once we are dealing with something that is functionally specific and sufficiently complex, trial-and-error blind selection on a random walk is increasingly implausible as an explanation, compared to the routinely observed source of such complex, functional organisation: design. Indeed, beyond a certain point, the odds of trial and error on a random walk succeeding fall to a "practical" zero.

The key to the functional complexity and specificity issue -- and to identifying that "certain point" -- lies in Wicken's comparison to wiring diagrams for electrical circuits. For instance, we may compare (a) an outline flow map of a petroleum refinery with (b) the living cell's network of metabolic reaction pathways:



(a) Flow diagram for an Oil Refinery, showing the nodes, interfaces and arcs of the “wiring diagram” architecture (Source (full size): Wikipedia, GNU)
(b) Outline of the metabolic reactions network for the living cell (Source (full size): Raymond Cheong, fair use. More detailed chart here. Online Biochem text here. Cf. also quite similar node, interface and arc patterns in gene regulatory networks for a sea urchin embryo here. (Explanation, here.) )
Fig. I.2: A comparison of the “wiring diagrams” for (a) a petroleum refinery and (b) the – outline level -- chemical reaction pathways in a typical living cell.

As the double figure above shows, both the refinery's functionally specific, organised flow net of reactions -- which here implies a fairly narrow selection from a much wider range of physically possible configurations of parts (thus: a "hot" or "target" zone) -- and the cell's network of metabolic reactions form a pattern in which particular nodes are joined by a network of connecting arcs. 

That means that once we specify (i) the nodes, (ii) their interfaces to the arcs, and (iii) the set of arcs so joined to each node, we have specified the “wiring diagram” for the network that forms the basis for its function; especially where such function is not a matter of simple physical resemblance, but results from interactions and cause-effect chains among the nodes.

(This is a familiar exercise for anyone who has had to design, lay out, etch and then populate and solder together an electronics circuit board. [[A computer motherboard is a familiar example of such a circuit board.])

This “wiring diagram” approach is actually quite general purpose and powerful:

i --> The specific sequence of letters and numerical characters etc. in a word or sentence is such a network, usually called a string because the arcs and nodes form a linear sequence, like beads on a string: L-e-t-t-e-r-s.

ii --> In the refinery and metabolic networks just above, up to several arcs join to each node, and the whole forms an integrated network, perhaps something like a network of veins on a leaf or the mesh of lines and knots in a fishing net.


iii --> Perhaps surprisingly, we can model complex three-dimensional objects like the Old Man of the Mountain and the Mt Rushmore statues (or “exploded”- view diagrams of machines, etc.) as networks of connected points, sometimes called wire-frame models. Surface facets or skins and textures can be digitally draped over these “wire- frames.” The realism and specificity to a recognisable individual depend on how tightly spaced the network of fixed points is:




(a) A generic dolphin- like shape can be specified with relatively few nodes and arcs. (Source: Wiki, GNU)
(b) Marquadt Golden Ratio- based beauty mask mesh superposed on Queen Nefertiti's face (Source: MBA, fair use. [[Cf. also here, here.])
Fig. I.3: Wireframe networks for images. 
 
iv --> In each case, we can identify a chain of basic yes/no decisions or selections that specify the nodes [[which can be complex 3-D objects!], the connecting arcs and the ways the two interact. Then, we apply a rule of thumb: if the number of such basic yes/no decisions to build a sufficiently specific and functional network exceeds 500 - 1,000, we pass a reasonable threshold for “complexity.”

REASON: The number of possible configurations specified by 1,000 yes/no decisions, or 1,000 bits, is ~ 1.07 * 10^301; i.e. “roughly” 1 followed by 301 zeros. While, the ~ 10^80 atoms of the observed universe, changing state as fast as is reasonable [[the Planck time, i.e. every 5.39 *10^-44 s], for its estimated lifespan -- about fifty million times as long as the 13.7 billion years that are said to have elapsed since the big bang -- would only come up to about 10^150 states. Since 10^301 is ten times the square of this number, if the whole universe were to be viewed as a search engine, working for its entire lifetime, it could not scan through as much as 1 in 10^150 of the possible configurations for just 1,000 bits. That is, astonishingly, our “search” rounds down very nicely to zero: effectively no “search.” [[NB: 1,000 bits is routinely exceeded by the functionally specific information in relevant objects or features, but even so low a threshold is beyond the credible random search capacity of our cosmos, if it is not intelligently directed or constrained. That is, the pivotal issue is not incremental hill-climbing to optimal performance by natural selection among competing populations with already functional body forms. Such already begs the question of the need to first get to the shorelines of an island of specific function in the midst of an astronomically large sea of non-functional configurations; on forces of random chance plus blind mechanical necessity only. Cf. Abel on the Universal Plausibility Bound, here.] 
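The figures in this REASON note are easy to verify with a back-of-envelope sketch; the atom count and the fifty-million-lifetimes span are the order-of-magnitude estimates quoted above:

```python
import math

# 1,000 yes/no decisions define 2^1000 configurations.
bits = 1000
log10_configs = bits * math.log10(2)
print(f"2^1000 ~ {10 ** (log10_configs % 1):.2f} x 10^{int(log10_configs)}")
# ~ 1.07 x 10^301

# Upper bound on states the observed cosmos could "visit":
atoms = 1e80                       # atoms in the observed universe (order of magnitude)
planck = 5.39e-44                  # Planck time, in seconds
lifespan_years = 50e6 * 13.7e9     # fifty million times 13.7 billion years
seconds = lifespan_years * 365.25 * 24 * 3600
states = atoms * (seconds / planck)
print(f"max states ~ 10^{math.log10(states):.0f}")   # ~ 10^149, i.e. about 10^150
```

So even this generous bound on the universe's state-scanning capacity falls short of the 10^301 configurations for 1,000 bits by over 150 orders of magnitude, which is the "rounds down to zero" point made above.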
 
 To get a sculptural face that looks closely like that of George Washington or Nefertiti [[i.e. we have defined a specific function], a dense network of quite precisely located points has to be set up; so that a smooth, accurate portrait can be made. [[by contrast, Old Man of the Mountain or anything reasonably close would be recognisable as somewhat face-like, and would be “acceptable”; so it is not anywhere nearly so tightly specified. That's why with a spot of imagination, one can easily see face-like figures in wood paneling, clouds in the sky, and in brown marks on toast.] 

By contrast (and as has been raised as an objection), when we look at other cases like say a fossil cast mould, the close resemblance to the presumed "natural" organism is readily accounted for on non-sculptural observed forces and factors acting in the environment: rapid burial by water or volcanic ash, cementitious rock matrix, solidification of a reasonably accurate mould before decay can act, etc.  

(NB: We must observe carefully. While the preserved shape may indeed be functionally specific and complex, fossilisation does not account for the actual origin of the shape of the life-form discovered in the fossil bed.)
So, we note that possible signs of design should always be assessed in the context of a methodical explanatory analysis that first addresses the action of known and reasonably possible forces of chance and mechanical necessity. That is, we must always use an "explanatory filter" that first eliminates chance and mechanical necessity as plausible causes of the feature of interest, before turning to the possibility of design.  
(NB: This is why a recent attempted objection -- that the complex pattern of iron filings on a sheet of paper with a magnet under it is complex and specified but not designed -- fails: the observed effect is directly explained by chance and mechanical necessity, so the question of design does not arise: intelligently directed contingency is not an issue where blind forces of chance and necessity -- on our observation -- are adequate. As well, the part that is specific is simple, and the part that is complex is random. The same is true for the shape of a snowflake: what is complex -- the elaborate details -- is due to random local atmospheric conditions, and what is simply specified -- the six-sided shape -- is due to forces of necessity based on the structure of the water molecule. The same holds if we were to, say, drop a double-handful of dice onto a table: the simple law of necessity that drives the falling is distinct from the random complexity of the values the dice would take, at random. But, if we were to come back an hour later and see the same set of dice now all set to read 1 or some other recognisable pattern, we would instantly believe that they had been set to read like that by intelligent, deliberate action.)
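The dice reasoning in this note follows the step-by-step "explanatory filter" logic described above. A toy sketch only -- the three boolean inputs stand in for judgments that in practice require observation and analysis, not simple flags:

```python
def explanatory_filter(high_contingency: bool,
                       complex_enough: bool,
                       specified: bool) -> str:
    """Toy sketch of the per-aspect explanatory filter described above.

    The inputs are placeholder judgments (e.g. "complex_enough" would in
    practice mean passing a 500 - 1,000 bit threshold), not measurements.
    """
    if not high_contingency:
        return "necessity"          # lawlike regularity, e.g. the dice falling
    if not (complex_enough and specified):
        return "chance"             # e.g. the tumbled values the dice show
    return "design"                 # e.g. the dice later found all reading 1

# The dropped-dice example from the note above, aspect by aspect:
print(explanatory_filter(False, False, True))   # the falling: necessity
print(explanatory_filter(True, True, False))    # the tumbled values: chance
print(explanatory_filter(True, True, True))     # all set to read 1: design
```

Note that the filter is applied per aspect of one object: the same handful of dice exhibits necessity (the falling), chance (the tumbled values) and, in the reset case, design.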
In the case of the fossil mould or the like, we need to again underscore the key subtlety: by the very nature of such a mould, the mould is not the true source of the shape information it captures and transmits (with some loss of detail). 

This points straight back to the question of where the complex, functional biological information that shaped the body plan of the fossilised creature originally came from. In turn, this leads to the question of the root source of the genetic and regulatory information and organisation in the underlying self-replicating living cells in the organism that gave its shape to the fossil mould.  However, since that is in the remote, unobserved past, we need to first ground our thinking by identifying empirically reliable causal patterns that we may directly observe in the present. Once that is done, we may then reasonably argue that similar phenomena in the past are best explained on the reliable pattern.

There is a second subtlety, too (as Newton emphasised in his 1704 work on experimental scientific methods, Opticks, Query 31): as is true of all significant findings of science, the conclusions we may reach are provisional, i.e. subject to clarification and correction in light of further evidence and analysis. 
 
Now, in the case of an integrated process such as the refinery, once we see that the materials of the process units, how they are put together, the valves, the pipelines, the properties of the chemical components of the crude oil put in, and the control system all have to be specified, the specification easily exceeds the 1,000-bit threshold. So does the metabolic pathway network, once we see how it works in the context of a living cell:



Fig. I.4: A neuron, an example of a living cell. (Source: Wikimedia commons.) 
 
Video tour:



As Denton describes such a cell:

To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter [[so each atom in it would be “the size of a tennis ball”] and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell. 

We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines . . . . We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of components, error fail-safe and proof-reading devices used for quality control, assembly processes involving the principle of prefabrication and modular construction . . . . However, it would be a factory which would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours . . . . 

Unlike our own pseudo-automated assembly plants, where external controls are being continually applied, the cell's manufacturing capability is entirely self-regulated . . . .

[[Denton, Michael,
Evolution: A Theory in Crisis, Adler, 1986, pp. 327 – 331. This work is a classic that is still well worth reading. Emphases added. (NB: The 2009 work by Stephen Meyer of Discovery Institute, Signature in the Cell, brings this classic argument up to date. The main thesis of the book is that: "The universe is comprised of matter, energy, and the information that gives order [[better: functional organisation]  to matter and energy, thereby bringing life into being. In the cell, information is carried by DNA, which functions like a software program. The signature in the cell is that of the master programmer of life." Given the sharp response that has provoked, the onward e-book responses to attempted rebuttals, Signature of Controversy, would also be excellent, but sobering and sometimes saddening, reading.) ]

A simpler approach is to address digital symbol strings that are meaningful as language or as algorithmic instructions or data, as the coded information is inherent to the function. In this case, we have an Internet full of cases to show that once we directly know the source, such digitally coded functionally specific complex information [[dFSCI for short] is a reliable sign that strongly points to its routinely observed cause, intelligence. For instance, did this paragraph (a fairly long functionally specific string of ASCII characters) most reasonably come from (a) a burst of “lucky noise” hitting a hard drive, (b) the undirected forces and laws of nature, or (c) an intelligent author?
 
The answer is obvious:

(EXPLANATION: Relatively few of the 1.32 * 10^1,555 possible arrangements of 738 128-state ASCII characters – as are used in the above paragraph beginning “A simpler approach . . .” – will make sense as English text and fit into the context of this discussion. The number of such functional arrangements might be large; but it is also indisputably an indescribably tiny fraction of the number of possible configurations of 738 ASCII characters. So, the whole observed cosmos, acting as a search machine without intelligent direction, would not reasonably be expected to come up with the above paragraph or something near enough to it to be functional. And yet, intelligent agents such as ourselves routinely carry out such feats. Digitally coded functionally specific complex information [[dFSCI for short] is a routinely observed characteristic of intelligent causes.)
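The arithmetic in the explanation above can be checked directly. The sketch below reproduces the configuration-space figure and compares it with the rough upper bound on the number of states the observed cosmos could sample (~10^80 atoms, each changing state once per Planck time, ~10^45 ticks per second, for ~10^25 seconds -- the usual estimate behind a "universal plausibility bound" of ~10^150; these are order-of-magnitude estimates from the design literature, not measured quantities):

```python
# Verify the figure quoted above: arrangements of 738 characters drawn
# from a 128-symbol (7-bit ASCII) alphabet.
import math

configs = 128 ** 738                     # = 2^(7 * 738) = 2^5166
exponent = math.log10(configs)           # ~1555.12
print(f"~{10 ** (exponent % 1):.2f} * 10^{int(exponent)}")  # ~1.32 * 10^1555

# Rough bound on states the observed cosmos could sample (estimates):
cosmic_states = 10**80 * 10**45 * 10**25  # ~10^150

# Shortfall, in orders of magnitude, of a blind cosmic-scale search:
print(exponent - math.log10(cosmic_states))  # ~1405
```

Even granting the whole cosmos as a "search machine," the space of 738-character strings exceeds the number of samplable states by some 1,400 orders of magnitude.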

But also, dFSCI and many similar apparent signs of intelligence appear in origins science related cases such as: (i) the origin of cell-based life that is based on information-rich macromolecules, (ii) the origin of body-plan level biodiversity, or (iii) the origin of a cosmos that seems to be fine-tuned in many ways to accommodate such life. However, we simply were not there to observe what actually happened:




So (as the above video points out), we have to look at the evidence we have today, and then seek to explain the body of evidence based on what we know about (1) the forces and laws of nature, (2) chance circumstances, and (3) how intelligent designers act.


(Yes, how intelligent designers are observed to act. We are undeniably intelligent and designing, on pain of absurdity. Nor can we explain away the capacity to design as a mere consequence of our being embodied and/or having sufficiently complex brains. Just consider: the computer you may be viewing this on was the product of many teams of highly intelligent and knowledgeable designers of its hardware and software, and before them, generations of researchers in many related fields of science, engineering and mathematics.)

In the case of the living cell, the action of mRNA and tRNA in the Ribosome during protein synthesis is especially striking:






Fig. I.5 (a): Protein translation in the cell, showing mRNA acting as a discrete-state -- i.e. digitally -- coded tape that allows successive tRNA molecules carrying attached amino acids to chain a protein, step by step. (Source: Wiki, public domain.)




Fig. I.5(b): Vuk Nikolic's fairly detailed video animation of the protein translation process (Source: VN, Vimeo)
 
An inference to intelligent causes thus seems reasonable, at least as a possibility.
But, the methodological naturalism school of thought in effect now tells us: “you can't appeal to observed causal factor no 3 on matters of origins. You may only do so at times and places where we can put forward a chain of theories or models on how life has already spontaneously originated by factors (1) and (2) only, and has then evolved into intelligent forms.” 
(Why? Not because we independently know that through direct observation, but because appealing to (3) on origins might “allow a Divine Foot in the door.” Or, more subtly: “the supernatural is not subject to repeatable observation and empirical testing, nor is it predictable – persons make different decisions and act differently in similar situations; so we ‘can't’ have the supernatural intruding into our scientific understanding of our origins.” Or, even: we pay scientists to look for the natural causes of natural events [["with great success" and "for centuries"] and pay teachers to teach science, not "another subject."  However, to try to define what words or phrases like “natural,” “supernatural” and “natural causes” mean on the assumption that methodological naturalism defines science, without improperly assuming or asserting philosophical materialism -- as Lewontin advocated and as Johnson rightly rebuked -- is quite a challenge. (Cf. remarks and exchanges here.))
In short, the imposition of methodological naturalism is plainly question-begging, and it is thus an error of method. Another way to see this was recently pointed out by Uncommon Descent blogger Barry Arrington:
Today, for the sake of argument only, let us make two assumptions:
1.  First, let us assume that the design hypothesis is correct, i.e., that living things appear to be designed for a purpose because they were in fact designed for a purpose.

2.  Second, let us assume [[presumably, by the "rule" of methodological naturalism] that the design hypothesis is not a scientific hypothesis, which means that ID proponents are not engaged in a scientific endeavor, or, as our opponents so often say, “ID is not science.”
From these assumptions, the following conclusion follows:  If the design hypothesis is correct and at the same time the design hypothesis may not be advanced as a valid scientific hypothesis, then the structure of science prohibits it from discovering the truth about the origin of living things . . . .
No one can know with absolute certainty that the design hypothesis is false.  It follows from the absence of absolute knowledge, that each person should be willing to accept at least the possibility that the design hypothesis is correct, however remote that possibility might seem to him.  Once a person makes that concession, as every honest person must, the game is up.  The question is no longer whether ID is science or non-science.  The question is whether the search for the truth of the matter about the natural world should be structurally biased against a possibly true hypothesis. [["What if it's true?" Uncommon Descent, Aug. 6, 2010. (Emphasis added.)]
For, we did not -- and cannot -- directly observe the remote past, so origins science theories are in the end attempted “historical” reconstructions of what we think the past may have been like. Such reconstructions are based on investigating which of the possible explanations seems "best" to us on balance in light of the evidence. However, to censor out a class of possible explanations ahead of time by imposing materialism plainly undermines the integrity of this abductive method.
 
Thirdly, causal factor (3) is about intelligence and its observable traces, not whether or not “supernatural” beings or immaterial minds actually exist and/or may act into our world. That is, the natural vs. supernatural contrast made by the NAS and Lewontin etc., is also a distractive strawman argument. Instead, a more appropriate contrast for scientific investigation on origins is: natural vs. ART-ificial (i.e. intelligent). Also, since we ourselves are intelligent, purposeful and inventive creatures – but cannot properly assume that we (or similar beings) are the only possible intelligences – we may easily see that intelligence is identifiable, recognisable and even definable as: 
 
“. . . capacities to reason, to plan [[which entails (i) to purpose, (ii) to conceive or imagine a path to achieve it and (iii) to set out steps to fulfill the path], to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.” [[Wikipedia: article, “Intelligence.”]

So, once we can observe behaviours, events and objects that show characteristic signs of intelligent activity, we can then investigate those signs scientifically. (In short, the scientific validity of signs of intelligence is independent of the philosophical debates that may be triggered by resulting findings and related conclusions. Similarly, since the 1920's, scientists have come to accept that the observed expansion of the universe points to a beginning, even though that suggests it might have had a Begin-ner. Logically, investigating whether there are reliable signs of intelligence, what they are, and what they signify should come before debates on “whodunit.”)
 
Indeed, in his recent announcement of a vision/ purpose statement for the Evolutionary Informatics Lab, William Dembski, a leading Design theorist, has observed:

Intelligent design is the study of patterns in nature best explained as the product of intelligence . . . Archeology, forensics, and the search for extraterrestrial intelligence (SETI) all fall under this definition. In each of these cases, however, the intelligences in question could be the result of an evolutionary process. But what if patterns best explained as the product of intelligence exist in biological systems? . . . By looking to information theory, a well-established branch of the engineering and mathematical sciences, evolutionary informatics shows that patterns we ordinarily ascribe to intelligence, when arising from an evolutionary process, must be referred to sources of information external to that process [[nb: as it is not seriously credible that complex algorithmic or linguistic, specifically functional information comes about by in effect “lucky noise”]. Such sources of information may then themselves be the result of other, deeper evolutionary processes. But what enables these evolutionary processes in turn to produce such sources of information? Evolutionary informatics demonstrates a regress of information sources. At no place along the way need there be a violation of ordinary physical causality. And yet, the regress implies a fundamental incompleteness in physical causality’s ability to produce the required information. Evolutionary informatics . . . thus points to the need for an ultimate information source qua intelligent designer. [[Emphases added.]

So, whether or not one is inclined to accept the above Dembski formulation, one should be at least aware of this perspective and its rationale if one is to be properly educated on origins science. (NB: similarly, one should know how informed Creationists have worked out their various Young Earth or Old Earth Creation views and how they respond to their critics, e.g. here, here, here, here and here.) 

 E: Scientific Methods and Explanatory Filters

A reasonable way to begin is with an expanded view of the generic scientific method of investigation as we look at different aspects of observable objects or events:
  

Fig. I.6: An expanded, causal factors- and empirical signs- based view of the generic scientific method. (IOSE) [[NB: It may be a helpful thought-provoker to compare this critical discussion by Peirce on the commonly held doctrine of necessity in nature.]
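The per-aspect causal-factor logic implicit in Fig. I.6 can be sketched in a few lines of code. This is a minimal illustration only, not an implementation from the course: the function and predicate names are hypothetical stand-ins for empirical judgements an investigator would make about one aspect of an object or event, and the complexity test refers to the 1,000-bit threshold discussed earlier:

```python
# Minimal sketch of the per-aspect explanatory-filter logic implicit in
# Fig. I.6. All names here are hypothetical illustrations.

def best_explanation(low_contingency, complex_beyond_threshold, specified):
    """Classify one aspect of an object/event by its empirical signs.

    low_contingency:          regular outcome under similar initial conditions
    complex_beyond_threshold: config space beyond e.g. the 1,000-bit bound
    specified:                fits an independently describable pattern
    """
    if low_contingency:
        return "mechanical necessity (natural law)"
    if complex_beyond_threshold and specified:
        return "design (intelligently directed contingency)"
    return "chance (undirected contingency)"

# The dropped-dice example from the earlier discussion:
print(best_explanation(True, False, False))   # the falling: necessity
print(best_explanation(False, True, False))   # the values thrown: chance
print(best_explanation(False, True, True))    # all reading "1": design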

Moreover, one cannot effectively assess origins science issues in isolation, but must examine the overall picture in light of a cluster of relevant disciplines. For instance, in a recent comment, Dr Cornelius Hunter has remarked on the discussion of Fig. 17.3 in the 5th Edn of the Johnson and Losos text, The Living World (McGraw Hill, 2008), where the authors claim:
It is important not to miss the key point of the result you see illustrated in figure 17.3: evolution is an observation, not a conclusion. Because the dating of the samples is independent of what the samples are like, successive change through time is a data statement. While the statement that evolution is the result of natural selection is a theory advanced by Darwin, the statement that macroevolution has occurred is a factual observation.
Hunter aptly rebuts:
A sequence of fossils is an observation of macroevolution? It would be difficult to imagine a more misleading statement than this. And it is not as though this was an unintended mistake that just happened to elude the 100+ reviewers. Johnson and Losos went out of their way to make and elaborate this message, and the army of evolutionist reviewers all nodded their heads. [[Emphasis added.]
For, first, events claimed to have happened 50 - 35 million years in the deep past are simply not open to direct observation; we were not there to see for ourselves, nor do we have a generally acceptable and credible record of the true facts from those who were.

Notwithstanding, the claim is being made that the dating is "independent" of the reconstructions and artistic photo-paintings made based on fossils recovered in certain layers of rocks. [[This is not quite true, as Part II below (on cosmology and timelines) will show, and as this notorious case demonstrates; i.e. to a sometimes worrying extent, dates of rocks and fossils are mutually dependent and premised on the generally accepted picture of evolution across time, even when radioactivity methods are used.]

Then, on the strength of this "independence," the young student looking for instruction in Biology is told by professors, publishers and reviewers that macroevolution -- presumably meant to imply the grand microbes-to-man evolutionary claim (not just the illustrated claim of a sequence of fossil species) -- is an observed "fact." So, a tower of assumption-laden inferences is projected onto the past, from present observations of fossilised animal remains in layers of rock and radioactive dating estimates on associated volcanic rocks. Finally, it is presented to us as if it were a direct factual observation of the deep past; and the subtext is plain:
only fools dispute facts.

Outrageous!


For, the key danger of such tactics is that they can actually close our minds to the true facts when we meet them.

This is because, if we confuse a speculative and possibly error-prone tower of inferences for a "fact," we will then be inclined to think the real facts that contradict such errors are false, simply because they cut across what we have been led to perceive as "fact." This danger becomes even more evident when we meet the sort of attitude expressed in a Wikipedia discussion on the claimed "fact" of evolution:
. . . When scientists say "evolution is a fact" they are using one of two meanings of the word "fact". One meaning is empirical, and when this is what scientists mean, then "evolution" is used to mean observed changes in allele frequencies or traits of a population over successive generations.

Another way "fact" is used is to refer to a certain kind of theory, one that has been so powerful and productive for such a long time that it is universally accepted by scientists.
When scientists say evolution is a fact in this sense, they mean it is a fact that all living organisms have descended from a common ancestor (or ancestral gene pool) [8] even though this cannot be directly observed. [["Evolution as theory and fact," coloured emphasis added. Acc: Aug. 7, 2010.]

The first use simply describes observed minor variations in life forms that are sometimes called micro-evolution, which is indeed a non-controversial fact. The second usage, however, is plainly loaded with precisely the sort of  confusion of towers of inference for actual observed objective facts that can potentially close our minds to the truth. In addition, it appeals to the naked authority of the particular school of thought and philosophy that dominates a given day. Indeed, by improperly using the term "universally," it actually tries to disenfranchise qualified but dissenting scientists.

Therefore, we must always be very careful indeed to distinguish actual credible facts of direct observation from inferences built on them, and we must always be open to the possibility that what we think are facts -- especially on matters that we cannot directly observe -- may just possibly embed an error or two.  That immediately means that a critical survey like this can play an important role in building a balanced understanding of current thought on origins.


In addition, the use of such tactics in the name of education means that in this critical survey course we need to address not only issues in Biology, but also in linked Geology, Geophysics and Geochemistry, as well as related Astrophysics and Cosmology. For, we need to develop sufficient familiarity with the scientific, knowledge- and worldview-level issues associated with such claims to critically assess them for ourselves and draw our own conclusions as informed citizens, educators and students.


F: Focus of the IOSE course

Therefore, this independent origins science course -- bearing in mind various perspectives and controversies -- will seek to clarify and discuss the decisive scientific facts, theories, ideas, issues and alternatives on origins, regarding:
 
1] Overview and significance of origins Science ideas and issues – Origins Science as the scientifically informed study and reconstruction of our “roots,” from hydrogen to humans. Thus, its inextricable connections to worldview level issues and scientific methods issues. Can we scientifically reconstruct a deep past that we did not observe, based on traces in our present? If so, with what degree of credibility or certainty? What may we then conclude from, for instance, the apparent fine-tuning of the observed cosmos for cell-based life? Or, from the role that complex, code-based, functionally specific information plays in such life?

2] Cosmological origins -- setting the stage for cell-based life, through reconstructed origins of the apparently fine-tuned cosmos, galaxies and stars, and solar systems; associated modelling of the past of the cosmos and questions on dating and timelines. What is the key empirical evidence, and how reliable are such reconstructions and timelines?
 
3] The origin of life -- classical and current views on Origin of Life, in light of the origination of complex functionally specific information required to create self-replicating, cell based life. The key challenge: where did functionally specific, code-based complex biological information – e.g. the genetic code, the amino acid sequences of proteins -- come from?

4] The origin of biodiversity -- Major historic and current views on, theories of, and models for the origin of body plan level biodiversity; issues over chance, necessity and design, and alternatives. Are “icons” of evolution and the action of chance variation and natural selection able to answer the question of where complex body-plan originating biological information comes from?

5] Origin of mind -- the nature and roots of mind, with associated issues on the significance of morality. Does mind reduce to neurological matter in electrochemical “motion” under forces tracing to chance and necessity? If so, what are the grounds of knowledge, reasoning and moral principles? If not, what would that imply about the nature of mind, morality and humanity?

6] Origins Science in Society -- Implications of alternative views on origins of the cosmos, of life and of man for morality, policy, law and society. Since “ideas have consequences,” what are the consequences of the rise and dominance of evolutionary materialism in our civilisation? In light of the balance of the science, the opinion-shaping stories and the power-plays, what should we then do? How, why?

 G: A step-by-step overview of the IOSE course

Such questions need solid, fair answers of responsible length. However, a brief initial “bullet point,” step-by-step summary is also helpful:
a --> While we may and do use scientific techniques to reconstruct the remote past (using the logic of inference to best current explanation in light of observed evidence), we cannot properly hold that such scientifically informed models of what the far past may have been like (on scientific principles and evidence) are certain beyond reasonable dispute or correction.
b --> Given the surprisingly important role of worldview ideas in many such reconstructions [[e.g. as Lewontin and the US National Academy of Science illustrate], we need to be willing to compare various possible explanations in light of evidence rather than a priori exclude alternative views.
c --> For instance, the only known, observed source of complex, functionally specific organisation and related information is intelligence, and the combination of complexity and specificity makes happenstance and mechanical forces a maximally implausible explanation -- provided we do not impose ahead of time the criterion that explanations uncomfortable for materialists are to be ruled out without consideration.
d --> Current thinking on cosmology provides the backdrop for origins science studies. On the main school of thought, the observed cosmos [[which may be only a part of the universe as a whole] began in a big bang some 10 – 20 billion years ago, and has been expanding ever since. Within that context, our barred-spiral galaxy -- the Milky Way -- is about 100,000 light years across, and is often held to be about 12 BY old, with some 200 billion stars (and perhaps 10^65 atoms) in it.
e --> Within the Milky Way, our solar system is thought to have formed about half-way towards the rim, on a spiral arm, some 4.5 – 5 BY ago; as a second generation, heavy element-enriched star system that orbits the centre once every 225 – 250 million years.
f --> Earth is held to be about 4.6 BY old, based on radiodating of meteorite fragments. The first traces of life on earth have been dated at 3.5 – 3.8 (or possibly up to 4.2) BY. (Dating techniques, however, face the same basic challenge of reconstructing a remote, unobserved past as does origins science generally; so, such dates must be taken with a due measure of caution.)
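The radiodating mentioned in point f rests on a simple exponential decay law. The sketch below shows the basic arithmetic only, using the well-established half-life of uranium-238; the daughter-to-parent ratio is a purely hypothetical value chosen to illustrate how a ~4.6 BY date arises. (Real dating uses isochron and concordia methods, and must assume a closed system -- one reason for the caution urged in the parenthesis above.)

```python
import math

# Basic radiometric age equation: t = ln(1 + D/P) / lambda, where D/P is
# the measured daughter-to-parent isotope ratio and lambda = ln(2) / t_half.
# The U-238 half-life (~4.47e9 years) is a known value; the D/P ratio
# below is a hypothetical illustration, not a real measurement.

HALF_LIFE_U238 = 4.47e9                      # years
DECAY_CONST = math.log(2) / HALF_LIFE_U238   # per year

def radiometric_age(daughter_over_parent):
    """Age in years implied by a D/P ratio, assuming a closed system."""
    return math.log(1 + daughter_over_parent) / DECAY_CONST

print(radiometric_age(1.0) / 1e9)    # D/P = 1 means exactly one half-life
print(radiometric_age(1.04) / 1e9)   # a ratio of ~1.04 implies ~4.6 BY
```

Note that the inferred age is only as good as the closed-system assumption and the initial-composition estimate, which is the methodological point point f makes.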
g --> Carbon-chemistry, cell-based life depends on a large variety of tightly coordinated, information-based macromolecules, most notably DNA, RNA, proteins and enzymes. In turn, on the big bang model, a significant number of cosmological parameters seem to require quite precise tuning to set up a context in which such life is possible.


h -->  These issues have led to debates on the possible role of intelligence in the origin of both life and the fine-tuned cosmos that accommodates it. For instance, it seems credible to a significant number of prominent thinkers on cosmology  that many parameters and features of the observed universe have to be jointly fine-tuned to provide a cosmos that is inhabitable by Carbon-chemistry, intelligent, cell-based life:




 Fig. I.7 (a): Is the observed Cosmos fine-tuned for life? (Source: Adapted from 2001principle.net, with corrections.) 







Fig. I.7(b): Dr Michael Strauss discussing the significance of cosmological fine tuning. (NB: Cf. a more detailed discussion of the fine  tuning and possible worldview level implications here.)



Fig. I.7(c): A critical discussion of Hawking-style (and broadly similar) multiverse speculations, by Spitzer and Gordon.





Fig. I.7(d): An excerpt on the privileged planet coincidences [[Privileged Planet at Amazon]
 i --> Indeed, so striking is the evident fine-tuning that famous astronomer Sir Fred Hoyle (winner of a Nobel-equivalent prize for Astronomy, and a life-long agnostic, who originated the term "big bang," which he meant to deride in favour of his preferred Steady State Universe hypothesis) was moved to comment on the crucial carbon atom:
Would you not say to yourself, `Some super-calculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule?' Of course you would.  . . .  A common sense interpretation of the facts suggests that a superintellect has monkeyed with the physics, as well as chemistry and biology, and that there are no blind forces worth speaking about in nature. I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [["The Universe: Past and Present Reflections." Engineering and Science, November, 1981. pp. 8–12]
j --> Now, following the work of Darwin and Wallace from the 1850's on, it has been widely accepted that the diversity of life we see on earth traces to unlimited chance variations culled and selected for different ecological niches through being more or less “fit” across time [[i.e. “natural selection”].
k --> As Darwin wrote in his conclusion to Origin of Species:
It is interesting to contemplate a tangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent upon each other in so complex a manner, have all been produced by laws acting around us. These laws, taken in the largest sense, being Growth with Reproduction; Inheritance which is almost implied by reproduction; Variability from the indirect and direct action of the conditions of life and from use and disuse: a Ratio of Increase so high as to lead to a Struggle for Life, and as a consequence to Natural Selection, entailing Divergence of Character and the Extinction of less-improved forms. Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved. [[Origin, Ch 15. Emphasis added.]

l --> Over the years, starting with the tree of life diagram (the only diagram in Origin), and the archaeopteryx, neandertal and horse fossils, many iconic cases have featured in museums and textbooks, but in fact such serve more to illustrate than to demonstrate the claimed gradualistic, blind chance- and necessity-controlled origin of body-plan level biodiversity. Also, in too many cases, the illustrations -- as the just-linked discussion notes -- are questionable. For instance: (i) we have no good grounds for imagining the neandertal to be any less a rational animal than we ourselves are, (ii) the horse fossil sequence accounts for only a modest shift in size, in dentition and in number of toes, two of which are at least occasionally found in horses today, and (iii) back and forth colour changes of moths or finch beak size changes have little to do with spontaneous origin of novel body plans and associated huge quantities of bio-functional, coded information. The iconic case of the origin of the bacterial flagellum is an excellent case study:




m --> On fair comment, then, we may note that at no point has it been observationally shown that functionally specific complex organisation and its associated information are produced by undirected forces of chance and necessity acting on materials under initial conditions that “just happened to be that way, and could easily – and without material difference – have been otherwise.” Consider this video exchange:




n --> Now, routinely, functionally specific complex organisation/ information [[FSCI] is observed being produced by intelligent designers; e.g. this set of notes. Thus, we have a strong induction that FSCI is a reliable sign of intelligence, and so also the right to infer from the sign to its known best explanation: design.
o --> Further to this, cell-based life is observed to use DNA, starting from about 100,000 – 500,000 four-state base pairs: A/G/C/T (bacteria), and typically ranging up to 3 – 4 billion bases (humans, apes). The digitally coded information stored in DNA is transcribed to RNA and is processed by associated molecular nano-machines, which are functionally and specifically organised in very complex ways. This coded information, in total, is greatly in excess of the 1,000 bit FSCI threshold. As Yockey recently showed, it is also functionally integrated into a recognisable communication network in the cell:
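The storage-capacity arithmetic behind the comparison to the 1,000-bit threshold can be sketched in a few lines of Python. This is an illustrative sketch only: the genome sizes are the round figures quoted above, and `capacity_bits` is a hypothetical helper name for this note, not part of any published FSCI metric.

```python
# Illustrative sketch: each four-state DNA base (A/G/C/T) can store
# log2(4) = 2 bits, so a sequence's raw storage capacity in bits is
# 2 * (number of base pairs). Compare against the 1,000-bit threshold
# cited in the text. Genome sizes are the round figures quoted above.
from math import log2

BITS_PER_BASE = log2(4)          # 4-state alphabet -> 2.0 bits per base
FSCI_THRESHOLD_BITS = 1_000      # threshold cited in the text

def capacity_bits(base_pairs: int) -> float:
    """Raw storage capacity of a DNA sequence of the given length, in bits."""
    return BITS_PER_BASE * base_pairs

for label, bp in [("small bacterium", 500_000), ("human", 3_000_000_000)]:
    bits = capacity_bits(bp)
    print(f"{label}: {bp:,} bp -> {bits:,.0f} bits "
          f"(exceeds {FSCI_THRESHOLD_BITS}-bit threshold: {bits > FSCI_THRESHOLD_BITS})")
```

Note that this is raw capacity, an upper bound; the functional information actually carried by a given sequence is a separate question.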




 Fig.  I.8: Yockey's representation of the DNA processing system as a Shannon-style information communication system that transfers genetic messages from DNA to proteins, with possibility of noise.  [[NB: This shows how DNA is not just "analogous" to a code but is a part of a digitally coded information system; complete with possibilities for noise.]  (Source:  Yockey, Hubert, Information Theory, Evolution, and the Origin of Life, Cambridge University Press, 2005. Fair use. )
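To make the Shannon-style layout of Fig. I.8 concrete, here is a deliberately toy Python sketch of a coded message passing through a noisy channel and then a decoder. The three-entry codon table, the uniform base-flip noise model, and all function names are hypothetical simplifications for illustration, not Yockey's actual formalism.

```python
# Toy sketch of a Shannon-style communication system: a source message
# (a DNA string) passes through a noisy channel and is then decoded
# (translated) at the receiver. Purely illustrative; the codon table is
# a tiny subset and the noise model is a simple uniform base flip.
import random

CODON_TABLE = {"ATG": "M", "TGG": "W", "TTT": "F"}  # toy subset of the code

def channel(dna: str, noise_rate: float, rng: random.Random) -> str:
    """Transmit a DNA string; each base is replaced by a random base with
    probability noise_rate (the 'noise' arrow in the figure)."""
    bases = "ACGT"
    return "".join(rng.choice(bases) if rng.random() < noise_rate else b
                   for b in dna)

def decode(dna: str) -> str:
    """Translate successive codons to amino acids; '?' marks codons not
    in the toy table (i.e. messages corrupted beyond recognition)."""
    return "".join(CODON_TABLE.get(dna[i:i + 3], "?")
                   for i in range(0, len(dna) - 2, 3))

rng = random.Random(0)
message = "ATGTGGTTT"                      # decodes to "MWF" in the toy table
received = channel(message, noise_rate=0.1, rng=rng)
print("sent:", decode(message), "| received:", decode(received))
```

With `noise_rate=0.0` the message arrives intact; raising it degrades the decoded output, which is the point of modelling the system as a channel with noise.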

p --> For further instance, NewScientist was compelled to observe in a recent article on Mycoplasma pneumoniae, one of the simplest known bacteria, that “[[t]he inner workings of a supposedly simple bacterial cell have turned out to be much more sophisticated than expected.”
 
Fig. I.9: Mycoplasma pneumoniae, one of the smallest bacteria-like organisms capable of independent self-replication, with 689 genes.  (Source: New Scientist, under fair use.)
[[NB: New Scientist comments: “This "CT scan" shows the "bodily" positions of molecular motors vital to a Mycoplasma pneumoniae cell [[thus, arguably, illustrating a case of irreducible complexity]. They include ribosomes (yellow) for making proteins, chaperonins for folding them (red), RNA polymerases for "reading" genes and pyruvate dehydrogenase enzymes for producing energy.” Luis Serrano of the European Molecular Biology Laboratory (EMBL) observed that "At all three levels [[as investigated: (i) “the RNA molecules, or transcripts, produced from its DNA,” (ii) “the metabolic reactions that occurred in it,” and (iii) “every multi-protein complex the bacterium produced”], we found M. pneumoniae was more complex than we expected."]
q --> At the next level, it is credible that to implement the cell types, tissues, organs and overall organisation of a new body plan, tens to hundreds of millions of functional base pairs of additional DNA are required, again greatly in excess of the 1,000 bit FSCI threshold.
r --> The origin of man, of the rational mind, and of the moralising conscience are particularly important cases. [[Cf. a typical summary of the evolutionary materialist view here; one critique is here.] For instance, we may consider the challenge to account for the difference in DNA between ourselves and anatomically fairly similar animals such as chimps: 2 – 5% or so of 3 – 4 billion base pairs (as is often claimed) is tens of millions of bases, greatly in excess of the 1,000 bit threshold.
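The “tens of millions of bases” figure in the paragraph above follows from simple arithmetic, sketched here at the low end of the quoted ranges (2% of 3 billion base pairs; the variable names are illustrative only):

```python
# Illustrative arithmetic for the quoted DNA-difference figure, taken at
# the low end of the ranges in the text: 2% of a 3 billion base pair genome.
genome_bp = 3_000_000_000        # low end of the quoted 3 - 4 billion bp
diff_percent = 2                 # low end of the quoted 2 - 5% difference
fsci_threshold_bits = 1_000      # threshold cited in the text

diff_bases = genome_bp * diff_percent // 100   # differing bases
diff_bits = 2 * diff_bases                     # 2 bits per 4-state base

print(f"{diff_bases:,} differing bases ~ {diff_bits:,} bits "
      f"(threshold: {fsci_threshold_bits:,} bits)")
```

Even at this low end the figure is 60 million bases, which is why the text treats the threshold comparison as unproblematic.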

s --> But more than that, if all phenomena in the cosmos are shaped and controlled in the end by blind chance and necessity, however mediated, then the credibility of reasoning irretrievably breaks down.
t --> For, if non-rational chains of cause and effect dominate over logical inference and moral principle, then our behaviour is explained and controlled by forces irrelevant to logic, principle or truth. So even scientific and materialist thoughts have no rational grounds; i.e. we are at a reduction to absurdity. This is plainly evident in, for example, Sir Francis Crick's remarks in his The Astonishing Hypothesis, 1994:
. . . that "You", your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll's Alice might have phrased it: "You're nothing but a pack of neurons." This hypothesis is so alien to the ideas of most people today that it can truly be called astonishing. [[Cf. dramatisation of unintended potential consequences, here.]

u --> But plainly, if this holds for Sir Francis himself, it dramatically undermines his own thought. This is why ID thinker Phillip Johnson responded that Dr Crick should therefore be willing to preface his books: “I, Francis Crick, my opinions and my science, and even the thoughts expressed in this book, consist of nothing more than the behavior of a vast assembly of nerve cells and their associated molecules.” (In short, as Prof Johnson then went on to say: “[[t]he plausibility of materialistic determinism requires that an implicit exception be made for the theorist.” [[In Reason in the Balance, 1995.])

v --> From the days of Plato, it was understood that such materialism also has serious implications for society, as it is inherently amoral: if thoughts are determined by chance and necessity, and the issue is the dominance of the “fittest,” then that all too easily becomes the premise for the ruthless and powerful to try to dominate all others. As Plato's Athenian Stranger observed in The Laws, Book X:

Ath. . . . [[The avant garde philosophers and poets, c. 360 BC] say that fire and water, and earth and air [[i.e. the classical "material" elements of the cosmos], all exist by nature and chance, and none of them by art, and that as to the bodies which come next in order-earth, and sun, and moon, and stars-they have been created by means of these absolutely inanimate existences. The elements are severally moved by chance and some inherent force according to certain affinities among them-of hot with cold, or of dry with moist, or of soft with hard, and according to all the other accidental admixtures of opposites which have been formed by necessity. After this fashion and in this manner the whole heaven has been created, and all that is in the heaven, as well as animals and all plants, and all the seasons come from these elements, not by the action of mind, as they say, or of any God, or from art, but as I was saying, by nature and chance only. [[In short, evolutionary materialism premised on chance plus necessity acting without intelligent guidance on primordial matter is hardly a new or a primarily "scientific" view! Notice also, the trichotomy of causal factors: (a) chance/accident, (b) mechanical necessity of nature, (c) art or intelligent design and direction.] . . . .

[[Thus, they hold that t]he Gods exist not by nature, but by art, and by the laws of states, which are different in different places, according to the agreement of those who make them; and that the honourable is one thing by nature and another thing by law, and that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [[Relativism, too, is not new; complete with its radical amorality rooted in a worldview that has no foundational IS that can ground OUGHT. (Cf. here for Locke's views and sources on a very different base for grounding liberty as opposed to license and resulting anarchistic "every man does what is right in his own eyes" chaos leading to tyranny. )] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. They are told by them that the highest right is might [[ Evolutionary materialism leads to the promotion of amorality], and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions [[Evolutionary materialism-motivated amorality "naturally" leads to continual contentions and power struggles; cf. dramatisation here],  these philosophers inviting them to lead a true life according to nature, that is, to live in real dominion over others [[such amoral factions, if they gain power, "naturally" tend towards ruthless tyranny], and not in legal subjection to them. 
 
w --> This ancient analysis has been echoed tellingly in our day by historian of science (with a special focus on evolutionary biology and population genetics) Prof. William Provine of Cornell University, in his well-known 1998 Darwin Day keynote address at the University of Tennessee, as he then went on to try to make the best case he could for an ethics of naturalism:

Naturalistic evolution has clear consequences that Charles Darwin understood perfectly. 1) No gods worth having exist; 2) no life after death exists; 3) no ultimate foundation for ethics exists; 4) no ultimate meaning in life exists; and 5) human free will is nonexistent . . . . 
 
The first 4 implications are so obvious to modern naturalistic evolutionists that I will spend little time defending them. Human free will, however, is another matter. Even evolutionists have trouble swallowing that implication. I will argue that humans are locally determined systems that make choices. They have, however, no free will . . . . Without free will, justification for revenge disappears and rehabilitation is the main job of judicial systems and prisons. [[NB: As C. S. Lewis warned, in the end, this means: reprogramming through new conditioning determined by the power groups controlling the society and its prisons.] We will all live in a better society when the myth of free will is dispelled . . . .
 
How can we have meaning in life? When we die we are really dead; nothing of us survives. 
 
Natural selection is a process leading every species almost certainly to extinction . . . Nothing could be more uncaring than the entire process of organic evolution. Life has been on earth for about 3.6 billion years. In less than one billion more years our sun will turn into a red giant. All life on earth will be burnt to a crisp. Other cosmic processes absolutely guarantee the extinction of all life anywhere in the universe. When all life is extinguished, no memory whatsoever will be left that life ever existed.

Yet our lives are filled with meaning. Proximate meaning is more important than ultimate. Even if we die, we can have deeply [[subjectively and culturally] meaningful lives . . . .

[[Evolution: Free Will and Punishment and Meaning in Life, Second Annual Darwin Day Celebration Keynote Address, University of Tennessee, Knoxville, February 12, 1998 (abstract).] 
 
. . . These remarks find a striking parallel in Dawkins' words in a 1995 Scientific American article:
Nature is not cruel, only pitilessly indifferent. This lesson is one of the hardest for humans to learn. We cannot accept that things might be neither good nor evil, neither cruel nor kind, but simply callous: indifferent to all suffering, lacking all purpose.
We humans have purpose on the brain. We find it difficult to look at anything without wondering what it is “for,” what the motive for it or the purpose behind it might be. The desire to see purpose everywhere is natural in an animal that lives surrounded by machines, works of art, tools and other designed artifacts – an animal whose waking thoughts are dominated by its own goals and aims . . . .
Somewhere between windscreen wipers and tin openers on the one hand, and rocks and the universe on the other, lie living creatures. Living bodies and their organs are objects that, unlike rocks, seem to have purpose written all over them . . . . The true process that has endowed wings, eyes, beaks, nesting instincts and everything else about life with the strong illusion of purposeful design is now well understood.
It is Darwinian natural selection . . . . The true utility function of life, that which is being maximized in the natural world, is DNA survival. But DNA is not floating free; it is locked up in living bodies, and it has to make the most of the levers of power at its disposal. Genetic sequences that find themselves in cheetah bodies maximize their survival by causing those bodies to kill gazelles. Sequences that find themselves in gazelle bodies increase their chance of survival by promoting opposite ends. But the same utility function-the survival of DNA-explains the “purpose” of both the cheetah [--> i.e. predator]  and the gazelle [--> i.e. prey] . . . .
The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are being slowly devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst and disease. It must be so. If there is ever a time of plenty, this very fact will automatically lead to an increase in population until the natural state of starvation and misery is restored.
In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won’t find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil and no good, nothing but pitiless indifference . . . . DNA neither cares nor knows. DNA just is. And we dance to its music. [[ “God’s Utility Function,” Sci. Am. Aug 1995, pp. 80 - 85.]
[[NB: This article raises the issue of the problem of evil, here emphasising the problem of natural evil; probably the strongest argument in the atheists' arsenal, but one that only works by implicitly assuming that good and evil, and thus moral obligation, are real, while ducking the implication that the only valid worldview in a world in which OUGHT is real is one that has a foundational IS that adequately grounds ought. And materialism – scientific atheism today – has no such IS. So, the objecting atheist actually has no grounds to stand on to make the argument; his argument, in the end, is self-defeating, and so the proper response is to insist that such an atheist face that issue before proceeding further. (Cf. here for a preliminary discussion of the problem of evil from a Christian perspective.)]
x --> In critiquing Provine's remarks from a Judaeo-Christian perspective, Kyle Butt brings out a significant implication:

Provine’s . . . [[address] centered on his fifth statement regarding human free will. Prior to delving into the “meat” of his message, however, he noted: “The first 4 implications are so obvious to modern naturalistic evolutionists that I will spend little time defending them” (Provine, 1998).

It is clear then, from Provine’s comments, that he believes naturalistic evolution has no way to produce an “ultimate foundation for ethics.” And it is equally as clear that this sentiment was so apparent to “modern naturalistic evolutionists” that Mr. Provine did not feel it even needed to be defended . . . . [[However, i]f it is true that naturalistic evolution cannot provide an ultimate foundation for determining the difference between actions that are right and ones that are wrong, then the door is wide open for subjective speculation about all human behavior. [[Rape and Evolution, Apologetics Press, 2005.]

y --> Thus, whether or not we are inclined to accept either Evolutionary Materialism or Judaeo- Christian theism or another worldview, issues closely tied to origins science matter, truly matter, and lie at the core of many of the complex issues we face in our civilisation today, in our communities, institutions, families and lives. 
 
z --> They are so important that if we are to think and live soundly, we have to think seriously about them and about their consequences. So it is appropriate, indeed necessary, to take time to look at the issues in greater detail – the purpose of this course.
__________________________

REMARKS: Above, we have seen how important origins science is, and why all educated citizens need to know about it, its findings, its focal issues, its methods and their limitations. We have further identified the challenge of evolutionary materialist scientism, and how in recent decades it has unfortunately distorted the fabric of science, hindering the true goal of science as an unfettered (but ethically and intellectually responsible) pursuit of the truth about our world based on empirical evidence, reasoned argument and discussion among the informed. This makes it vitally important that we now undertake a critical review course. The Independent Origins Science Education course that is further developed below is a first attempt at such a review. We trust that you will find it helpful.
___________________________
FOR DISCUSSION:
1] Critically assess Prof. Lewontin's view on the nature of science, its link to materialism, its contrast with supernaturalism, and his inference that science is “the only begetter of truth.” Compare the thoughts of the US NAS, Johnson, Plato, Provine and Butt.
2] Re-examine the contrasting facial profiles in Fig. I.1. Compare the remarks by the US National Academy of Science on how science can only effectively explain by “natural causes.” Is the only properly scientific contrast to “natural” the “supernatural,” or would “artificial” be a scientifically relevant contrast?
3] Is intelligence an empirical fact of our world, and does it tend to leave behind it empirical signs that may be examined objectively and scientifically? Why, or why not?
4] Is professor Johnson correct to assert:
. . . if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose."

5] In light of the above, assess how worldview starting points can affect scientific conclusions on origins; given that -- since we were not there -- we do not have direct observational evidence. How, then, should we go about studying origins in a balanced, scientifically informed way?