Wednesday, June 9, 2010

Significance of Origins Science ideas and issues

FOCUS: Before dealing with specific origins science topics, we need to ask and answer questions on what the main possible options are, and on how we can evaluate claims to reliable "scientific" knowledge of the deep past of origins. The link between the explanatory filter and the generic scientific method also needs to be examined.
___________________

TOPICS:

--> The origins science challenge

(a) Root issues and options

(b) Challenges and pivotal issues

--> The explanatory filter

--> The von Neumann self-replicator
 
(c) Complexity, specific function, irreducibility and Islands of Function


(d) The Dawkins Mt Improbable & infinite regress objections, corrected 


--> Averick's reply to the infinite regress of designers

--> The significance of coded, symbolic information

(e) Towards a way forward

NEXT:
Cosmology and timelines of the world

Origins Science is the scientifically informed study of our “roots,” from hydrogen to humans. Thus, as many scholars suggest, it helps shape our view of our world and of ourselves in our world, which makes it very important.
Currently, it faces several key issues, such as: (a) the challenge to preserve science as an objective search for the truth about our world in light of evidence; (b) debates over religion, science, creationism, policy and "theocracy" vs. radical secularisation; (c) questions over the scientific status of intelligence as a causal factor; (d) whether there are reliable signs of intelligence that allow us to credibly and scientifically infer an artificial or intelligent cause.

A good place to begin, then, is by clarifying what scientists generally try to do, and how they go about it as they investigate the present world and its roots in the past:

Science: The investigation of natural phenomena through observation, theoretical explanation, and experimentation, or the knowledge produced by such investigation. [[American Heritage Science Dictionary, 2005]

Scientific methods are the principles and processes of discovery and testing scientists use, “generally involving the observation of phenomena, the formulation of a hypothesis concerning the phenomena, experimentation to demonstrate the truth or falseness of the hypothesis, and a conclusion that validates or modifies the hypothesis.” [[American Heritage Dictionary.] 
 
Operational sciences such as physics or chemistry study the current workings and patterns of the world as a going concern, in light of repeatable observations and measurements.  [[IOSE]
 
Origins Sciences seek to reconstruct the origins of our world  in light of currently observed evidence, known patterns and forces of nature (i.e. laws of nature) and scientific principles of investigation.  [[IOSE]
 
Key limitation: As Feyerabend (cf. excerpts here) and other radical philosophers of science point out, there is no set of specific methods and tools that is used across the generally accepted list of sciences, and/or that is so unique to scientific investigation that we may use it to mark a sharp defining line between science and non-science. Also, while observations can sometimes overthrow a theory or what was formerly believed to be a fact, scientific methods cannot guarantee that we will discover and demonstrate the truth about the world at any given time. (That is why science is open-ended and progressive.)
It is thus no surprise to see that at any given time, especially where we study a remote and unobserved past, there will be significant debates. Such debates have deep historical roots:
(a) Root issues and options
People have always wondered about our origins and roots, and have always taken various alternative world-views. For instance, we can see an early form of the creation viewpoint in Job 38:

1 . . . the LORD answered Job out of the storm. He said:
 2 "Who is this that darkens my counsel
       with words without knowledge?
  3 Brace yourself like a man;
       I will question you,
       and you shall answer me.
  4 "Where were you when I laid the earth's foundation?
       Tell me, if you understand
.
 5 Who marked off its dimensions? Surely you know!
       Who stretched a measuring line across it?
  6 On what were its footings set,
       or who laid its cornerstone-
  7 while the morning stars sang together
       and all the angels shouted for joy? . . .


By contrast, Lucretius' poem On The Nature of Things, Book I, Chapter 4 and Bk I, Ch 5 more than anticipates today's common, reductive, materialistic naturalism:

[[Ch 4:]  . . . All nature, then, as self-sustained, consists
Of twain of things: of bodies and of void

In which they’re set, and where they’re moved around.


For common instinct of our race declares
That body of itself exists: unless
This primal faith, deep-founded, fail us not,

 Naught will there be whereunto to appeal
On things occult when seeking aught to prove
By reasonings of mind . . . .


Again,
whate’er exists, as of itself,
Must either act or suffer action on it,
Or else be that wherein things move and be:
 
Naught, saving body, acts, is acted on; 
Naught but the inane [[i.e. void] can furnish room.
 


And thus, Beside the inane and bodies, is no third
Nature amid the number of all things . . . 

 
[[Ch 5:] Bodies, again,
Are partly primal germs of things, and partly
Unions deriving from the primal germs.

And those which are the primal germs of things
No power can quench; for in the end they conquer
By their own solidness
. . . .

The third commonplace view, eternally cyclical pantheism, has historically been championed by some Hindu thinkers. R. K. Lahiri, giving a modern summary, first cites:

That is Whole, This is Whole,
From the Whole the Whole becomes manifest.
From the Whole, when the Whole is negated,
What remains is again the Whole
–  Isa Upanishad 

 
He then explains how such an all-is-one pantheistic world could give rise to the diverse cosmos we experience:
Rg Veda at some places describes Akshar Purush and Hiranyagarbha (golden Embryo) as the cause of universe. It speaks of the indestructible Purush as vibrating energy from which all life is generated. In 10-12-6-, it speaks of Brahman or Cosmic Egg from which all creation came first. It also describes Viswakarma as creator, the first to come from Brahma. He is the architect of gods.
People generally are confused to understand the different appearing views on cosmos as stated in the scriptures and are led to believe that the views are contrary, whereas they are not at all contradictory, rather they really harmonize. Though there may be some points of logical disagreements, such systems ultimately lead us to the same end although their appeal has different ways. [[cont'd]
. . . . The universe runs in a cyclic order.
1- First there is a period of formation
2- There is a period when it is sustained and
3- The period of destruction
4-A void before the formation of new universe.
During this period, space particles subsist and it is from this, all the matter within the new world is found. This is the fundamental cause of entire physical world. The creation has neither a beginning, nor an end . . . Prior to formation, particular universe comes in a state of emptiness where all its material elements exist in form of potential as space particles. This has been the eternal process. So universe has neither a beginning nor an end. Universe is eternally in Pravah (flow), following each other in alternative succession. It is clear that a single definite beginning is highly problematic and unscientific. [[The Hindu View on Cosmogony ]

Now, none of the views above is scientific; rather, they are philosophical (and/or theological) statements.

For example, Lucretius' Atomism is a philosophical assertion that, on observing matter and space in action, we may infer that this is all there is. (NB: He arguably begs the question of the observing and acting mind! For, to try to reduce minds to "bodies" in brains, in motion under ultimately physical forces and chance circumstances rather than logical inference, is self-referential and so undercuts the grounds for trust in our ability to reason.) And, as Lewontin reminded us above, too often it is philosophical assumptions -- whether explicit and humbly provisional or simply presumed to be "obvious" and "unquestionable" -- that control our "scientific" conclusions.

It is therefore reasonable to ask that worldview stories fit the observed facts, that they be as coherent as possible, and that they explain without being either simplistic or full of patches that only serve to stop up holes in the explanations they offer. For instance, Lahiri's objection that "a single definite beginning is highly problematic and unscientific" reminds us that the evidence pointing to a beginning of our observed cosmos, and the problems with attempts to fold this into an eternal cyclic order, point to a beginner; i.e. they raise the question of definite creation.

 When we move on to the scientific aspects, philosophy -- especially logic and epistemology (the study of knowledge) -- can help us understand the strengths and weaknesses of scientific knowledge claims; especially when we address matters of origins. 

For instance, in a famous 1973 Open University Lecture on Science and Pseudoscience, Hungarian philosopher of science Imre Lakatos observed that:
I claim that the typical descriptive unit of great scientific achievements is not an isolated hypothesis but rather a research programme . . .  Newtonian science, for instance, is not simply a set of four conjectures - the three laws of mechanics and the law of gravitation. These four laws constitute only the 'hard core' of the Newtonian programme. But this hard core is tenaciously protected from refutation by a vast 'protective belt' of auxiliary hypotheses. And, even more importantly, the research programme also has a 'heuristic', that is, a powerful problem-solving machinery, which, with the help of sophisticated mathematical techniques, digests anomalies and even turns them into positive evidence . . .  Now, Newton's theory of gravitation, Einstein's relativity theory, quantum mechanics, Marxism, Freudism, are all research programmes, each with a characteristic hard core stubbornly defended, each with its more flexible protective belt and each with its elaborate problem-solving machinery. Each of them, at any stage of its development, has unsolved problems and undigested anomalies. All theories, in this sense, are born refuted and die refuted . . . . in a progressive research programme, theory leads to the discovery of hitherto unknown novel facts.  In degenerating programmes, however, theories are fabricated only in order to accommodate known facts. [[Highlight added.]
In short, at the heart of a given scientific research programme is a core that often embeds worldview-level commitments and views about the right way to do science. 

That core is protected by a flexible armour-belt of auxiliary plausible conjectures that help it "solve" problems that seem to be significant to practitioners in the paradigm.  

But equally, the auxiliary framework typically deflects the impact of points where the theory is not working so well just now, perhaps even "solving" problems by after-the-fact explaining away on auxiliary hypotheses. 

(NB: This also means that "falsifiability" is not a  particularly good criterion for testing whether or not a given research programme is or is not "scientific," and -- since, for instance, a formerly degenerative paradigm can sometimes have a breakthrough success and suddenly take back the lead -- it also should give us pause before imposing overly hard and fast rules on what is or is not "science.")  

If a research programme  becomes ever more characterised by after-the-fact defensive deflection of problems, however, it is plainly deteriorating.  

Progressive paradigms, instead, consistently anticipate and correctly predict new and otherwise unexpected observations; even though at any given time, there probably will always be unresolved points where the body of observed facts and the theory do not currently line up. 

Moreover, at no point can such progressiveness and success actually prove the truth of the theory, whether in its auxiliary conjectures or in its core commitments.  This is because, as a matter of basic inductive logic, we cannot really ever prove that a scientific theory (let's call it "T") is true in any final sense. 

Q: Why is that so?

A: We may make some observations (O1) and recognise that if T (a best current explanation or model or theory) were so, it would logically imply [[symbol: "=>"] -- i.e. "explain" -- O1, and predict further observations P2, which may well be observed. But that is not enough to prove T true. For, something strange happens with our logic:
T => O1, on inference to best explanation.
T => P2, on prediction, where P2 may actually be observed.
But, even if (O1 + P2) are true, that does not mean T is true.
_______________________________________
So: O1 and P2 may empirically support T, but cannot establish it as true.
To see what has gone wrong, let T = "Tom is a Cat," and (O1 + P2) = "Tom is an animal."

Now, if Tom is a cat, he would certainly be an animal. But, surprise: that Tom is an animal does not mean he must be a cat; he could, say, be a pig instead. Being a cat is sufficient to be an animal, but not all animals are cats.  

That is why, though a theory can be empirically reliable in the sense of correctly predicting observations, the observations do not prove the theory true beyond correction; that is, scientific theories are explanations, not logical demonstrations established beyond reasonable dispute.
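This asymmetry of implication can be checked mechanically. The short Python sketch below (not part of the source; offered as a worked illustration) enumerates the truth table for material implication and finds the one case that blocks the move from "T => O, and O is observed" to "T is true":

```python
from itertools import product

def implies(p, q):
    """Material implication: p => q is false only when p is true and q is false."""
    return (not p) or q

# Search the whole truth table for a counterexample to the inference
# "T => O and O, therefore T" (the fallacy of affirming the consequent).
counterexamples = [
    (t, o) for t, o in product([True, False], repeat=2)
    if implies(t, o) and o and not t
]

print(counterexamples)  # [(False, True)]: T false, O true still satisfies T => O
```

The surviving row, T = False with O = True, is exactly the "Tom is an animal, but a pig, not a cat" situation: the observations hold while the theory is false.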

To see this may require a little more explanation, on Peirce's logic of abduction as applied to science. So, in steps of thought:
. . . Charles Sanders Peirce’s Logic of Abduction (also cf. here and here or even here) concept rapidly shows why [scientific explanations must always be open-ended and subjected to further tests]:
1.      Observations of the natural (or human) world produce facts, F1, F2, . . . Fn; some of which may seem strange, contradictory or puzzling.


2.      However, if a proposed law, model or theory, E, is assumed, the facts follow as a matter of course: E is a scientific explanation of F1, F2, . . . Fn.
[[This step is ABDUCTION. E explains the facts, and the facts provide empirical support for E. In general, though, many E's are possible for a given situation. So, we then use pruning rules, e.g. Occam's Razor: prefer the simplest hypothesis consistent with the material facts. But in the end, the goal/value is that we should aim to select/infer the best (current) explanation, by using comparative tests derived from the three key worldview tests: explanatory scope, coherence and power.]

3.      E may also predict further (sometimes surprising) observations, P1, P2, . . . Pm. This would be done through deducing implications for as yet unobserved situations. [[This step, obviously, uses logical DEDUCTION.]


4.      If these predictions are tested and are in fact observed, E is confirmed, and may eventually be accepted by the Scientific community as a generally applicable, empirically reliable law or theory.
[[This step is one of logical INDUCTION, inferring from particular instances to -- in the typical case, more general -- conclusions that the instances make “more probable.”] 
5.      In many cases, some longstanding or newly discovered observations may defy explanation, and sometimes this triggers a crisis that may lead to a scientific revolution; similar to Thomas Kuhn’s paradigm shift.
6.      Thus, scientific knowledge claims are in principle always provisional: subject to correction/change in light of new evidence and analysis.
7.      But also, even when observations are accurately covered/predicted by the explanation, the logic involved has limitations: E => O, the set of current and predicted observations [[2], does not entail that if O is seen then E follows: “If Tom is a cat then Tom is an animal” does not entail “Tom is an animal, so he must be a cat.”[[3]
__________

F/N 2: That is, O = { F1, F2, . . . Fn, P1, P2, . . . Pm}.

F/N 3: 
This is a fallacy, Affirming the Consequent, and is based on confusing implication with equivalence; i.e. double implication: P <=> Q means (1) P => Q AND (2) Q => P, but (1) obviously does not always imply (2) as the Tom the cat example shows.
In short, scientific knowledge claims, at best, are provisional; though they are usually pretty well tested and have across time helped us make considerable technological, health and economic progress.
The simplest way to put this is to note that scientific facts, in the normal sense, are credible observations; and "facts" should be reserved for this meaning -- if we did not see or measure it (or note it from credible record), it is not an observation. Credible observations are subject to correction, but absent such correction can be taken as likely to be real. But once we seek to explain the observations on a law or a theory, we have left the realm of fact and moved to that of models; models which may indeed be empirically reliable so far and may be good at making predictions, but that "so far" is a vital limitation. Scientific theories are not to be taken as unquestionably true. And this becomes even more the case when we move to theoretical reconstructions of the remote past beyond record.

So, we must always be open to correction of even well supported theories. 

(For example, from about 1680 to 1880, Newton's theories of motion were the best supported theories to that time in science; but then once the very small and the very fast were being studied, suddenly the predictions began to fail. From that we got Quantum Mechanics and Relativity Theory.)

It may help to represent some of this in a diagram, where bodies of fact F1, F2, . . . Fn are explained by a best current explanation or model or theory E/M/T, and are then correlated with a body of accepted theories, BOAT:

 

When we turn to origins sciences, as already noted in brief, we can see another problem. For, just as Job 38:2 – 4 points out, we were not "there" to see the facts directly. Nor do we have a generally accepted record from those who indisputably were there.

Plainly, this is a very good and deep challenge to the project of origins science.

The best answer we can give is that:
(i) once we can establish a knowledge of empirically reliable causal patterns and their traces in the present,

(ii) where also we can observe traces of the past in the present, and

(iii) where finally we can suggest a credible set of initial circumstances and change processes based on known causal patterns that would give rise to sufficiently similar traces,
____________________________________________ 
 (iv) then, we may scientifically infer on best explanation, that the suggested circumstances and dynamics are a credible — albeit inevitably provisional — origins narrative.
One thing that we have no right to do, is to claim that such an inferential reconstruction is a fact beyond reasonable dispute or doubt. (Sadly, it is necessary to note this, as there is a tendency to over-claim the factual basis for theories of origins.) 

As a result, we do not independently and directly know the indisputably true facts or even dates for what happened in the remote past. 

So also, while we can build models that reconstruct what we may think the past was more or less like, we cannot make direct, theory-independent observations that give us indisputable access to the true facts of the remote past.

Therefore,  we cannot scientifically know or explain the true facts on our origins beyond reasonable dispute or possibility of correction.


(b) Challenges and pivotal issues

Given such deep worldview roots in origins science issues and schools of thought, it is not surprising to see that this area of scientific study seems to be inevitably quite controversial. (Cf. here Joseph Kuhn's recent short (7 pp.) critique of Darwinism, to get a flavour for this, in a review article on science education in the Proceedings of the Baylor University Medical Center, Vol 25 issue 1, pp. 41 - 47.)

In the resulting debates the pivotal scientific issue is whether there are reliable empirical signs that point beyond reasonable doubt to intelligent action, even when we did not directly observe the intelligence at work, and have no generally acceptable historical record.  Thus, the uniformity principle that we explain the deep past by known causes producing like effects in the present now surfaces as a key to scientific explanations of the past:



(NB: Now, if we either directly saw the designer at work ourselves or had generally accepted records, the question of making a provisional scientific inference to what happened would not arise. Just so, the occasionally seen demand for independent "proof" of the existence of a designer in the deep past, before one is willing to accept inference from signs of design to a designer, is a selectively hyperskeptical begging of the question, and it falls apart in inconsistency. For, if one is willing to accept any number of inferences from present signs and patterns that point to chance and mechanical necessity at work in the past, but refuses to consider inferring on similarly empirically reliable signs and patterns to directed contingency in the far past of origins, it raises the suspicion that one is committed to Lewontin's a priori materialism, or is afraid of where evidence of design in the deep past might point. Certainly, the notion promoted by Lewontin and others -- that since "God" is a possible candidate designer, admitting design would overturn science in a cloud of arbitrary, chaotic miracles -- is decisively undercut by the known views of the overwhelming majority of founders of modern science: men like Newton, Kepler, Boyle, and Pascal, or even Linnaeus, Faraday, Maxwell, Kelvin, Mendel and Pasteur. For, they were theists who saw God as the Author of the ordering and sustaining Laws of Creation [[hence that odd little term: Laws of Nature], and so saw themselves as thinking God's thoughts after him. In addition, theologians tell us that, for a miracle to stand out as a sign pointing beyond the usual course of creation, there must be a "usual course," i.e. an intelligible, regular, recognisable pattern of nature -- the very foundation of science. The accusation that theism injects irrational chaos into our view of the world and so is inconsistent with successful science is a slander that exploits our ignorance of the history of science.)

So, we must now turn to a form of the "famous" explanatory filter proposed by design theorists in recent years:


Fig. G.1: Can we reliably infer from sign to intelligent cause? (SOURCE: IOSE)

Design theorists believe that it is possible to develop a soundly scientific version of the common-sensical way we recognise intelligence from apparent artifacts. For, just as:

(i) we recognise natural law from the presence of a natural regularity (e.g. an unsupported heavy object reliably falls), and 
 
(ii) just as we see chance at work when we see an apparently random statistical pattern (e.g. which face of a dropped fair die will be uppermost), also:

(iii) when we see complex, specified information (CSI) -- especially functionally specific, complex information (FSCI) -- we may conclude design (e.g. the functionally specific ASCII text of this section).
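The three-branch procedure just described can be sketched as a toy decision cascade. This is a simplification of the filter for illustration only; the parameter names and the encoding of each node as a boolean are my assumptions, not the design theorists' formal apparatus:

```python
def explanatory_filter(contingent, complex_, specified):
    """Toy cascade over the three nodes of the filter:
    (i) low contingency / regularity  -> law/necessity,
    (ii) contingent but not both complex and specified -> chance,
    (iii) complex AND specified (CSI) -> design."""
    if not contingent:
        return "law/necessity"
    if complex_ and specified:
        return "design"
    return "chance"

# (i) a dropped heavy object reliably falls: a regularity
print(explanatory_filter(contingent=False, complex_=False, specified=False))
# (ii) which face of a fair die is uppermost: contingent, not specified
print(explanatory_filter(contingent=True, complex_=False, specified=False))
# (iii) a long passage of functional ASCII text: complex and specified
print(explanatory_filter(contingent=True, complex_=True, specified=True))
```

Note the deliberately conservative ordering: design is inferred only on the last branch, after necessity and chance have had first claim.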

Obviously, such an inference is often correct. 

Indeed, just using the Internet as an example, there are many millions of test cases where such FSCI is indeed a good sign of intelligence. Further to this, where we directly/independently know the source, FSCI and (more broadly) CSI are indeed reliable signs of intelligence. But is this generally so, especially where, on origins science issues, it might just "allow a Divine Foot in the door"?
The simplest reply is that when a hypothesis stands repeated empirical test, it is by that fact empirically reliable. But, chance can in principle mimic any outcome. E.g. the text of this entire work could possibly be produced by a million monkeys banging away at keyboards at random for “long enough.”
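The "monkeys at keyboards" point can be put in rough numbers. Assuming, for illustration, a 27-symbol alphabet (26 letters plus a space) and a specific 100-character target phrase -- both figures are my own stand-ins, not from the source -- a short calculation shows why "long enough" is the catch:

```python
import math

alphabet = 27    # illustrative: 26 letters plus the space character
length = 100     # a specific, short 100-character target phrase

configs = alphabet ** length                  # equally likely strings of that length
magnitude = math.floor(math.log10(configs))
print(f"about 10^{magnitude} possible strings")  # about 10^143

# For scale: roughly 10^17 seconds have elapsed since the big bang. Even a
# trillion monkeys each typing a trillion trial strings per second would
# make only about 10^41 attempts in all that time.
tries = 10**12 * 10**12 * 10**17
print(tries / configs < 1e-100)  # True: a vanishing sliver of the space is sampled
```

So, while chance can in principle mimic any outcome, the accessible fraction of the space of possibilities collapses super-exponentially as the target lengthens.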
So there seems to be a need for some further measure of assurance. For this, John von Neumann's self-replicator (1948 – 49) is a good focal case to study. Ralph Merkle gives a good motivating context:
[[T]he costs involved in the exploration of the galaxy using self replicating probes would be almost exclusively the design and initial manufacturing costs. Subsequent manufacturing costs would then drop dramatically . . . . A device able to make copies of itself but unable to make anything else would not be very valuable. Von Neumann's proposals centered around the combination of a Universal Constructor, which could make anything it was directed to make, and a Universal Computer, which could compute anything it was directed to compute. This combination provides immense value, for it can be re-programmed to make any one of a wide range of things . . . [[Self Replicating Systems and Molecular Manufacturing, Xerox PARC, 1992. (Emphases added.)]


Fig. G.2: A schematic, 3-D/“kinematic” von Neumann-style self-replicating machine. [[NB: von Neumann viewed self-replication as a special case of universal construction; “mak[[ing] anything” under programmed control.] (Adapted, Tempesti. NASA's illustration may be viewed here, and the Cairns-Smith model here.)

Fig. G.2 (b): Mignea's schematic of the requisites of kinematic self-replication, showing duplication and arrangement then separation into daughter automata. This requires stored algorithmic procedures, descriptions sufficient to construct components, means to execute instructions, materials handling, controlled energy flows, wastes disposal and more. (Source: Mignea, 2012, slide show; fair use. Presentation speech is here.)
Von Neumann's thought on a kinematic -- physically acting (not a mere computer simulation) -- self-replicator that has the key property of additionality [[i.e. it is capable of doing something of interest, AND is able to replicate itself on a stored, coded description and an implementing facility] may be summarised in brief, as Freitas and Merkle quite nicely do for us:
 Von Neumann [[3] concluded that the following characteristics and capabilities were sufficient for machines to replicate without degeneracy of complexity:
o Logical universality – the ability to function as a general-purpose computing machine able to simulate a universal Turing machine (an abstract representation of a computing device, which itself is able to simulate any other Turing machine) [[310, 311]. This was deemed necessary because a replicating machine must be able to read instructions to carry out complex computations.

o Construction capability – to self-replicate, a machine must be capable of manipulating information, energy, and materials of the same sort of which it itself is composed.

o Constructional universality – In parallel to logical universality, constructional universality implies the ability to manufacture any of the finitely sized machines which can be formed from specific kinds of parts, given a finite number of different kinds of parts but an indefinitely large supply of parts of each kind.
Self-replication follows immediately from the above, since the universal constructor* must be constructible from the set of manufacturable parts. If the original machine is made of these parts, and it is a constructible machine, and the universal constructor is given a description of itself, it ought to be able to make more copies of itself . . . .

Von Neumann thus hit upon a deceptively simple architecture for machine replication [[3]. The machine would have four parts – (1) a constructor “A” that can build a machine “X” when fed explicit blueprints of that machine; (2) a blueprint copier “B”; (3) a controller “C” that controls the actions of the constructor and the copier, actuating them alternately; and finally (4) a set of blueprints f(A + B + C) explicitly describing how to build a constructor, a controller, and a copier. The entire replicator may therefore be described as (A + B + C) + f(A + B + C) . . . .

Von Neumann [[3] also pointed out that if we let X = (A + B + C + D) where D is any arbitrary automaton, then (A + B + C) + f(A + B + C + D) produces (A + B + C + D) + f(A + B + C + D), and “our constructing automaton is now of such a nature that in its normal operation it produces another object D as well as making a copy of itself.” In other words, it can create useful non-self products in addition to replicating itself and has become a productive general-purpose manufacturing system. Alternatively, it has the potential to evolve by incorporating new features into itself. Observers [[314-316] have noted that von Neumann’s early schema was later confirmed by subsequent research on the molecular biology of cellular reproduction . . . 

[[
Robert A. Freitas Jr. and Ralph C. Merkle, Kinematic Self-Replicating Machines, 2004; Section 2.1.1 (NB: the book is online, here.)]
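The (A + B + C) + f(A + B + C) architecture can be sketched as a toy simulation. The Python function names below are illustrative stand-ins (nothing here is von Neumann's own notation beyond A, B, C, and the blueprint f), a minimal sketch of the replication cycle rather than a real constructor:

```python
def construct(blueprint):
    """A, the constructor: build machinery (here, a parts tuple) from an
    explicit blueprint describing that machinery."""
    return tuple(blueprint)

def copy_blueprint(blueprint):
    """B, the copier: duplicate the description itself, unread."""
    return list(blueprint)

def replicate(machine):
    """C, the controller: actuate A and B alternately -- build new
    machinery from the blueprint, copy the blueprint, and hand both
    to the daughter. machine = (machinery, blueprint)."""
    machinery, blueprint = machine
    return (construct(blueprint), copy_blueprint(blueprint))

# (A + B + C + D) + f(A + B + C + D), where "D" is an extra useful automaton.
parent = (("A", "B", "C", "D"), ["A", "B", "C", "D"])
daughter = replicate(parent)
print(daughter == parent)  # True: machinery AND blueprint both propagate
```

Note that the daughter equals the parent in content but is a distinct object, and that carrying the extra part "D" costs nothing in the logic: the blueprint is simply longer, exactly as von Neumann observed.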
It is worth noting an apparent paradox as well, from Section 2.1.2:
Drexler [[208] pithily observed: “It may seem somehow paradoxical that a machine can contain all the instructions needed to make a copy of itself, including those selfsame complex instructions, but this is easily resolved. In the simplest approach, the machine reads the instructions twice: first as commands to be obeyed, and then as data to be copied. Adding more data does not increase the complexity of the data-copying process, hence the set of instructions can be made as complex as is necessary to specify the rest of the system. By the same token, the instructions transmitted in a replication cycle can specify the construction of an indefinitely large number of other artifacts.”
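Drexler's resolution -- the same instructions are read once as commands to obey and once as data to copy -- is precisely how a software "quine" (a program that prints its own source) works. A minimal Python sketch, offered as an illustration rather than anything from the source:

```python
import io
from contextlib import redirect_stdout

# The blueprint string is used twice: executed as code, and copied as data.
s = 's = %r\nprint(s %% s)'
program = s % s            # the complete, self-describing program text

buf = io.StringIO()
with redirect_stdout(buf):
    exec(program)          # run the program; it prints its own source

print(buf.getvalue() == program + "\n")  # True: output reproduces the source
```

The instruction string does not have to "contain itself" infinitely; it is finite data that, when also executed, regenerates the whole, resolving the seeming regress just as Drexler describes.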
Now therefore, following von Neumann generally, such a machine capable of doing something of interest with an additional self-replicating facility uses . . .

(i) an underlying storable code to record the required information to create not only (a) the primary functional machine [[here, a Turing-type “universal computer”] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility; 
 
(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with 
 
(iii) a tape reader [[called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling: 
 
(iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by 
 
(v) either: 
 
(1) a pre-existing reservoir of required parts and energy sources, or 
 
(2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

Also, parts (ii), (iii) and (iv) are each necessary for, and together are jointly sufficient to implement, a self-replicating machine with an integral von Neumann universal constructor.

That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [[Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]

This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources:


(c) Complexity, specific function, irreducibility and Islands of Function:

Immediately, as irreducible complexity walks in the door, we are looking at islands of organised function, for both the machinery and the information, in the wider sea of possible (but mostly non-functional) configurations.

That is because such IC systems have cores of well-matched necessary parts that must all be present and must all be properly organised on the correct "wiring diagram" to effect the required function. Citing key definitions, courtesy ISCID's online encyclopedia:


Michael Behe's Original Definition [[of IC]:
A single system composed of several well-matched, interacting parts that contribute to the basic function of the system, wherein the removal of any one of the parts causes the system to effectively cease functioning. (Darwin's Black Box, 39)

William Dembski's Enhanced Definition:
A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, nonarbitrarily individuated parts such that each part in the set is indispensable to maintaining the system's basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system. (No Free Lunch, 285)

This requirement for a core set of jointly necessary, properly arranged parts to effect the function of a complex object sharply constricts the acceptable set of configurations of the parts, relative to the vastly larger number of possible configurations. In effect, if we pass a tornado through a robotics junkyard somewhere near Silicon Valley, it will doubtless arrange the parts in many possible ways. But very few of these possible ways will implement a functional vNSR such as we saw above; far and away, the overwhelmingly likely outcome would be a mess that has no function. In short, outside rather special, functionally specific -- thus, isolated -- information-rich hot (or, "target") zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time.

That is, the components must match, must be put together in accordance with what well-known origin of life researcher J S Wicken has called a "wiring diagram," and must be properly hooked up. So, we have quite naturally sharply restricted the set of possible components and how they must be arranged, if they are to be effectively functionally organised. Some variation is possible but as an easily observationally confirmed rule, only a relatively few of the possible arrangements of parts will form a functional whole. Islands of functional configurations in a vastly larger sea of non-functional possibilities, is a natural result of what is needed for a complex, multi-part entity to work, not a dubious or question-begging assumption.


As Angus Menuge highlighted for the bacterial flagellum, this extends to complex biological systems, especially where there is an irreducible core of necessary parts:

For a working [[bacterial] flagellum to be built by exaptation [[i.e. taking over and stringing together existing parts that have different functions or even no function at the moment], the five following conditions would all have to be met:
C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.
C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.
C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.
C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.
C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.
(Agents Under Fire: Materialism and the Rationality of Science (Rowman & Littlefield, 2004), pp. 104-105. HT: ENV. Emphases added.)

So, as C1 - 5 exemplify, we can easily see how, once the set of possible configurations is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.

Further, let us consider a tape of 1,000 bits (= 125 bytes), which is plainly grossly insufficient to specify the parts and instructions for a von Neumann replicator. 

However, the number of possible configurations of 1,000 bits is 2^1,000 ≈ 1.07 * 10^301, more than ten times the square of the roughly 10^150 states that the some 10^80 atoms of our observed universe would take up across a reasonable estimate of its lifespan. So, viewing our observed universe as a search device, it would scan less than 1 in 10^150th part of even so "small" a configuration space. That is, it could not carry out a credible "search" for islands of function, making such islands sufficiently isolated to be beyond the reasonable reach of a blind search.
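The arithmetic behind these figures is easy to check with a short Python sketch (Python's big integers make the computation exact; the 10^150 bound on universe states is taken from the text, not derived here):

```python
import math

# Check the 1,000-bit configuration-space figures quoted above.
n_configs = 2 ** 1000                        # states of a 1,000-bit tape
print(f"2^1000 ≈ {float(n_configs):.3e}")    # ≈ 1.072e+301

universe_states = 10 ** 150                  # the text's bound on universe states

# Ratio of the tape's space to the SQUARE of that bound:
print(n_configs // universe_states ** 2)     # 10, i.e. "more than ten times"

# Fraction of the space the universe-as-search could scan:
print(f"{universe_states / n_configs:.1e}")  # ≈ 9.3e-152, under 1 part in 10^150
```

The last line is the "less than 1 in 10^150th part" claim: even granting one configuration sampled per available state, almost all of the space goes unexamined.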

But this, too, sits in a wider context.
 
For, while our technology has not been able as yet to create such a three-dimensional, irreducibly complex real world self-replicator [[as opposed to computer cellular automaton models], in fact such devices are common: the living cell. On the role played by mRNA, tRNA and Ribosomes in such cells, Merkle notes:

We can view a ribosome as a degenerate case of [[a Drexler] assembler [[i.e. a molecular scale von Neumann-style replicator]. The ribosome is present in essentially all living systems . . . It is programmable, in the sense that it reads input from a strand of messenger RNA (mRNA) which encodes the protein to be built. Its "positional device" can grasp and hold an amino acid in a fixed position (more accurately, the mRNA in the ribosome selects a specific transfer RNA, which in its turn was bound to a specific amino acid by a specific enzyme). The one operation available in the "well defined set of chemical reactions" is the ability to make a peptide bond [[NB: This works by successively “nudging” the amino acid-armed tip of the codon- matched tRNA in the ribosome's A site to couple to the amino acid tip of the preceding tRNA (now in the P site) and ratcheting the mRNA forward; thus elongating the protein's amino acid chain step by step] . . . . [[T]he ribosome functions correctly only in a specific kind of environment. There must be energy provided in the form of ATP; there must be information provided in the form of strands of mRNA; there must be compounds such as amino acids; etc. etc. If the ribosome is removed from this environment it ceases to function.

[[Self Replicating Systems and Molecular Manufacturing, Xerox PARC, 1992. (Parentheses, emphases and links added. Notice as well how the concept of functionally specific complex information naturally emerges from Merkle's discussion.)]


That is: in living cells, DNA strands of typically 100,000 to 4,000,000,000 four-state digital -- i.e. discrete rather than continuous-state -- elements provide a "tape" that is transcribed and read in segments as required by molecular machines in the cell, and is used to carry out metabolic activity: creating and organising the proteins and other molecules of life, which carry out its functions. And the configuration space specified by just 100,000 four-state elements has 4^100,000 ≈ 9.98 * 10^60,205 possible states. (By comparison, the whole observed universe, across its thermodynamically credible lifespan, can only take up about 10^150 atomic-level, Planck-time states.)
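The DNA-tape figure can be verified the same way; 4^100,000 is far too large for floating point, so the sketch below works with the base-10 logarithm instead:

```python
import math

# Check the DNA configuration-space figure quoted above: 4^100,000.
log10_states = 100_000 * math.log10(4)       # ≈ 60205.999
exponent = math.floor(log10_states)
mantissa = 10 ** (log10_states - exponent)
print(f"4^100,000 ≈ {mantissa:.2f} * 10^{exponent}")  # ≈ 9.98 * 10^60205
```

This reproduces the 9.98 * 10^60,205 figure in the text, dwarfing the ~10^150 states available to the observed universe.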

Thus, we see a needle-in-a-haystack challenge, which becomes an insuperable task even with so unrealistically small a blueprint as 1,000 bits. 

So, it is at least plausible that cell based life is an artifact of design -- a conclusion that is very unwelcome in the Lewontinian materialist camp. 


(d) The Dawkins Mt Improbable & infinite regress objections, corrected:
 
Now, a common attempted rebuttal or dismissal to such reasoning is to claim that spontaneous, natural chance variation [[e.g. through mutations] and natural selection work together to create new functional biological information, so that chance alone does not have to do it all in one step. In Dawkins' terms, there is an easy, step- by- step path up the back slope of “Mt Improbable.” 

But this is an error, for quite plainly the claimed source of novel biological information is the variation, not the selection.

For, as Darwin himself pointed out in the introduction to Origin, “any being, if it vary however slightly in any manner profitable to itself . . . will have a better chance of surviving, and thus [[will] be naturally selected.”

However, if there is no new function that comes from natural variation – and, for the very first life form, this must include replication capacity itself (with the above requisites as analysed by von Neumann) -- no natural selection of advantageous variation will be possible. That is, CULLING-OUT of inferior varieties based on their relative disadvantage in an environment cannot – thus, does not – explain or cause the ORIGIN of the varieties and of their underlying genetically coded biological information.
 
Another objection championed by Dawkins is the infinite regress of ever more complex inferred designers, so in his view inference to design explains nothing. 

Rabbi Moshe Averick has a rebuttal that is well worth pondering, however:
The entire plot of the classic film, 2001: A Space Odyssey is based on . . . [[an] obvious principle. At a dramatic moment in the film, when a rectangular monolith is discovered buried on the moon, it is clear to those who discover it (and accepted as absolutely logical and reasonable by everyone watching the movie) that this is unmistakable proof of alien life. After all, a precisely measured monolith couldn't possibly have made itself or "evolved naturally". . . . The human body is an incredible piece of machinery; who put it together? It certainly required a great deal more sophistication to build a human being than to construct a rectangular monolith [[or a Jumbo Jet, or a calculator]  . . . .

As it turns out, Darwinian evolution is not, as the skeptic would have us believe, a testimony to what can emerge from undirected processes; it is a testimony to the unimaginably awesome capabilities and potential contained in the first living cell and its genetic code. A paradigm-shifting insight emerges from all this: Contrary to popular belief, not only is Darwinian evolution not the cause or explanation of the staggering complexity of life on this planet; Darwinian evolution itself is a process which is the result of the staggering complexity of life on this planet . . . All existing life is nothing more than a variation on a theme. All the "organized complexity" of life is a variation on the "organized complexity" of the first living organism . . . .

[[I]f it is statistically improbable that a 747 [[as Sir Fred Hoyle suggested and as prof Dawkins wishes to rebut by his infinite regress of complexities argument] could have originated by chance, then it is an even greater statistical improbability that the designer of the 747 originated by chance. I agree wholeheartedly. Both the 747 and the human creators of the 747 are here not by chance, but by design! . . . .


The philosophical problem that must be addressed is the following: How do we escape from the dilemma of the infinitely regressing series of creators (i.e., whoever created me would have to be at least as complex and sophisticated as I am, and therefore he would also need someone to create him, and so on.)? To state this dilemma in a slightly different way: Since all agree that at one time life did not exist and now it does exist, there must be an actual beginning to the process, it cannot go back infinitely . . . .

Properly presented, the question is as follows:

Any functionally complex and purposefully arranged form of physical matter (i.e. a Boeing 747, a calculator, or a bacterium), must itself have a creator at least as complex as the object in question. How do we (or can we) escape an infinite regression of creators?
That which demands and requires a preceding creator is a complex arrangement of physical matter. With this precise formulation of the question, the answer becomes obvious. At some point in the progression, we are faced with the inescapable conclusion that there must be a creator who is not physical matter at all; a creator who does not need to be created; a creator who is not subject to the limitations of cause and effect. There must be a creator who is the first, who is the beginning of it all. There must be a creator who is outside of the physical universe. A creator who is outside of the physical universe, not existing in time and space, and composed of neither matter nor energy, does not require a preceding creator. There is nothing that came before him. He created time, he does not exist in time; there is no "before". ("What happened before the big bang? The answer is there was no ‘before.’ Time itself began at the big bang." -- Physicist Dr. Paul Davies) We are created; along with time, space, matter, and energy. We are subject to the limitations of a time/space bound series of causes and effects. The creator simply is. [[Rabbi Moshe Averick, "Turns out Richard Dawkins' watchmaker has 20/20 vision after all," Aish.com, Feb. 5, 2011. Well worth reading in toto, here.]
 Thus, Dawkins' proposed infinite regress in the end arguably points in a direction he may not have anticipated!

This holds even granting him the whole world of Darwinian evolution of life from a universal common ancestral cell. For, as others have pointed out, the gap between a pond full of chemicals and a living bacterium is far wider than that between a bacterium and, say, an elephant; which, after all, grows from a single living cell by a process of regulated multiplication, specialisation and co-ordinated development.


Let's therefore refocus on that core living cell: the cell uses coded, discrete state/digital, algorithmic – thus symbolically representative – information.  
 
But, nothing in the direct working of the four fundamental physical forces (strong and weak nuclear, electromagnetic and gravitational) provides a base for the origin of sets of symbols and rules for interpreting and applying them through abstractly representing objects and/or actions on those objects. 

So, some further worldview level issues follow for evolutionary materialism:

a --> If one's worldview is Lewontinian- style a priori, evolutionary materialism, (cf. also Johnson's critique) then one only has access to the undirected fundamental forces of physics, the basic properties of matter, energy and time as causal factors to explain anything and everything.   
 
b --> But undirected strong and weak nuclear, electromagnetic and gravitational forces acting on material objects that just happen to be there in one purposeless pattern or another some 13.7 BY ago, or 4.5 – 3.8 BY ago have nothing – apart from appealing to blind chance that somehow by happenstance hit on just the right configurations, that is “lucky noise” -- to give rise to language (with its use of abstract symbols and meaningful rules).

c --> And, as has already been highlighted, the functionally specific complex organisation required to do that implies configuration spaces that are so vast that the entire resources of the observed cosmos, acting since the big bang, simply could not scan enough of the space to make a difference.

d --> Therefore it is fair comment to conclude that blind, undirected physical forces on the scope of our observed cosmos have no credible power to bring about meaningfully functional information, which just happens to lie at the core of self-replicating, cell-based life.
   
e --> One typical response of the committed evolutionary materialist is to try to deny "real" meaningfulness, coding and algorithmic, step by step, purposeful function in the cell. But that is simply to resort to denial of patent facts evident since DNA's structure and the existence of the genetic code were identified. As Crick wrote to his son on March 19, 1953, right after making the epochal discovery:

"Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another)."   
 
f --> Another is to claim that which we all know exists is only "apparent" design or purpose, but in reality is only the illusion of it. This of course begs the question and looks suspiciously like a closed mind: what reasonable degree of evidence would change such a mind to accept that the design is apparent for the very good reason that it is most likely real?
 
g --> So, we note Lewontin's clear, all too revealing implication that for the committed evolutionary materialist no degree of evidence would ever suffice:
 
. . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [[“Billions and Billions of Demons,” NYRB, Jan 1997.]

h --> Plainly, we can only correct such by resort to stronger medicine: argument that in part works by reductio ad absurdum.
 
i --> For instance, the a priori materialist cannot deny the reality of meaningful language/ information without self-contradiction: to make the required denial is . . . to necessarily use purposeful language. (So, immediately, the materialist must concede that language is based on symbolic, purposeful representation and on rules for the meaningful combination of such symbols.)

j --> Equally, the materialist cannot coherently deny that intelligences routinely create such meaningful, symbolic information. And, once a reasonable threshold of complexity is passed we have never observed such messages and languages originating by chance processes and blind forces. 
(This, for the excellent reason that such specifically functional organisation is far too isolated in the space of possible configurations to be reasonably expected to happen by chance and/or blind, undirected forces on the scope of our observed cosmos. [[And, to then resort to a proposed quasi-infinite array of “universes” is to jump to a speculation without observational evidence, i.e. philosophy not science. Worse, even such a speculation raises the question of the functional specificity and organisation of the universe-making bread factory, and the point that even in that case, the implication that our particular sub cosmos is locally finetuned would still be just as significant . . .] )
k --> But also, as von Neumann showed, by the very need to organise a self-replicating system, just such functional codes and linguistic information lie at the heart of the cell. (And, without already functioning reproduction, natural selection by differential reproductive success is by definition impossible.)

l --> Thus, complex symbolic, algorithmically functional language expressed in coded symbols arranged according to rules of meaning is prior to self-replicating cell based life (and therefore any possibility of evolution by natural selection), while we only know one source for such functional complex organisation: prior intelligence.

m --> So, on inference to best explanation, evolutionary materialistic naturalism fails.   
 
But, such news is not welcome in an age dominated by just such a priori evolutionary materialistic naturalism. Consequently, the controversy over signs of intelligence and inference to design is not unexpected.    
 
However, it is not at all unusual in the history of science to see controversies over emerging theories that may well unsettle a long established scientific order. 

And, plainly, the theory of Intelligent Design (ID) is not simply the easily dismissed and demonised "Creationism in a cheap tuxedo" or “thin edge of a theocratic wedge” of ever so much anti-ID rhetoric.
(e) Towards a way forward

We have now identified the key issues, views and ideas, and suggested ways to address them. That solution requires:
1 --> Clarifying origins vs. operational science, in light of philosophy of science-based, history of science anchored insights on the nature, strengths and limitations of science and its major methods.
2 --> Developing an initial critical appreciation of major alternatives, issues and trends.
3 --> Re-articulating origins science, from the ground up in a survey that covers the range from hydrogen to humans: cosmological origins, solar system origins, planetary origins, life origins, biodiversity origins, humanity origins, origin of mind.
4 --> Creating an independent mechanism for implementing such a course on a community basis.
Having covered steps 1 – 3 above, we may now look at the first specific origins science unit, Cosmology.