FOCUS: Cosmology, the study of the origin of the universe, sets the stage for all other aspects of origins science. So, properly, it is the first detailed unit. As an introduction, the way the sky appears is surveyed, using the classic celestial sphere model. Then, through the H-R diagram, the nature and life cycle of stars are explored in light of collapsing hydrogen gas ball models, with some reference to solar system formation. After that, galaxies and the big bang model of cosmological origins are explored. The linked questions of conventional timelines for the past, and of the techniques and issues/limitations of dating, are also briefly surveyed.
___________________
TOPICS:
INTRODUCTION --> The celestial sphere (a) Stars and solar systems --> The HR diagram and the hydrogen ball model of the stellar life cycle --> Solar system formation models (b) Galaxies, the wider observed cosmos and the fine-tuning issue --> Hubble red shifts and cosmological expansion --> The Big Bang theory --> Cosmological fine-tuning (c) Timelines --> a typical timeline --> general challenges faced by dating techniques --> specific challenges faced by dating techniques NEXT: Origin of Life |
During the day, the sky appears to be a light blue, upturned bowl of several miles' radius (i.e. the distance to the local horizon). In it, the Sun moves more or less from east to west in an arc each day, and appears as a bright yellow-white disk, a little less than the angular size of a 1-inch coin held at arm's length. Across the year, the Sun traces a path against the backdrop of the fixed stars, known as the Ecliptic.
At night, this is transformed, and we often see a blue-black backdrop with stars shining as bright, jewel-like pinpoints, scattered in a definite pattern: the constellations. If we are in the northern hemisphere, we may see how the pointer stars in the Big Dipper point to Polaris, the pole star. This star is very near to the sky's north pole, so that it seems almost fixed in position even as the other stars apparently rotate from east to west in the course of a night, until sunrise “washes” them out. Across the months, certain stars wander against this backdrop and were thus termed “wanderers” by the ancients (Greek, planetes), i.e. “planets.” And of course the Moon rules the night sky, taking about a month to circle the sky, and going through phases as it does so.
Fig. G.3: The Celestial Sphere – the heavens as they appear from the surface of the Earth.
Fig. G.3a: "Illustration of the spherical Earth in a 14th century copy of L'Image du monde (ca. 1246)." [[NB: the cite is from Wikipedia. Correcting the C19 myth that people of the middle ages thought the earth was flat, due to ignorance-inducing dogmatic ideas. Notice, how people in the antipodes were seen as upright relative to the Earth's centre and their local surface, but inverted relative to one another in absolute space. (Indeed, viewing the image as a cartoon with a brown-cloak and a blue-cloak character, the idea of the illustration is evidently that of walking around the earth in opposite directions and meeting together face to face in the antipodes, inverted in absolute space but upright in the local space.)] (Source: Wiki.)
Wondering about and seeking to explain the heavens is one of the ancient roots of science, and it gave rise to the longest running scientific theory ever: Ptolemy's view of a cosmos with a distant celestial sphere with the stars as shining points on it. The planets (including the Sun) were seen as rotating on perfect circles, with subsidiary circles to account for times when they seem to loop back in their path across the sky. In the middle, the imperfect, changing, partly chaotic earth lies at the sump -- this is more accurate to the underlying Platonic view than “centre” suggests – of the cosmos. For, on the Platonic view, the Demiurge made a rather imperfect copy of eternal forms, from primordial matter. We and our somewhat chaotic world of change and decay are the result.
Wikipedia (cited here as speaking, per decisive evidence, against the tendency of popular modern secularist thought) summarises the Ptolemaic model (named for Claudius Ptolemaeus, the Greek astronomer who worked from Alexandria, Egypt's Greek capital city), closely paraphrasing Toomer's translation of his major work on astronomy, the Almagest -- the dominant work on astronomy until the 1500's:
- The celestial realm is spherical, and moves as a sphere.
- The Earth is a sphere.
- The Earth is at the center of the cosmos.
- The Earth, in relation to the distance of the fixed stars, has no appreciable size and must be treated as a mathematical point. [NB: Book I, ch 5]
- The Earth does not move.
Eventually, there were maybe eighty circles in all in a rather complex and convoluted model that led astronomers to be uneasy with what they had constructed. Earth was not the exact centre either, as Michael Flynn has aptly pointed out:
Fig G.3b: Planetary motion in the refined Ptolemaic system. The centre of the Deferent for each planet was at a mid-point between the Earth and the point where the centre of the Epicycle would appear to move at uniform speed, called the Equant (or punctum aequans). The epicycle allows for the variations in speed and replicates apparent reversals in a loop against the background stars that may be seen with superior planets, e.g. Mars. (As a mark of how impressive this 1800+ year old model is, many modern Planetariums, using gears to replicate stellar and planetary motion, effectively are implementing a mechanical analogue of the model. That is, we here have a way to do quasi-ellipses with sets of carefully arranged circles.)
Fig G.3c: Mars looping in its path against the background of stars in Taurus. (And yes, planets move near the ecliptic, which is Earth's orbital plane projected onto the sky as the annual path of the Sun. Hence the obvious significance to astronomy of the twelve Zodiacal constellations, which of course were given occult significance through Astrology. As late as Kepler, Astrology sometimes paid the bills for Astronomy. Navigation and map-making etc. also helped pay the bills for science, being of both commercial and military or naval significance. To this day, astronomy and astrology are often confused in the popular mind.) [[Source: Kuhn, Thomas S., The Copernican Revolution (Harvard UP, 1957/1985), p. 48; cf. here.]
Fig G.3d: The three main contenders c. 1600, and how the situation would have looked to Tycho Brahe. (NB: This is a case where the "simplest" model, and one that fitted evidence available at the time, was wrong. An improved, simplified version of a more complex model turned out to be correct. Though, final direct empirical support in key cases took until the 1800's.) Brahe hired Kepler to carry out the mathematics to sort out the orbit of Mars, and in the course of his labours Kepler concluded that (I:) planets -- including the Earth -- orbit the Sun in elliptical orbits with the Sun at one focus; (II:) the speed of a moving planet varies so that the line from the Sun to that planet sweeps out equal areas in equal times; and (III:) the square of a planet's orbital period varies as the cube of the semi-major axis of its orbital ellipse. (NB: This actually PROMOTED Earth into the heavens!) Newton later showed that Kepler's laws are consequences of universal gravitation, where F = G*Mm/r^2.
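To make the Kepler-Newton connection concrete, here is a minimal Python sketch (illustrative only, using rounded constants) that computes orbital periods from Newton's law, T = 2*pi*sqrt(a^3/(G*M)), and checks that T^2/a^3 comes out nearly the same for Earth and Mars, as Kepler's third law requires:

```python
# Minimal sketch: Kepler's third law as a consequence of Newtonian gravity.
# T = 2*pi*sqrt(a^3 / (G*M)) for an orbit of semi-major axis a about central mass M.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

def orbital_period_days(a_au, m_central=M_SUN):
    """Orbital period (days) for semi-major axis a_au (in AU) about mass m_central (kg)."""
    a = a_au * AU
    t_seconds = 2 * math.pi * math.sqrt(a**3 / (G * m_central))
    return t_seconds / 86400.0

for name, a_au in [("Earth", 1.000), ("Mars", 1.524)]:
    t = orbital_period_days(a_au)
    # Kepler III: T^2 / a^3 should be (nearly) the same constant for every planet.
    print(f"{name}: T = {t:.0f} days, T^2/a^3 = {(t / 365.25)**2 / a_au**3:.3f}")
```

The sketch recovers roughly 365 days for Earth and roughly 687 days for Mars, with the Kepler-III ratio essentially constant.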
Between 1543 and 1700, and amidst considerable controversy [[cf. Flynn here and here], the longstanding Ptolemaic theory -- dominant probably since c. 150 - 180 AD -- was replaced through advances led by Copernicus, Galileo, Brahe, Kepler and finally Newton. On the revised view -- which had vast worldview impacts (as Kuhn discusses) -- the sun is a star, and the earth and other planets orbit it under gravitational forces and in accordance with Newton's laws of motion. Other stars were believed to be suns, just at a great distance. Then, across the 1800's, as the wave nature of light was discovered and the spectra of light from the stars were first explored, this was confirmed. In the same century, telescopic measurements finally reached a level of precision that allowed stellar parallaxes to be definitively measured, giving us the first yardstick to the stars.
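As a quick illustration of that parallax yardstick (a sketch only, leaving aside the practical measurement difficulties): the distance in parsecs is simply the reciprocal of the parallax angle in arc seconds.

```python
# Minimal sketch: the first rung of the stellar distance ladder.
# A star's distance in parsecs is the reciprocal of its parallax angle in arc seconds.
def distance_parsecs(parallax_arcsec):
    return 1.0 / parallax_arcsec

# 61 Cygni, the first successful stellar parallax (Bessel, 1838), roughly 0.31 arcsec:
p = 0.31
d_pc = distance_parsecs(p)
print(f"parallax {p}\" -> {d_pc:.1f} pc (~{d_pc * 3.26:.1f} light years)")
```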
However, as the 1900's dawned, the Newtonian view of the world also turned out to be inadequate. By about 1930, Relativity theory had been introduced to explain the behaviour of fast-moving bodies, with Quantum theory explaining the strange world of the very small. One vital consequence was the recognition that mass and energy are two inter-changeable forms of the same thing, related by the equation E = m*c^2. These developments paved the way for modern cosmology, starting with the nature and origin of stars:
(a) Stars and solar systems
In 1911 – 1913, Ejnar Hertzsprung and Henry Norris Russell plotted star colour/ spectral class against absolute magnitude of stars. Instead of a random scatter, they saw a pattern of distinct bands. In a modern, simplified form:
Fig. G.4: The Hertzsprung- Russell [[H-R] diagram. [[Cf. more detailed illustration and discussion of spectral classes and luminosity types here. Also, cf. a chart with specific example stars, here.] (IOSE; cf. Here)
Fig. G.4a: H-R diagrams for the M67 and NGC 188 open clusters, showing apparent cluster-age main sequence turn-offs with stars headed for the Giant bands. M67 is estimated at ~ 4BY, and NGC 188 at ~ 5 BY. (SOURCE: Wiki CCA, by Worldtraveller.) [[NB: Young Earth Creationist [[YEC] views here and here -- as well as a wider point here (cf here for background) and a new suggestion by Vardiman and Humphreys (details here) -- give interesting counter-points that should be addressed.]
Fig G.4b: The source of stellar energy, mass-energy release through nuclear fusion. Notice, how, if a star begins to burn heavier and heavier elements, it releases less energy per nucleon, and so will burn faster and faster. If massive enough, it will form an iron core, and may then undergo a supernova explosion. This is held to be the driving force for the life cycle of stars, considered as balls of largely Hydrogen gas that initially collapse under mutual gravitational attraction and thus heat up enough to trigger a fusion furnace in their cores. (Notice, also, the relatively close stability peaks of C-12 and O-16.)
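The energy budget behind Fig. G.4b can be illustrated with a short sketch (rounded atomic masses assumed) of the mass defect when four hydrogen nuclei end up as one helium-4 nucleus, per E = m*c^2:

```python
# Minimal sketch: mass-energy release when hydrogen fuses to helium (E = m*c^2).
# Atomic masses in unified mass units (u); 1 u corresponds to ~931.5 MeV of rest energy.
M_H1 = 1.00783    # hydrogen-1, u
M_HE4 = 4.00260   # helium-4, u
U_TO_MEV = 931.5

dm = 4 * M_H1 - M_HE4                 # mass lost per He-4 nucleus formed, in u
print(f"mass defect: {dm:.5f} u (~{dm / (4 * M_H1):.2%} of the input mass)")
print(f"energy released per He-4: ~{dm * U_TO_MEV:.1f} MeV")
```

The roughly 0.7% mass loss, about 27 MeV per helium nucleus formed, is what powers a main-sequence star on the standard model.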
Using the Sun (often called "Sol" and represented by a circle with a dot in the centre) as a point of reference, and making particular reference to an excellent tutorial at the Australia Telescope outreach site:
1 --> Stars tend to cluster in four bands, the major one being an "S"-shaped main sequence.
2 --> Our sun sits on this sequence as a G2 (yellowy-white) spectral class star of absolute magnitude about 4.83. Its surface temperature is about 5,780 K. (NB: Absolute magnitude is the magnitude of a star seen at a standard distance of 10 parsecs.)
3 --> This main sequence is believed to be the result of hydrogen-rich gas balls collapsing from clouds similar to observed interstellar scale giant molecular clouds (e.g. in Orion).
4 --> Such a gas ball would heat up as it clumps together; similar to how a bicycle pump heats up as we compress the gas inside.
5 --> That would happen until the gas ball is dense enough and hot enough for Hydrogen nuclei to fuse to create Helium nuclei, releasing energy due to the resulting loss of mass under the Einstein mass-energy relationship: E = m*c^2 .
6 --> A star is born, and the radiation pressure from the photons helps to stabilise the newly born model star against further collapse. (Video summary of the typical current view follows.)
7 --> The more massive the gas ball, the hotter the resulting core temperature, and the brighter and bluer the resulting model star's surface, giving us a relationship between luminosity L and mass M: L/Lsol = (M/Msol)^n, where n ranges from about 2.5 to 4 depending on where the star sits on the “S.” (A rough numerical sketch of this scaling, and of the resulting lifespans, follows just after this list.)
8 --> The model predicts that hot large stars burn up relatively quickly, and cool small ones last much longer.
9 --> In this light, our sun, a G2 dwarf, is thought to be about half way through a 10- billion year main sequence lifespan.
10 --> In light of observations of apparent planet forming disks in Orion, it is also thought that the planets formed from such a disk; difficulties with angular momentum distribution notwithstanding.
11 --> However, relatively few long-lived model stars would be in zones of spiral galaxies that have enough heavy elements while being sufficiently isolated to have stable inter-stellar environments.
12 --> Worse, the relatively high fraction of observed hot Jupiter exoplanets moving in orbits close-in to their parent stars – many of which have sharply misaligned orbits – points to terrestrial planets similar to our own being relatively rare.
Q: Why is that?
A: Because, hot Jupiters are believed to form beyond the “frost line” of a stellar system and to migrate inward (to as close in as ~ 0.1 AU from the star) through having their orbits disrupted, most likely by other gas giants. (This will also trigger a general process of realignment and bombardment of planets in a planetary system.) But, for the apparently high proportion of such “roasters” that have sharply misaligned orbits, Didier Queloz of Geneva Observatory has suggested that "[[a] dramatic side-effect of this process is that it would wipe out any other smaller Earth-like planet in these systems."
13 --> This lends support to the contention that long lifetime stable stars within galactic habitable zones having terrestrial planets orbiting in circumstellar habitable zones, with protective gas giants beyond the frost line and with large moons stabilising their rotational axes will be fairly rare. In short, it is increasingly credible that ours is indeed a “privileged planet.” (Privileged Planet's Amazon page. Full Youtube P/P video.) Video:
14 --> After burning up its core, model stars would undergo cycles of collapse and burning of elements in the shell around the core, or if they get hot enough, ignition of further elements in the core.
15 --> Such stars become giants and/or super-giants, with many possible cycles of burning, until in some cases -- if big enough -- they form an iron core, which will no longer release energy by fusion. This would set the stage for a supernova, cf. 17 below.
16 --> Smaller model stars end up as white dwarfs -- the upper mass limit of these being 1.44 solar masses -- which then cool down only very slowly from about 18,000 °F (~ 10,000 °C). Beyond that limit, the atoms are unable to resist crunching down to form a neutron star.
17 --> If a star is big enough, at least ~ 9 solar masses, it may eventually form Iron, an element that will not release net energy if its atomic nuclei fuse to form yet heavier elements. These stars undergo a sudden collapse and explosion, emitting a flash of light that can be brighter than a galaxy, i.e. a supernova, specifically a Type II.
(White dwarfs that accrete materials from companions can also undergo a supernova. Beyond this, very large stars ~ 130 or 140 - 250 solar masses, with very low heavy element content, are believed to undergo pair instability supernovas; whereby high energy gamma photons form positron-electron pairs through collisions with nuclei, sharply reducing the pressure of the star, then leading to gravitational collapse and a runaway fusion explosion that blows the star apart, scattering atomic materials for later generations of stars. These would be the suggested [[not yet definitively observed] "short" lifespan [[~ 1 MY] Population III stars. They are suggested to have been typical of the very first stars, starting about 400 MY after the initial singularity.)
18 --> Supernova remnant cores are believed to form neutron stars and – for stars that were about 20 – 40+ solar masses -- black holes. (Above that range, the star goes straight to a black hole without a supernova.)
19 --> Heavy elements formed in supernovas and ejected onto the inter-stellar medium then go into forming later generations of stars, which can form earthy, terrestrial planets. (These are the prime candidate sites for intelligent life in the cosmos. Cf. here on the Drake equation.)
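As flagged at point 7 above, here is a rough numerical sketch (illustrative only; it assumes the textbook scaling L ~ M^3.5 and a lifetime proportional to fuel over burn-rate, normalised to a 10 BY solar lifespan) of how luminosity and main-sequence lifespan vary with mass on the H-ball model:

```python
# Rough scaling sketch (illustrative only): main-sequence luminosity and lifetime.
# Assumes L/Lsun ~ (M/Msun)^n with n ~ 3.5 for sun-like stars, and that lifetime
# scales as fuel/burn-rate, i.e. t ~ 10 Gyr * (M/Msun) / (L/Lsun).
def luminosity_ratio(mass_ratio, n=3.5):
    return mass_ratio ** n

def main_sequence_lifetime_gyr(mass_ratio, n=3.5):
    return 10.0 * mass_ratio / luminosity_ratio(mass_ratio, n)

for m in (0.5, 1.0, 5.0, 20.0):
    print(f"M = {m:>4} Msun: L ~ {luminosity_ratio(m):.3g} Lsun, "
          f"t ~ {main_sequence_lifetime_gyr(m):.3g} Gyr")
```

On such scalings a 20-solar-mass star burns out in a few million years, while a half-solar-mass dwarf could last far longer than the conventional age of the cosmos; this is the logic behind reading cluster ages off main-sequence turn-offs.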
Such star lifespan models have a fairly good fit to the observed spectral class-luminosity patterns of stellar clusters (especially the observed "turn-off" branching of their HR diagrams from the main sequence heading for the giants bands, consistent with an age ranging up to ~ 1 - 10 BY [~ 4 BY for M67 in the linked] based on H-ball star models). Similarly, pulsars are believed to be rotating neutron stars, the companion star to Sirius A is thought to be a white dwarf, and Cygnus X-1 has been thought to be a possible black hole. Apparent solar system formation disks have also been observed.
The H-R diagram model therefore lends considerable plausibility to the cosmological timeline, as it fits into the 10 – 20 BY window that current models of cosmological origins point to. However, even though it is also plausible that as we look into the deep sky we are looking far back in time [[due to the presumed time for light to make the transit to us], we must recognise that the models, observations and measurements embed a ladder of hypotheses and inferences (e.g. to measure stellar and galactic distances), so our findings are not absolute truths.
(b) Galaxies, the wider observed cosmos and the fine-tuning issue
Up to the night when Galileo turned his telescope on the Milky Way -- a pale band of bluish, milky-white starlight (with visible dark bands in it) -- it had been thought that the earth, the solar system, the stars and the Milky Way were the main features of the universe. But Galileo's telescope revealed that the Milky Way was a band of faint stars. Then, as “spiral nebulae” were studied, Kant and others suggested that the Milky Way was a more or less spiral disk of ~ 2 * 10^11 stars: an "island universe." The spiral nebulae were eventually suggested to be other similarly vast star systems.
The gross structure of the visible universe is dominated by galaxies, which are often clustered. There are about 10^11 such galaxies. These range from ~ 10^7 to 10^12 stars, with star systems -- often multiple -- separated on the order of a parsec. Galaxies tend to be about 1 – 10 Mpc apart, and are typically 1,000 - 100,000 parsecs in diameter. Some are in collision one with the other. Most are elliptical, some are spiral, others are barred spiral (i.e. the central bulge is bar-like, not ball-like). Yet others are peculiar or irregular (including some that appear to be exploding).
Our own barred-spiral galaxy is about 30 kpc across and about 300 - 600 pc thick, with a central bulge over 1,000 pc thick, lying in the direction of the constellation Sagittarius as seen from earth. (Cf. also here.) That central bulge is thought to contain a super-massive black hole, the X-ray source Sagittarius A*.

The disk is at a sharp angle relative to both the celestial equator and the ecliptic (the path of the sun through the sky). The solar system is on the inner rim of a minor arm, the Orion arm (or, the local spur), and is orbiting in a nearly circular orbit of radius ~ 8 kpc, at about 220 km/s, with an orbital period of 225 - 250 My:
Studies of the orbital paths and speeds of stars also suggest that some 90% of the mass of a galaxy such as ours is in dark matter, of presently unknown composition. (Dark energy is suggested as an explanation for the observed accelerating expansion of the universe. Between the two, respectively 22% and 74% of the energy density of the cosmos is covered, leaving but 4% for "ordinary" matter. In turn, 90% of this 1/25th is believed to be intergalactic gases, leaving only 0.4% of the mass of the cosmos as stars etc. )
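A quick arithmetic check of that budget (using the rounded figures just quoted):

```python
# Quick arithmetic check of the mass-energy budget quoted above (illustrative).
dark_energy, dark_matter, ordinary = 0.74, 0.22, 0.04
stars = ordinary * (1 - 0.90)      # ~90% of ordinary matter is taken to be diffuse gas
print(f"dark energy + dark matter + ordinary = {dark_energy + dark_matter + ordinary:.2f}")
print(f"stars etc.: ~{stars:.1%} of the total")
```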
It was observed in the 1920's that the light from most galaxies is red-shifted, understood to mean that on the whole they are moving away from us. The further away a galaxy is, the faster the speed of separation. This fits in with predictions from the General Theory of Relativity, and has led to the model of an expanding universe where space itself is spreading apart, stretching out light and the separation of galaxies.

Where v is the speed of recession of a galaxy [estimated from the Doppler shift of light spectral lines: redwards for recession, hence red-shift]; where D is the separation of a remote galaxy from us in our galaxy; and where H0 is the Hubble parameter for a given “time” (it is not actually constant), we may plot the relationship:

v = H0*D

. . . in the appropriate units.
We can think of this (rather crudely) as our sitting in a speck on the surface of a balloon being blown up, and seeing other specks spreading away from us. Or, similarly, we can imagine ourselves as sitting on a raisin in a transparent loaf of bread being baked, and expanding as it bakes; seeing the other raisins spread away from us.
Thus, we come to the Big Bang theory, as originally proposed in the 1920's and as generally accepted since the discovery of 2.725 K cosmic microwave background black body radiation [[peaking at 160.2 GHz or 1.9 mm] in the 1960's:
Fig.G.5b: The Big Bang cosmological expansion. (Source: Wiki, GNU.)
[[NB: one common inference from the model is that the farther out one looks, the farther back in time towards the universe in its early days. So, our observational picture of the sky is not a snapshot of the cosmos at this point in its timeline, but a cross-section that looks more and more back in time as one goes out. So, the current "distance record" seen by the Hubble Space Telescope, 12 billion light years, is held to be a picture of events far back in the cosmos' past.]
Fig G.5(c): A model cosmological development 13.7 BY timeline, showing key events such as inflation [[cf. details here], the first stars, formation of galaxies, and the development of planets etc. It also shows the current inferred accelerating expansion thought to be due to dark energy. (Source: NASA public domain, via Wiki.)
Fig. G.5(d): A video simulation of the Big Bang model of cosmological origins, with a cosmos tour
A helpfully simple overview of the theory is:
. . . the universe, originally in an extremely hot and dense state that expanded rapidly, has since cooled by expanding to the present diluted state, and continues to expand today. Based on the best available measurements as of 2010, the original state of the universe existed around 13.7 billion years ago, which is often referred to as the time when the Big Bang occurred . . . .
Georges Lemaître proposed what became known as the Big Bang theory of the origin of the universe, although he called it his "hypothesis of the primeval atom". The framework for the model relies on Albert Einstein's general relativity and on simplifying assumptions (such as homogeneity and isotropy of space [[an expansion of the Copernican principle, so called: i.e. our view of the cosmos is assumed "typical," not special]). The governing equations had been formulated by Alexander Friedmann. After Edwin Hubble discovered in 1929 that the distances to far away galaxies were generally proportional to their redshifts, as suggested by Lemaître in 1927, this observation was taken to indicate that all very distant galaxies and clusters have an apparent velocity directly away from our vantage point: the farther away, the higher the apparent velocity.
If the distance between galaxy clusters is increasing today, everything must have been closer together in the past. This idea has been considered in detail back in time to extreme densities and temperatures, and large particle accelerators have been built to experiment on and test such conditions, resulting in significant confirmation of the theory, but these accelerators have limited capabilities to probe into such high energy regimes. Without any evidence associated with the earliest instant of the expansion, the Big Bang theory cannot and does not provide any explanation for such an initial condition; rather, it describes and explains the general evolution of the universe since that instant. The observed abundances of the light elements throughout the cosmos closely match the calculated predictions for the formation of these elements from nuclear processes in the rapidly expanding and cooling first minutes of the universe, as logically and quantitatively detailed . . . .
After the discovery of the cosmic microwave background radiation in 1964, and especially when its spectrum (i.e., the amount of radiation measured at each wavelength) was found to match that of thermal radiation from a black body, most scientists were fairly convinced by the evidence that some version of the Big Bang scenario must have occurred . . .
In short, the Big Bang theory starts from the moment of initial expansion from the initial singularity. Thus, it implies that our cosmos has a beginning, and is therefore contingent on whatever factors set up the circumstance where that initial expansion began. (Such factors are causal in nature. And, that a beginning suggests a "begin-ner," is one reason why some scientists initially resisted the Big Bang hypothesis.)

Running the Hubble expansion expression and associated models back-ways gives us the estimate that the observed universe began in a “singularity” -- effectively, a “point” -- 13.7 billion years ago, when D falls to zero. This number is taken to be the “age” of the observed universe; which, due to various related factors, is also taken to be 40-odd billion LY across -- as far as we can “see” or infer based on what we can “see.” (NB: That is the scope of the observed universe, which is not to be equated with all that there is, i.e. reality.)
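As a rough sketch of that "running backwards" logic (illustrative only; actual age estimates fold in the changing expansion rate rather than a bare 1/H0), take H0 ~ 70 km/s/Mpc:

```python
# Minimal sketch: the Hubble relation v = H0 * D, and the crude "Hubble time" 1/H0
# obtained by running the expansion backwards (H0 ~ 70 km/s/Mpc assumed for illustration).
H0 = 70.0                  # km/s per Mpc (illustrative round value)
KM_PER_MPC = 3.086e19
SECONDS_PER_YEAR = 3.156e7

def recession_speed_km_s(distance_mpc):
    return H0 * distance_mpc

hubble_time_years = (KM_PER_MPC / H0) / SECONDS_PER_YEAR
print(f"galaxy at 100 Mpc recedes at ~{recession_speed_km_s(100):.0f} km/s")
print(f"1/H0 ~ {hubble_time_years / 1e9:.1f} billion years")
```

The crude Hubble time 1/H0 lands near 14 billion years, in the same ballpark as the quoted 13.7 BY figure.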
It also turns out that big bang models involve several dozen parameters that must, in aggregate, hold to a fairly narrow cluster of values in order to yield a universe hospitable to the sort of cell-based life we represent. For instance, taking just five:
Parameter | Max. Deviation* | Estimated number of "required" bits |
---|---|---|
Ratio of Electrons:Protons | 1:10^37 | 123 |
Ratio of Electromagnetic Force:Gravity | 1:10^40 | 133 |
Expansion Rate of Universe | 1:10^55 | 183 |
Mass of Universe [1] | 1:10^59 | 196 |
Cosmological Constant | 1:10^120 | 399 |
TOTAL: | | 1,034 |

*These numbers represent the maximum deviation from the accepted values that would either prevent the universe from existing now, prevent it from having matter, or make it unsuitable for any atom-based form of life.
Table G.1: Degree of Fine-tuning of five key parameters of the observed cosmos. [[Adapted: Deem, R, of RTB. Ref 1 links onward to Prof Ed White of UCLA. NB: Recently, prof Don Page of U of Alberta has objected to the fine-tuning of the cosmological constant; but did so by proposing a similarly fine-tuned negative range: between zero and – LO ~ 3.5 * 10^- 122. Cf. response here. Cf. as well the introduction to the fine tuning cosmological design issue at the blog Uncommon Descent here, its footnotes and in particular technical survey by Luke Barnes here. The onward worldviews level discussion here on in context, will also be helpful for those interested in that broader context.)]
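The "required bits" column of Table G.1 is just the tolerance exponent converted into a base-2 information measure, bits ~ N*log2(10) for a 1-in-10^N tolerance. A short sketch reproducing the table's figures (parameter names as reconstructed above):

```python
# Minimal sketch: converting a "1 part in 10^N" tolerance into the table's bit counts,
# i.e. bits ~ log2(10^N) = N * log2(10).
import math

deviations = {                      # exponent N in the "1 : 10^N" column above
    "Ratio of Electrons:Protons": 37,
    "Ratio of Electromagnetic Force:Gravity": 40,
    "Expansion Rate of Universe": 55,
    "Mass of Universe": 59,
    "Cosmological Constant": 120,
}

total = 0
for name, n in deviations.items():
    bits = round(n * math.log2(10))
    total += bits
    print(f"{name:<40s} 1:10^{n:<4d} -> ~{bits} bits")
print(f"{'TOTAL':<40s} {'':8s} -> ~{total} bits")
```

Running this recovers 123, 133, 183, 196 and 399 bits respectively, totalling 1,034 bits as tabulated.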
In short, there are dozens of parameters that are sufficiently finely set to point to functionally specific, complex information within the system of physics required for a big bang world habitable to Carbon-chemistry, cell based life. On fair comment, then, our observed cosmos is fine-tuned in ways that set up habitable zones for such life in galaxies and solar systems. Thus, a secondary debate has developed on whether this points to a guiding hand, or if it can be put down to happy coincidence.
As Robin Collins put the case in summary, in a classic essay on The Fine-tuning Design Argument (1998):
Suppose we went on a mission to Mars, and found a domed structure in which everything was set up just right for life to exist. The temperature, for example, was set around 70 °F and the humidity was at 50%; moreover, there was an oxygen recycling system, an energy gathering system, and a whole system for the production of food. Put simply, the domed structure appeared to be a fully functioning biosphere. What conclusion would we draw from finding this structure? Would we draw the conclusion that it just happened to form by chance? Certainly not. Instead, we would unanimously conclude that it was designed by some intelligent being. Why would we draw this conclusion? Because an intelligent designer appears to be the only plausible explanation for the existence of the structure. That is, the only alternative explanation we can think of--that the structure was formed by some natural process--seems extremely unlikely. Of course, it is possible that, for example, through some volcanic eruption various metals and other compounds could have formed, and then separated out in just the right way to produce the "biosphere," but such a scenario strikes us as extraordinarily unlikely, thus making this alternative explanation unbelievable.
The universe is analogous to such a "biosphere," according to recent findings in physics . . . . Scientists call this extraordinary balancing of the parameters of physics and the initial conditions of the universe the "fine-tuning of the cosmos" . . . For example, theoretical physicist and popular science writer Paul Davies--whose early writings were not particularly sympathetic to theism--claims that with regard to basic structure of the universe, "the impression of design is overwhelming" (Davies, 1988, p. 203) . . .
A video summary by Robin Collins (NB: a low-res sample from a DVD product available from http://www.arn.org. This program was recorded at the "Intelligent Design and the Future of Science" conference held at Biola University, April 22-24, 2004): [[Cf. also here. Short summary here. Elsewhere, Collins notes how noted cosmologist Roger Penrose has estimated that "[[i]n order to produce a universe resembling the one in which we live, the Creator would have to aim for an absurdly tiny volume of the phase space of possible universes -- about 1/(10^(10^123)) of the entire volume . . ." That is, 1 divided by a number written as 1 followed by 10^123 zeros. By a long shot, there are not enough atoms in the observed universe [~10^80] to fully write out the fraction.]
This summary clip from Privileged Planet may also be helpful ( Amazon page):
Answers -- 101 level -- to some common (but weak) objections:
Since (as the above table shows) we are well beyond 1,000 bits of functional specificity, the origin of such fine tuning by forces of chance and mechanical necessity -- though logically possible -- is not a plausible default; unless one is willing to follow Lewontin and others, and a priori stipulate that science must presume materialism. Oddly, some would object that the fine-tuning is driven by a super-law of necessity that "sets the radio dials" for physics, failing to appreciate that this simply moves the fine-tuning up one level.
A more serious objection is the multiverse one, that in effect there is a quasi-infinite wider cosmos as a whole, in which our particular sub-cosmos has popped up more or less at random alongside countless others, and got lucky with its parameters. John Leslie's observation in response is telling, in his well known analogy of the fly on the wall:
. . . the need for such explanations [[for fine-tuning] does not depend on any estimate of how many universes would be observer-permitting, out of the entire field of possible universes. Claiming that our universe is ‘fine tuned for observers’, we base our claim on how life’s evolution would apparently have been rendered utterly impossible by comparatively minor [[emphasis original] alterations in physical force strengths, elementary particle masses and so forth. There is no need for us to ask whether very great alterations in these affairs would have rendered it fully possible once more, let alone whether physical worlds conforming to very different laws could have been observer-permitting without being in any way fine tuned. Here it can be useful to think of a fly on a wall, surrounded by an empty region. A bullet hits the fly. Two explanations suggest themselves. Perhaps many bullets are hitting the wall or perhaps a marksman fired the bullet. There is no need to ask whether distant areas of the wall, or other quite different walls, are covered with flies so that more or less any bullet striking there would have hit one. The important point is that the local area contains just the one fly.
[[Our Place in the Cosmos, 1998. The force of this point is deepened once we think about what has to be done to get a rifle into "tack-driving" condition. That is, a "tack-driving" rifle is a classic example of a finely tuned, complex system, i.e. we are back at the force of Collins' point on a multiverse model needing a well adjusted Cosmos bakery. (Slide show, ppt. "Simple" summary, doc.)]
Walter Bradley gives the wider context, by laying out some general "engineering requisites" for a life-habitable universe; design specifications, so to speak:
- Order to provide the stable environment that is conducive to the development of life, but with just enough chaotic behavior to provide a driving force for change.
- Sufficient chemical stability and elemental diversity to build the complex molecules necessary for essential life functions: processing energy, storing information, and replicating. A universe of just hydrogen and helium will not "work."
- Predictability in chemical reactions, allowing compounds to form from the various elements.
- A "universal connector," an element that is essential for the molecules of life. It must have the chemical property that permits it to react readily with almost all other elements, forming bonds that are stable, but not too stable, so disassembly is also possible. Carbon is the only element in our periodic chart that satisfies this requirement.
- A "universal solvent" in which the chemistry of life can unfold. Since chemical reactions are too slow in the solid state, and complex life would not likely be sustained as a gas, there is a need for a liquid element or compound that readily dissolves both the reactants and the reaction products essential to living systems: namely, a liquid with the properties of water. [[Added note: Water requires both hydrogen and oxygen.]
- A stable source of energy to sustain living systems in which there must be photons from the sun with sufficient energy to drive organic, chemical reactions, but not so energetic as to destroy organic molecules (as in the case of highly energetic ultraviolet radiation). [[Emphases added.]
Such requisites met in the context of a finely tuned observed cosmos plainly make design of the cosmos a plausible view, even if it is in the context of what has been termed a multiverse.
To bring out just one aspect, let us note that the three most common atoms in life are Carbon, Hydrogen and Oxygen. For instance, H and O make water, the three-atom universal solvent that is so adaptable to the needs of the living cell, and to making a terrestrial planet a good home for life.
As D. Halsmer, J. Asper, N. Roman, T. Todd observe of this wonder molecule:
Moreover, the authors also note how C, H and O just happen to be the fourth, first and third most abundant atoms in the cosmos, helium --the first noble gas -- being number two. This -- again on fundamental parameters and laws of our cosmos -- does not suggest a mere accident of happy coincidence:
To bring out just one aspect, let us note that the three most common atoms in life are Carbon, Hydrogen and Oxygen. For instance, H and O make water, the three-atom universal solvent that is so adaptable to the needs of the living cell, and to making a terrestrial planet a good home for life.
As D. Halsmer, J. Asper, N. Roman, T. Todd observe of this wonder molecule:
The remarkable properties of water are numerous. Its very high specific heat maintains relatively stable temperatures both in oceans and organisms. As a liquid, its thermal conductivity is four times any other common liquid, which makes it possible for cells to efficiently distribute heat. On the other hand, ice has a low thermal conductivity, making it a good thermal shield in high latitudes. A latent heat of fusion only surpassed by that of ammonia tends to keep water in liquid form and creates a natural thermostat at 0°C. Likewise, the highest latent heat of vaporization of any substance - more than five times the energy required to heat the same amount of water from 0°C-100°C - allows water vapor to store large amounts of heat in the atmosphere. This very high latent heat of vaporization is also vital biologically because at body temperature or above, the only way for a person to dissipate heat is to sweat it off.
Water's remarkable capabilities are definitely not only thermal. A high vapor tension allows air to hold more moisture, which enables precipitation. Water's great surface tension is necessary for good capillary effect for tall plants, and it allows soil to hold more water. Water's low viscosity makes it possible for blood to flow through small capillaries. A very well documented anomaly is that water expands into the solid state, which keeps ice on the surface of the oceans instead of accumulating on the ocean floor. Possibly the most important trait of water is its unrivaled solvency abilities, which allow it to transport great amounts of minerals to immobile organisms and also hold all of the contents of blood. It is also only mildly reactive, which keeps it from harmfully reacting as it dissolves substances. Recent research has revealed how water acts as an efficient lubricator in many biological systems from snails to human digestion. By itself, water is not very effective in this role, but it works well with certain additives, such as some glycoproteins. The sum of these traits makes water an ideal medium for life. Literally, every property of water is suited for supporting life. It is no wonder why liquid water is the first requirement in the search for extraterrestrial intelligence.
All these traits are contained in a simple molecule of only three atoms. One of the most difficult tasks for an engineer is to design for multiple criteria at once. ... Satisfying all these criteria in one simple design is an engineering marvel. Also, the design process goes very deep since many characteristics would necessarily be changed if one were to alter fundamental physical properties such as the strong nuclear force or the size of the electron. [["The Coherence of an Engineered World," International Journal of Design & Nature and Ecodynamics, Vol. 4(1):47-65 (2009). HT: ENV.]

In short, the elegantly simple water molecule is set to a finely balanced, life-facilitating operating point, based on fundamental forces and parameters of the cosmos -- forces that had to be built in from the formation of the cosmos itself. Such fine-tuning from the outset therefore strongly suggests a purpose to create life in the cosmos from its beginning.
Moreover, the authors also note how C, H and O just happen to be the fourth, first and third most abundant atoms in the cosmos, helium --the first noble gas -- being number two. This -- again on fundamental parameters and laws of our cosmos -- does not suggest a mere accident of happy coincidence:
The explanation has to do with fusion within stars. Early [[stellar, nuclear fusion] reactions start with hydrogen atoms and then produce deuterium (mass 2), tritium (mass 3), and alpha particles (mass 4), but no stable mass 5 exists. This limits the creation of heavy elements and was considered one of "God's mistakes" until further investigation. In actuality, the lack of a stable mass 5 necessitates bigger jumps of four which lead to carbon (mass 12) and oxygen (mass 16). Otherwise, the reactions would have climbed right up the periodic table in mass steps of one (until iron, which is the cutoff above which fusion requires energy rather than creating it). The process would have left oxygen and carbon no more abundant than any other element.
Sir Fred Hoyle also recognised this issue of the resonances that make C and O so abundant, and in balance [sensitive to within +/- 4%]. He therefore commented, bearing in mind the connecting link properties of Carbon:
From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of C-12 to the 7.12 MeV level in O-16. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has "monkeyed" with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [[F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16. Cited in Bradley, "Is There Scientific Evidence for the Existence of God? How the Recent Discoveries Support a Designed Universe." Emphasis added.]

This seems to have originally appeared as the conclusion to a talk given at Caltech in 1981 or thereabouts. Earlier in the talk, he elaborated on Carbon and the chemistry of life, especially enzymes:
The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn't so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn't give. The case of the enzymes is well known . . . If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it's easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem - the information problem . . . .

No wonder, in that same talk, Hoyle also added:
I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe. So try as I would, I couldn't convince myself that even the whole universe would be sufficient to find life by random processes - by what are called the blind forces of nature . . . . By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes . . . .
Now imagine yourself as a superintellect working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language supercalculating intellects use: Some supercalculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.
I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [["The Universe: Past and Present Reflections." Engineering and Science, November, 1981. pp. 8–12]
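Hoyle's combinatorial point can be checked with a two-line estimate (a rough illustration, not a biochemical model): a 200-link chain with 20 options per link has 20^200 possible sequences, dwarfing the ~10^80 atoms of the observed universe.

```python
# Quick check of Hoyle's combinatorial point (illustrative): a 200-link chain with
# 20 options per link has 20^200 possible sequences -- compare ~10^80 atoms in the
# observed universe.
import math

sequences_exponent = 200 * math.log10(20)        # 20^200 = 10^(200*log10(20))
print(f"20^200 ~ 10^{sequences_exponent:.0f}")   # ~10^260
print(f"ratio to 10^80 atoms: ~10^{sequences_exponent - 80:.0f}")
```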
Hugh Ross aptly uses the picture of tuning the resonance circuits of a radio:
As you tune your radio, there are certain frequencies where the circuit has just the right resonance and you lock onto a station. The internal structure of an atomic nucleus is something like that, with specific energy or resonance levels. If two nuclear fragments collide with a resulting energy that just matches a resonance level, they will tend to stick and form a stable nucleus. Behold! Cosmic alchemy will occur! In the carbon atom, the resonance just happens to match the combined energy of the beryllium atom and a colliding helium nucleus. Without it, there would be relatively few carbon atoms. Similarly, the internal details of the oxygen nucleus play a critical role. Oxygen can be formed by combining helium and carbon nuclei, but the corresponding resonance level in the oxygen nucleus is half a percent too low for the combination to stay together easily. Had the resonance level in the carbon been 4 percent lower, there would be essentially no carbon. Had that level in the oxygen been only half a percent higher, virtually all the carbon would have been converted to oxygen. Without that carbon abundance, neither you nor I would be here. [[Beyond the Cosmos (Colorado Springs, Colo.: NavPress Publishing Group, 1996), pg. 32. HT: IDEA.]
(If you wish to consult a more detailed, technical survey, I suggest here.)
The timeline for origins is a linked issue, and so we now turn to it:
The observed cosmos is often held to be about 10 – 20 billion years old, and our galaxy within it, about 12BY. Our solar system is estimated to be 4.5 – 5 BY, and the earth within it maybe 4.6 BY, with life on earth now being dated to 3.8 - 4.2 BYA.
Such estimates, however, are based on historic and current observations, techniques, measurements and models, projected far beyond the ~ 4 – 5,000 years of recorded history. In effect, the basic idea is that if a candle currently burns at a given rate, we can estimate the rate over time and if we also know the initial length, we can calculate how long it has been burning. But, by the nature of the case, such assumptions and estimates are hard to test and a certain measure of circularity can easily creep in.
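The candle analogy can be made concrete with a generic decay-clock sketch (illustrative only, not any particular laboratory's protocol); note how the answer rests on an assumed constant rate, a closed system, and a known or assumed starting composition:

```python
# Generic sketch of the decay-clock logic (not any particular laboratory protocol).
# Assumes: constant decay rate, a closed system, and zero (or known) initial daughter --
# the very assumptions whose testability is at issue in the critiques discussed below.
import math

def age_from_ratio(daughter_to_parent, half_life):
    """Age in the same time units as half_life, from the measured daughter/parent ratio."""
    decay_constant = math.log(2) / half_life
    return math.log(1.0 + daughter_to_parent) / decay_constant

# Illustration with U-238 -> Pb-206 (half-life ~4.47 billion years):
for ratio in (0.01, 0.1, 1.0):
    print(f"D/P = {ratio:<5} -> apparent age ~{age_from_ratio(ratio, 4.47e9) / 1e6:,.0f} million years")
```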
The “standard” modern timeline of the earth, was first based on applying age estimates to the standard sequence of sedimentary rock strata and fossil life correlations built up by geologists etc. (For instance, it was thought that deep sea sediments averaged perhaps 1 – 2 cm (= ½ - ¾ inches) of deposition per thousand years, with the lower end being initially favoured.) By about 1917, Barrell had critiqued this against radioactivity based age estimates. In the decades since, the following has become more or less a "standard" -- a perhaps inadvertently telling word -- framework:
Time | Events |
---|---|
13.73 +/- 0.12 BYA | Cosmic singularity, aka the big bang: the universe as we know it begins. Sometimes more broadly dated as 10 - 20 BYA. |
13.7 - 13.1 BYA | Formation of Hydrogen atoms, stars and galaxies; later, through the explosion of large stars, the heavy elements required for rocky planets form. |
5 - 4.6 BYA | Sun forms, then planet Earth; the Moon forms by collision with a Mars-sized planet in orbital lock with the Sun-Earth system. |
3.8 - 3.5 BYA | First, single-celled, life forms originate through chemical evolution, shortly after the major asteroid bombardment era ends. |
1,000 - 500 MYA | Multicellular life forms emerge after single-celled life forms transform the atmosphere to support oxygen-dependent life forms. Cambrian fossil life revolution. |
475 - 300 MYA | Plants, fishes, amphibians, reptiles; colonisation of land. |
250 - 200 MYA | Permian-Triassic mass extinction, after which reptiles that evolve into crocodilians, dinosaurs and birds emerge. The first mammals appear. |
200 - 65 MYA | Dinosaur age. At 65 MYA the Cretaceous-Tertiary mass extinction is triggered by a meteorite hitting the Gulf of Mexico. After this, mammals dominate the land. |
10 - 1.8 MYA | Apes, then primitive men emerge. |
130 - 100 TYA | Neandertals, then “modern” Homo sapiens sapiens, emerge. |
50 - 27 TYA | “Modern” men colonise the continents, and the Neandertals die out. |
15 - 10 TYA | Agriculture, settlement, domestication of animals, cereal crops, cities; if certain -- fairly speculative (and somewhat controversial) -- interpretations of stone age cave paintings are correct, in the band ~ 10 - 15 TYA (e.g. here, here and here), we have correlations to within some thousands of years between C-14 dates and general astronomical eras, in light of the proper motions of stars, the precession of the Earth's rotational axis in space [[a period of ~ 26,000 y], the resulting movement of the location of the Northern Hemisphere vernal equinox from Zodiacal constellation to constellation, and the specific shapes of constellations. |
4 - 5 TYA | Recorded history begins; chronologies fill out with details and become relatively precise c. 1 - 2000 BC, though of course there are gaps. From about 1,000 BC on, chronology is more or less continuous and probably reasonably reliable. |
2 TYA | Beginnings of the so-called "Common Era" (CE). |
0.6 - 0 TYA | The modern era, starting with the voyages of discovery. |
Table G.2: A “standard” timeline for the earth (Adapted: Wikipedia. [Those wishing to compare Bible-based timeline models of the past may wish to look here.])
Table G.3: A "simple" summary of the Geologic Column, with eras, main periods and conventional dates. (Source: US Parks Service)
Name | Location | Out of place layers | Description |
Qilian Shan | North / West China | Ordovician over Pliocene 505 million - 5.1 million | Ordovician strata is over Pliocene gravel with a valley filled with Pleistocene gravel [[NB: "If the Pliocene and Pleistocene material was gravel to start with, both would have tended to be pushed aside" during the proposed overthrusting episode. The source suggests: "The layers seem to have been laid down together, with gravel under solid rock."] |
Lewis Overthrust | Montana, USA | Precambrian over Cretaceous 644 million - 144 million | 350 miles and 15-30 miles wide and goes from Glacier National Park to Alberta, Canada. However there is a fault line.[[*] |
Franklin Mountains | Near El Paso, Texas, at West Crazy Cat Canyon | Ordovician over Cretaceous 450 million - 130 million | No physical evidence of an overthrust. |
The Glarus Overthrust | Near Schwanden, Switzerland | Permian - Jurassic - Eocene supposed to be Eocene - Jurassic - Permian | 21 miles long. An overthrust is assumed because the fossils are out of place |
Empire Mountains | Southern Arizona, USA | Permian over Cretaceous 286 million - 144 million | Contact is like gear meshing. Sliding would grind off lower formation's projections. |
Mythen Peak | The Alps | Cretaceous over Eocene 200 million - 60 million | Older rock allegedly pushed all the way from Africa |
Heart Mountain | Wyoming, USA | Paleozoic - Jurassic - Tertiary - Paleozoic supposed to be Tertiary - Jurassic - Paleozoic | Fossils in the wrong order "big time" |
Matterhorn | The Alps | Eocene - Triassic - Jurassic - Cretaceous; supposed to be Triassic - Jurassic - Cretaceous - Eocene | Alleged to have been thrusted 60 miles |
Table G.4: A list of anomalous geological rock layers, often explained on the key Lewis Overthrust case, but with challenges as noted, cf. here and here. (Source: Creationwiki, fair use.)
_________________
* Excerpting the main text, the source (a critique) adds: "In some places the Lewis Overthrust has a well defined line between the Precambrian and Cretaceous. These places lack the deformation and rubble that would result from an overthrust. Some places do have such deformation and there is a fault line which shows there is tectonic activity . . . " A possible case is conceded, but it is argued that alternatively, "the fault" results "from subsequent tectonic activity."
Science writer Richard Milton ("a man with nothing to lose," who was secretly approached by scientists and medical men with relevant concerns and evidence in the face of a climate of hostility) has given a simple, common-sense summary of the -- too often unacknowledged, or even militantly denied and hotly dismissed -- inherent challenges and limitations faced by dating methods and schemes that try to reconstruct the timeline of our planet's remote, unobservable deep past:
[[1 Untestability/ Circularity:] . . . the overwhelming majority of [[radioactive] dates could never be challenged or found to be flawed since there is no genuinely independent evidence that can contradict those dates . . . .
[[2 Ballpark thinking:] Any dating scientist who suggested looking outside of [[the standard] ballpark . . . would be looked on as a crackpot by his colleagues. More significantly, he would not be able to get any funding for his research . . . .
[[3 Intellectual phase-locking:] . . . all scientists make experimental errors that they have to correct. They naturally prefer to correct them in the direction of the currently accepted value thus giving an unconscious trend to measured values . . . . [[Emphasis original]
[[4 Conformity to consensus:] Take for example a rock sample from the late Cretaceous, a period which is universally believed to date from some 65 million years ago. Any dating scientist who obtained a date from the sample of, say, 10 million years or 150 million years, would not publish such a result because he or she will, quite sincerely, assume it was in error. On the other hand, any dating scientist who did obtain a date of 65 million years would hasten to publish it . . . [[Shattering the Myths of Darwinism (Park Street Press, 1997), pp. 50 – 51. Cf this critical review of the geo-dating game across the past 100+ years, here.]
Such problems point to a tendency to conform to a circular pattern of thought insulated from truly independent cross-checks against empirical facts.
The case of the dating of KNM-ER1470, as summarised by Lubenow, should give us all a sobering pause for thought:
A very popular myth is that the radioactive dating methods are an independent confirmation of the geologic time scale and the concept of human evolution. This myth includes the idea that the various dating methods are independent of one another and hence act as controls . . . Perhaps the best way to expose this myth for what it is—science fiction—is to present a case study of the dating of the East African KBS Tuff strata and the famous fossil KNM-ER 1470, as recorded in the scientific journals, especially the British journal Nature . . . .
The radiometric date of 2.61 m.y.a. for the KBS Tuff was established before skull 1470 was discovered. It was supported by faunal correlation, paleomagnetism, and fission-track dating. Up until that time, the fossils and the artifacts that had been found in association with the KBS Tuff were more or less compatible with that older date. It is entirely possible that if skull 1470 had never been found, the KBS Tuff would still be dated at 2.61 m.y.a. We would continue to be told that it was a “secure date” based on the precision of radiometric dating and the “independent” confirmation of other dating techniques that acted as controls. It was the shocking discovery of the [[then thought to be] morphologically modern skull 1470 [[which has subsequently been assigned to the Australopithecines on yet another re-interpretation], located well below the KBS Tuff, that precipitated the ten-year controversy.
What normally happens in a fossil discovery is that the fossils are discovered first. Then attempts are made to date the rock strata in which they are found. Under these conditions, a paleoanthropologist has a degree of control over the results. He is free to reject dates that do not fit the evolution scenario of the fossils. He is not even required to publish those “obviously anomalous” dates. The result is a very sanguine and misleading picture of the conformity of the human fossil record with the concept of human evolution. If, in many of these fossil sites the dates had been determined before the fossils had been discovered, evolutionists could not guarantee that the turbulent history of the dating of the KBS Tuff would not have been repeated many times.
The pigs won. In the ten-year controversy over the dating of one of the most important human fossils ever discovered, the pigs won. The pigs won over the elephants. The pigs won over K-Ar dating. The pigs won over 40Ar-39Ar dating. The pigs won over fission-track dating. They won over paleomagnetism. The pigs took it all. But in reality, it wasn’t the pigs that won. It was evolution that won. In the dating game, evolution always wins. [["The Dating Game." Appendix, Bones of Contention, Baker, 1992, pp. 247 - 266. Coloured emphasis added. (Cf. a critical review of the wider dating "game" here.)]
This case shows how not just stratigraphic but also radioactive-decay techniques such as Carbon-14 dating, Potassium-Argon dating and Rubidium-Strontium isochron dating fall under the same circularity challenges. For instance, Carbon-14 levels in the atmosphere are not in equilibrium. Similarly, as Creationists now often point out, presumably ancient samples of coal and the like, from far beyond the reasonable range of C-14 (~ 50,000 years), show detectable levels of the isotope when state-of-the-art mass spectrometer techniques are used. And Potassium-Argon dates for a large number of historically observed lava flows are grossly in error.
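To see why ~ 50,000 years is commonly treated as the outer practical limit for C-14, note that each 5,730-year half-life halves the remaining isotope, so after roughly nine half-lives only a fraction of a percent is left -- of the same order as typical contamination or instrument background levels. The simple Python sketch below makes the arithmetic concrete; the 0.2% "background" figure is an illustrative assumption, not a value taken from the sources above:

```python
# Minimal sketch: why ~50,000 years is commonly cited as the practical
# limit of radiocarbon dating. Assumes the conventional 5,730 y half-life
# and an ILLUSTRATIVE background/contamination level of 0.2% modern carbon.
import math

HALF_LIFE_C14 = 5730.0                      # years
DECAY_CONST = math.log(2) / HALF_LIFE_C14   # per year

def fraction_remaining(age_years: float) -> float:
    """Fraction of the original C-14 left after age_years of decay."""
    return math.exp(-DECAY_CONST * age_years)

def apparent_age(fraction: float) -> float:
    """Age implied by a measured C-14 fraction (relative to 'modern' carbon)."""
    return -math.log(fraction) / DECAY_CONST

print(f"Fraction left after 50,000 y: {fraction_remaining(50_000):.3%}")
# A sample carrying only 0.2% modern carbon -- whether residual isotope,
# contamination or instrument background -- reads as roughly this "age":
print(f"Apparent age at 0.2% modern carbon: {apparent_age(0.002):,.0f} y")
```

The sketch shows why trace C-14 readings near the method's floor are so hard to interpret: a genuinely old but slightly contaminated sample and a genuinely young one can return similar numbers.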
Going beyond this, a recent ICR summary critique by Brian Thomas raises the issue that many fossils are, in effect, of whole bodies or of evanescent phenomena like worm burrows; such fossils require rapid burial in a matrix of cement-like rock that in effect forms a mould before decay can act. Thus, he argues:
The fossil record is replete with evidence of soft parts, such as worm or clam bodies and burrows, as well as original soft tissues! Creatures with soft bodies or tissues would need to be fossilized within a shorter timeframe than it would take for them to decay. These fossils and other rock features have convinced mainstream geologists to reduce the amount of time involved when interpreting a single rock layer.

[[It may of course be argued in rebuttal that while local/regional fossil-forming events were in some cases rapid or even catastrophic [[e.g. trapping large shoals of fish in the act of swimming], overall there was time enough that the typical rule-of-thumb "average" sedimentation rate of 1 - 2 cm/thousand years still has rough validity; cf. the rough arithmetic sketch after this excerpt. But this is an argument, not an observation, and it faces Milton's circularity challenges.]
Many now recognize that each layer was borne of a high-energy watery event. During a geology field trip a number of years ago, one student asked the instructor, “If each of these layers formed rapidly, then where do the millions of years fit in?” The professor pointed to the contact line between an upper and lower layer and suggested that millions of years’ worth of sedimentary deposits must have accumulated…and then eroded away!
The earth’s surface today contains ruts, soil horizons, worm burrows, and plant roots. But the contacts between strata often look “razor sharp,”5 are very flat, and extend for many square miles. They show no evidence of long time periods between deposition . . . in one day in 1980 Mount St. Helens deposited hundreds of feet of “beautifully layered sediments,” demonstrating conclusively that brief but violent catastrophes can produce multiple flat layers.
If each fossil-filled layer formed rapidly, and if there is very little time between each layer, then rocks and fossils developed within a relatively short timeframe.
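As flagged in the bracketed note above, the rule-of-thumb "average" rate can be given rough arithmetic form. The Python sketch below is purely illustrative: the ~1,200 m column thickness is an assumed figure of the general order of the Grand Canyon's Paleozoic sequence, not a number drawn from Thomas:

```python
# Rough arithmetic sketch of the "average sedimentation rate" rule of thumb.
# The 1,200 m column thickness is an ILLUSTRATIVE assumption.
def implied_duration_years(thickness_m: float, rate_cm_per_ky: float) -> float:
    """Years needed to accumulate thickness_m at rate_cm_per_ky (cm per 1,000 y)."""
    thickness_cm = thickness_m * 100.0
    return (thickness_cm / rate_cm_per_ky) * 1_000.0

for rate in (1.0, 2.0):
    years = implied_duration_years(1200.0, rate)
    print(f"At {rate} cm per 1,000 y, 1,200 m of sediment implies ~{years/1e6:.0f} million years")
```

At 1 - 2 cm per thousand years, a kilometre-scale column implies on the order of 60 - 120 million years of accumulation -- which is the kind of inference Thomas challenges when he argues that individual layers formed rapidly and that little time separates them.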
A related issue is paleobiologist Mary Schweitzer's apparent discovery of soft tissues in dinosaur bones dated at c. 70 MYA -- an astonishing find:
Fig. G.6a: Preserved soft tissues, apparent cells and blood vessels in a T Rex fossil bone dated some 70 MYA. (Source: Smithsonian, fair use)
The discovery has left paleontologists scrambling to explain its features, as the notion that soft tissues and cells would be preserved more or less intact over 70 million years is deeply questionable given known decay processes. Some have said that the "fact" of preservation over 70 million years should lead us to revise our theories on the decay of tissues and cells; others have suggested that these are bacterial films, haematite nodules, or the like.
As this clip shows, some creationist critics -- understandably -- have simply overprinted the following news feature video with telling comments:
(Follow-up: Cf. also a second newscast here [[& comment here and here], with the ICR audio commentary here, and the more recent CMI comments here.)
Even the vaunted isochrons -- techniques that estimate the age of minerals or rocks by plotting a line showing how different concentrations of radioactive elements in different crystals have apparently decayed over time (the plot should move from an initially flat line to a sloping one, the observed slope being a measure of the inferred age) -- can reflect rock-source mixing and leaching in/out more than aging over time.
Fig. G.6b: The Isochron dating model. (IOSE.) [[NB: If the minerals sampled do not come from an initially homogeneous molten rock, so that (i) the inferred zero-age Parent to Independent isotope ratios [[P/I] of different sampled crystals in (or parts of) a rock are not on the same initial flat line, or (ii) if there is mixing of different source-zones of different composition, or (iii) if there is leaching in or out of Parent [[P], Independent [[I] or radioactive decay-created Daughter [[D] isotope atoms across time, the resulting “age” -- estimated from the current-time slope of D/I vs P/I -- may be unreliable. This seems to happen even with current-age line plots showing reasonably low scatter.]
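For reference, the age read off an isochron comes directly from the slope of the plotted line: under the closed-system, common-initial-ratio assumptions noted in the caption, D/I = (D/I)_0 + (P/I)·(e^(λt) − 1), so a slope m implies t = ln(1 + m)/λ. The Python sketch below works this through for a Rb-Sr system, using the conventional decay constant; the sample age is chosen to match the ~1,103 MY Cardenas figure cited below:

```python
# Minimal sketch of the isochron relation behind Fig. G.6b, for a Rb-Sr system:
#   D/I = (D/I)_0 + (P/I) * (exp(lambda*t) - 1)
# so the line's slope m gives t = ln(1 + m) / lambda -- PROVIDED the initial
# ratios really were uniform and the system stayed closed.
import math

LAMBDA_RB87 = 1.42e-11   # per year, conventional Rb-87 -> Sr-87 decay constant

def age_from_slope(slope: float) -> float:
    """Age (years) implied by an isochron slope, under the closed-system assumption."""
    return math.log(1.0 + slope) / LAMBDA_RB87

def slope_from_age(age_years: float) -> float:
    """Isochron slope expected after age_years of closed-system decay."""
    return math.expm1(LAMBDA_RB87 * age_years)

# Example: the slope corresponding to the ~1,103 MY Rb-Sr figure cited below.
m = slope_from_age(1.103e9)
print(f"Slope after 1,103 MY: {m:.5f}; recovered age: {age_from_slope(m)/1e6:,.0f} MY")
```

The caveat in the caption is the crucial point: the slope-to-age conversion is only as good as the assumptions of a common initial ratio and a closed system.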
For instance, a critic recently discussed the “Cardenas” basalts deep in the layers of the Grand Canyon, and recent flows that cascaded down the current walls of the same canyon after it formed:
Fig. G.7: A geological diagram of the Grand Canyon's Precambrian “Cardenas” basalts and lava cascades. (Source: Snelling of ICR, under fair use; cf. photograph here. A video discussion in the creationist frame similar to Snelling is here.)
Rb-Sr isochron dating (10 samples) of the former gives 1,103±66 MY. K-Ar dates for 15 samples range from 577±12 to 1,013±37 MY. A K-Ar isochron (14 samples) gives 516±30 MY. A Sm-Nd isochron (8 samples) weighs in at 1,588±170 MY. But most telling is the Rb-Sr isochron dating of a cascade lava flow that may well have been observed by Native Americans: 1,143±220 MY. Another critic notes how australite tektites at Port Campbell, thrown out by a meteoritic impact that, on the evidence of Aboriginal settlements and C-14 dates, is perhaps 5,700 – 7,300 years old, nonetheless give a K-Ar date of 610,000 Y and a fission-track date of 800,000 Y.
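For orientation, the standard K-Ar age equation, t = (1/λ) · ln[1 + (λ/λ_EC) · (40Ar*/40K)], shows why even a trace of inherited ("excess") argon in a recently cooled flow reads as a substantial apparent age. The Python sketch below uses the conventional K-40 decay constants; the 40Ar*/40K ratios are purely illustrative assumptions:

```python
# Minimal sketch of the standard K-Ar age equation:
#   t = (1/lambda) * ln(1 + (lambda/lambda_ec) * Ar40_rad/K40)
# Decay constants are the conventional values; the Ar/K ratios below are
# ILLUSTRATIVE only, to show how a little inherited ("excess") argon in a
# recent flow reads as a spuriously old apparent age.
import math

LAMBDA_TOTAL = 5.543e-10   # per year, total decay constant of K-40
LAMBDA_EC    = 0.581e-10   # per year, electron-capture branch (K-40 -> Ar-40)

def k_ar_apparent_age(ar40_per_k40: float) -> float:
    """Apparent K-Ar age (years) for a measured radiogenic 40Ar / 40K ratio."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_per_k40) / LAMBDA_TOTAL

# A freshly cooled lava "should" read zero; a trace of trapped argon does not:
for ratio in (0.0, 1e-5, 3e-5):
    print(f"40Ar*/40K = {ratio:.0e} -> apparent age ~ {k_ar_apparent_age(ratio):,.0f} y")
```

Inherited or "excess" argon is one proposed mechanism behind anomalously old K-Ar dates on historically observed flows; it does not by itself settle the Cardenas discordances above, but it does illustrate how sensitive the method is to its closed-system, zero-initial-argon assumptions.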
Nor is it just the “critics.” Writing in a leading Geology journal, Davidson, Charlier, Hora, and Perlroth recently observed:
The determination of accurate and precise isochron ages for igneous rocks requires that the initial isotope ratios of the analyzed minerals are identical at the time of eruption or emplacement [[i.e. they must come from a common initial “molten rock point,” as shown]. Studies of young volcanic rocks at the mineral scale have shown this assumption to be invalid in many instances. Variations in initial isotope ratios can result in erroneous or imprecise ages. Nevertheless, it is possible for initial isotope ratio variation to be obscured in a statistically acceptable isochron. Independent age determinations [[but Milton's four “reasoning in a circle” challenges undercut such “independence”] and critical appraisal of petrography are needed to evaluate isotope data. If initial isotope ratio variability can be demonstrated, however, it can be used to constrain petrogenetic pathways [[i.e. It is used to argue for models of rock origin]. [[Abstract, “Mineral isochrons and isotopic fingerprinting: Pitfalls and promises,” Geology, Vol. 33, No. 1, Jan. 2005, pp. 29–32. (Parentheses and emphases added.) ]
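Davidson et al.'s point about initial-ratio variation can be seen in a toy two-source mixing calculation: because both plotted quantities share the same denominator isotope, any blend of two end-members falls exactly on the straight line joining them, and a regression will read that mixing line as a statistically acceptable "isochron." The end-member compositions in the Python sketch below are hypothetical, chosen purely for illustration:

```python
# Minimal sketch, with HYPOTHETICAL end-member compositions, of how mixing
# two sources yields a straight line in an isochron plot (87Sr/86Sr vs
# 87Rb/86Sr) that can be mistaken for an age -- the "initial isotope ratio
# variation" pitfall Davidson et al. describe.
import math

LAMBDA_RB87 = 1.42e-11   # per year

# Hypothetical end-members: absolute amounts (arbitrary units) of each isotope.
SOURCE_A = {"Rb87": 0.5, "Sr87": 0.7040, "Sr86": 1.0}
SOURCE_B = {"Rb87": 0.8, "Sr87": 0.2859, "Sr86": 0.4}

def mixture_point(f: float) -> tuple[float, float]:
    """(87Rb/86Sr, 87Sr/86Sr) of a blend with mass fraction f of SOURCE_A."""
    rb = f * SOURCE_A["Rb87"] + (1 - f) * SOURCE_B["Rb87"]
    s7 = f * SOURCE_A["Sr87"] + (1 - f) * SOURCE_B["Sr87"]
    s6 = f * SOURCE_A["Sr86"] + (1 - f) * SOURCE_B["Sr86"]
    return rb / s6, s7 / s6

# "Samples" that are nothing but different blends of the two sources:
points = [mixture_point(f) for f in (0.1, 0.3, 0.5, 0.7, 0.9)]

# Ordinary least-squares slope of the resulting line of points.
n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(y for _, y in points) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
         / sum((x - mean_x) ** 2 for x, _ in points))

bogus_age = math.log(1.0 + slope) / LAMBDA_RB87
print(f"Mixing-line slope: {slope:.6f} -> spurious 'age': {bogus_age/1e6:,.0f} MY")
```

No radioactive decay at all is involved in generating these points, yet the fitted slope converts to an "age" of several hundred million years -- just the sort of pattern the abstract warns can be "obscured in a statistically acceptable isochron."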
As a result, while prehistoric dates are often quite confidently presented (especially to the general public) as though they are practically indisputable, they should be viewed with a modicum of caution and reserve in light of obvious limitations and weaknesses. Such concerns should therefore be borne in mind as we turn to the proposed processes of chemical and biological evolution that are commonly held to have happened on earth across the past 3.8 By or so.