functional properties; there are many possible organisms other than this Mycoplasma that
would be sufficient to start life; and there are scenarios for the start of life that do not involve a
full-blown independent DNA-based organism coming about at random (see, e.g.,
Gesteland, Cech, and Atkins 2000). Thus, the actual probability is higher than one in
10⁵⁵¹⁵. However, the number gives one some idea of how difficult the life-production task
is. We still do not have a reasonably probable scientific explanation for the origin of life,
and so the possibility that a Paley-type argument will succeed is still open.
Second, in a surprising development, there are scientists and mathematicians, most
notably Michael Behe (1996) and William Dembski (1999a, 1999b), who question
whether Darwinian evolution can account for all biological mechanisms. Thus, Behe
argues that whatever the plausibility of Darwinism for explaining macroscopic features of
organisms, on the microscopic level we find biochemical complexity of such a degree
that it could not be expected to come about through natural selection. The problem is that
there are irreducibly complex systems: systems that benefit the organism only once all the
parts are properly in place. A system exhibiting irreducible complexity cannot be expected
to evolve gradually, step by step, through natural selection, since its incomplete intermediate
stages would confer no selective advantage. Behe has argued that the cilium, the immune
system, and the blood-clotting cascade exhibit irreducible complexity. These claims have
been challenged, and evolutionary pathways for at least some of these systems have been
proposed. At the moment, this dispute is not
resolvable and we must await future scientific breakthroughs. There is, however, at least
some chance that the Paley argument in almost its classical form may yet come back.
Instead of focusing on biological detail, many modern teleological arguers prefer to point
to the apparent fact that the laws of nature, and the various constants in them, are
precisely such as to allow for life (see, e.g., Leslie 1988). For instance, the universal law
of gravitation states that the force between two masses is equal to G times the product of
the masses divided by the square of the distance, where G is the gravitational constant
approximately equal to 6.672 × 10⁻¹¹ in the metric system. But although this constant
could, prima facie, have any other real number as its value, only a narrow range of values
of that constant would allow for, say, the formation of apparent prerequisites for life,
such as stars. Likewise, it is claimed that were the laws of nature themselves somewhat
different, life could not form.
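For concreteness, the law of gravitation appealed to above is standardly written (in SI units, which is what the figure given above is expressed in) as

\[
F = \frac{G\, m_1 m_2}{r^2}, \qquad G \approx 6.672 \times 10^{-11}\ \mathrm{N\,m^2\,kg^{-2}},
\]

where \(m_1\) and \(m_2\) are the two masses and \(r\) is the distance between them. The fine-tuning claim is then that only values of \(G\) within a narrow band around this one would permit prerequisites for life such as stable stars.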
Of course, it could be that the progress of science will unify all the laws of nature in a
way that exactly predicts the values of the constants, and in a way that will make it seem
“natural” that the laws and constants are as they are. However, this has not been done yet,
and we can only go by what we have right now. It is claimed that, right now, our only
good putative explanation of the laws and constants is design.
Gilbert Fulmer (2001) has replied that the discussions of the fine-tuning of the constants
in the laws of nature all presuppose that we are working in a range of values similar to
those that actually obtain, or at least that we are working with laws of nature generally
like ours. But how do we know that, once we look at the totality of all possible laws of
nature and the constants therein, we would not find that the majority of them are compatible
with life, albeit perhaps life of a significantly different sort than we find here? In reply to
this kind of argument, Leslie (1988) has used the analogy of a wasp on a wall. Imagine
we see that a wasp on a wall was hit by a dart. Around the wasp, there is a large clear
area with no wasps. We are justified in inferring that someone aimed the dart at the wasp
even if there are lots of wasps further away on the wall. To infer design, one does not