16.4.2 Why replicate the experiment?

The effect of such local and extraneous influences on the results of the experiment is countered by replication. Suppose six sites are designated, three of which are treated by feeding and the other three left as controls. The category of each site is determined randomly. Before and after measurements of density are made at the same time of the year in all six areas. The biological question, “Does supplementary feeding affect density?”, is translated into a form reflecting the experimental design: is the difference in quail density between treatments (feeding versus not feeding) greater than the difference between sites (replicates) within treatments?
This is an appropriate experimental design in that the outcome provides an unambiguous test of the hypothesis. Its efficiency and precision could be increased in various ways, but its logic is right.
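As a minimal sketch of how the site-level analysis might run, assuming hypothetical densities and Python's scipy library (the text prescribes no particular software or test), the per-site change in density is the unit of analysis and sites, not individual counts, are the replicates:

from scipy import stats

# Hypothetical before/after quail densities (birds per ha) at each site.
fed     = {"F1": (3.1, 4.6), "F2": (2.8, 4.1), "F3": (3.4, 4.9)}
control = {"C1": (3.0, 3.2), "C2": (3.3, 3.1), "C3": (2.9, 3.3)}

# One number per site: the change in density over the experiment.
fed_change     = [after - before for before, after in fed.values()]
control_change = [after - before for before, after in control.values()]

# Two-sample t-test: is the between-treatment difference large relative
# to the site-to-site (replicate) variation within treatments?
t, p = stats.ttest_ind(fed_change, control_change)
print(f"t = {t:.2f}, p = {p:.3f}")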
The form of an experimental design is dictated by logic rather than by the special requirements of the arithmetic subsequently performed on the data. This is an immensely important point. If the manager has no intention of applying powerful methods of analysis to the data, that in no way sanctions shortcuts in the basic experimental design.
Another common fallacy is the belief that although a logically designed experiment is necessary for publication in a scientific journal, the manager need not bother with all that rigmarole if the only aim is to find out what is going on. The manager might then simply run an “empirical test” like the second or third example given above without realizing that the measurements do not reveal what is going on.

Suppose we wish to determine whether grazing by deer affects the density of a species
of grass. The experimental treatment is grazing by a fixed density of deer and the
experimental control is an absence of such grazing. We cannot simply apply the two
treatments each to a single area because no two areas are precisely the same. We
would not know whether the measured difference in plant density was attributable
to the difference in treatment or whether it reflected some intrinsic difference
between the two areas. There will always be a measurable difference between areas
in the density of any species whatever one does, or does not do, to those areas.
We can postulate that a difference between treated areas is caused by the disparate
treatments applied to them only when the difference between the treatments is
appreciably greater than the difference within treatments. To determine the scale of
variation within the “population” of treatments we must look at a sample of areas
that have received the same treatment. The minimum size of a sample is two. Thus,
we must designate at least two areas as grazing treatments and two as controls. A
sample of three is better.
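A toy sketch of the two design requirements just described, randomization and replication, assuming Python and invented area labels; the degrees-of-freedom arithmetic in the comments is why a single replicate per treatment is not enough:

import random

# Hypothetical area labels; assignment to treatments must be random.
areas = ["A", "B", "C", "D", "E", "F"]
random.shuffle(areas)
grazed, ungrazed = areas[:3], areas[3:]
print("grazed:", grazed, " ungrazed:", ungrazed)

# With k treatments and n replicate areas per treatment, the estimate of
# within-treatment variation has k * (n - 1) degrees of freedom.
# n = 1 gives 0 df, i.e. no estimate at all, so two replicates per
# treatment is the logical minimum and three, as above, is better.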
The density of a plant species is usually measured within small quadrats scattered over a treatment area. Fifty might be measured in each. Those 50 quadrats are not replicates. They are subsamples of a single treatment, and their invalid use as “replicates” is called pseudo-replication. Sampling within a treatment is not treatment replication. Data from such subsamples could be fed into an analysis of variance, which would then provide what might appear to be a rigorous test of the hypothesis, but that is an illusion. The arithmetic procedures have been fulfilled but the logic is not satisfied. The result is actually a test of whether the combination of the treatment and the intrinsic characteristics of a single area differs from another treatment combined with the intrinsic characteristics of another area. We say that area and treatment are confounded. Their individual effects cannot be disentangled.
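To see the trap numerically, here is a hedged sketch with invented quadrat counts, assuming Python with numpy and scipy; only the design (50 quadrats per area, areas as the true replicates) comes from the text:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two grazed and two ungrazed areas, 50 hypothetical quadrat counts each.
grazed   = [rng.poisson(8, 50), rng.poisson(9, 50)]
ungrazed = [rng.poisson(10, 50), rng.poisson(12, 50)]

# Pseudo-replicated (invalid): pool all quadrats, pretending n = 100.
t_bad, p_bad = stats.ttest_ind(np.concatenate(grazed),
                               np.concatenate(ungrazed))

# Valid: collapse each area to one mean, so n = 2 areas per treatment.
t_ok, p_ok = stats.ttest_ind([g.mean() for g in grazed],
                             [u.mean() for u in ungrazed])

print(f"quadrats as replicates: p = {p_bad:.4f}")  # spuriously precise
print(f"areas as replicates:    p = {p_ok:.4f}")   # the honest test

The second p value is the only one the design can support; with just two areas per treatment it will usually be large, which is exactly the point the text is making.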
