We have seen in Section 25.5 that the standard error of the mean is 1/√N times the sample standard deviation. The variance of the difference X̄_B − X̄_A is the sum of the variances of X̄_B and of X̄_A. Therefore, from Table 25.4 we obtain:
X̄_B − X̄_A = 59.4 − 54.2 = 5.2

VAR(X̄_B − X̄_A) = 1.81²/10 + 2.99²/10

and thus,

SE(X̄_B − X̄_A) = √(1.81² + 2.99²) / √10 = 1.105

Therefore, T = 5.2/1.105 = 4.7.
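The calculation above can be sketched in Python, using only the summary statistics quoted from Table 25.4 (means 54.2 and 59.4, sample standard deviations 1.81 and 2.99, with 10 observations per group):

```python
import math

# Summary statistics from Table 25.4
mean_A, sd_A, n_A = 54.2, 1.81, 10
mean_B, sd_B, n_B = 59.4, 2.99, 10

# Difference of the sample means (the "signal")
diff = mean_B - mean_A                       # 5.2

# Variance of the difference is the sum of the variances of the two means
var_diff = sd_A**2 / n_A + sd_B**2 / n_B
se_diff = math.sqrt(var_diff)                # the "noise", about 1.105

# The test statistic: signal-to-noise ratio
T = diff / se_diff                           # about 4.7

print(round(diff, 1), round(se_diff, 3), round(T, 1))
```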
Statistical theory teaches that, under the assumption that the population means of the two groups are the same (i.e. if H0 is true), the distribution of the variable T depends only on the sample size, not on the value of the common mean or on the population variance of the measurements, and thus can be tabulated independently of the particulars of any given experiment. This is the so-called Student's t-distribution. Using tables of the t-distribution, we can calculate the probability that a variable T calculated as above assumes a value greater than or equal to 4.7, the value obtained in our example, given that H0 is true. This probability is <0.0001. Thus, if H0 is true, the result obtained in our experiment is extremely unlikely, although not impossible. We are forced to choose between two possible explanations. One is that a very unlikely event occurred. The other is that the result of our experiment is not a fluke; rather, the difference μ_B − μ_A is a positive number, sufficiently large to make the observed outcome likely. We elect the latter explanation and reject H0 in favor of the alternative hypothesis H1.
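The tail probability itself can be computed rather than read from a table. As a sketch, assuming the usual two-sample degrees of freedom n_A + n_B − 2 = 18 (the text does not state this explicitly), the following stdlib-only code integrates the Student's t density numerically; a library routine such as SciPy's `t.sf` would normally be used instead:

```python
import math

def t_sf(x, df, steps=100_000):
    """P(T > x) for Student's t with df degrees of freedom,
    via trapezoidal integration of the density from x outward."""
    # Normalizing constant of the t density
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    def pdf(t):
        return c * (1 + t * t / df) ** (-(df + 1) / 2)
    upper = x + 50.0          # tail beyond this point is negligible
    h = (upper - x) / steps
    s = 0.5 * (pdf(x) + pdf(upper))
    for i in range(1, steps):
        s += pdf(x + i * h)
    return s * h

# Probability that T >= 4.7 under H0, with 18 degrees of freedom
p = t_sf(4.7, 18)
print(p)  # below 0.0001, consistent with the text
```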
The steps we have taken in the above example
are quite generic. They could be summarized as
follows:
Step 1: Describe a statistical model and identify
the variable measuring the effect of
interest.
Step 2: Define the statistical hypothesis to be
tested.
Step 3: Define the test statistic to be used for testing
H0. This test statistic is always the signal-
to-noise ratio.
Step 4: Perform the experiment and collect the
data.
Step 5: Calculate the value of the test statistic
based on the data.
Step 6: Calculate the probability, under the
assumption that H0 is true, that the test
statistic will assume a value equal to or
greater than the value obtained in the
experiment. If this probability is small
enough for you to decide that the value
obtained in the experiment is highly unli-
kely, declare the test statistically signifi-
cant and reject H0.
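Steps 3 through 6 can be sketched end to end. The individual measurements behind Table 25.4 are not listed in the text, so the two samples below are hypothetical, chosen only so that their means match the example (54.2 and 59.4); the critical value 1.734 is the tabulated one-sided 0.05 cutoff for Student's t with 18 degrees of freedom:

```python
import math

# Hypothetical raw data (illustrative only; means match the example)
group_A = [52.1, 55.3, 53.8, 54.9, 56.0, 53.2, 54.5, 55.1, 52.7, 54.4]
group_B = [58.2, 60.1, 57.9, 61.3, 59.4, 58.8, 60.5, 59.0, 61.8, 57.0]

def two_sample_t(a, b):
    """Steps 3 and 5: the signal-to-noise test statistic T."""
    n_a, n_b = len(a), len(b)
    mean_a, mean_b = sum(a) / n_a, sum(b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n_b - 1)
    se = math.sqrt(var_a / n_a + var_b / n_b)   # the "noise"
    return (mean_b - mean_a) / se               # the "signal" over the noise

# Step 6: compare against the tabulated critical value
T = two_sample_t(group_A, group_B)
significant = T >= 1.734
print(T, significant)
```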
Step 6 reflects the logic driving statistical inference. It is based on the expectation that if an event occurs in an experiment, it is not an unlikely event. The probability that the test statistic will assume a value as large as or larger than the value obtained in the experiment is called the significance probability of the test, or the p-value. In our example, the p-value was less than 0.0001. Most people would consider such a value extremely unlikely and declare the test statistically significant. The question of what values should be considered small enough to declare statistical significance is a matter of judgment. Over the years of statistical practice, the number 0.05 became the standard cutoff point. Any p-value smaller than 0.05 is considered significant, and any p-value greater than 0.05 is considered not significant. It should be emphasized, though, that this is an arbitrary value and that there is no real difference between a p-value of 0.049 and a p-value of 0.051; although, if one follows the cutoff rule of 0.05 to the letter, one will declare statistical significance in the former but not in the latter case. This is, of course, absurd. These two p-values should not lead the researcher to conclusions with such diametrically opposed consequences. A choice of any other cutoff value will lead to a similar situation if followed strictly. A good measure of common sense is always useful. There is, of course, no reason
328 CH25 STATISTICAL PRINCIPLES AND APPLICATION IN BIOPHARMACEUTICAL RESEARCH