622 Chapter 21 Probability and statistics
This is the example shown in Figure 21.10. The error bars correspond to σ = 0.25, so that the linear least-squares fit in this case is
y = mx + c, where m = −0.1691 ± 0.0158, c = 3.2214 ± 0.1704
Exercises 26, 27
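As an illustration, the simple least-squares formulas (21.54)–(21.55) can be sketched in Python. The data below are hypothetical (the points of Figure 21.10 are not reproduced here); all the error bars are taken equal, σ = 0.25, as in the example above.

```python
import math

# Hypothetical data points (NOT those of Figure 21.10);
# every point has the same error bar, sigma = 0.25.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [3.1, 2.9, 2.7, 2.6, 2.3, 2.2]
sigma = 0.25
N = len(x)

# Averages used by the simple least-squares formulas.
mx = sum(x) / N                                   # <x>
my = sum(y) / N                                   # <y>
mxx = sum(xi * xi for xi in x) / N                # <x^2>
mxy = sum(xi * yi for xi, yi in zip(x, y)) / N    # <xy>

D = mxx - mx * mx            # <x^2> - <x>^2
m = (mxy - mx * my) / D      # slope
c = my - m * mx              # intercept

# Parameter uncertainties when every point has the same error sigma.
sigma_m = sigma / math.sqrt(N * D)
sigma_c = sigma * math.sqrt(mxx / (N * D))

print(f"m = {m:.4f} +/- {sigma_m:.4f}")
print(f"c = {c:.4f} +/- {sigma_c:.4f}")
```

The fit reports each parameter together with its standard deviation, in the same `value ± error` form quoted in the text.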
Chi-square fitting
When the (known) errors in the values of y are not all the same, with σ_i the error in y_i, the simple least-squares method is replaced by the more general weighted least-squares method:

χ² = ∑_{i=1}^{N} [y_i − f(x_i)]² / σ_i² = minimum    (21.57)
Each contribution to the sum is weighted by the factor 1/σ_i², and this has the effect of giving greater weight to the more precise values of y (those with the smaller values of σ_i). This situation arises, for example, when the precision with which the measurements can be made varies over the range of y values, or when the data points come from measurements on several different instruments. This fitting method reduces to simple least squares when all the σ_i are equal.
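The quantity χ² of equation (21.57) is straightforward to compute. The sketch below (function name and data are my own, for illustration) shows how a point with a small σ_i dominates the sum:

```python
def chi_square(x, y, sigma, f):
    """Weighted sum of squared residuals, equation (21.57):
    chi^2 = sum_i [y_i - f(x_i)]^2 / sigma_i^2."""
    return sum((yi - f(xi)) ** 2 / si ** 2
               for xi, yi, si in zip(x, y, sigma))

# Hypothetical data tested against the line y = 2x + 1.
x = [0.0, 1.0, 2.0]
y = [1.1, 2.9, 5.2]
line = lambda t: 2.0 * t + 1.0

equal = chi_square(x, y, [0.5, 0.5, 0.5], line)   # all sigmas equal
mixed = chi_square(x, y, [0.5, 0.1, 0.5], line)   # middle point more precise
print(equal, mixed)
```

With equal σ_i the result is just the ordinary sum of squared residuals divided by σ², so minimizing χ² reduces to simple least squares; with the more precise middle point, that point's residual contributes far more to χ².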
The minimization of χ² for a straight-line fit gives the same equations as before, (21.54) for m and c and (21.55) for σ_m and σ_c, except that the averages are replaced by weighted averages; for example,

ȳ = (1/N) ∑_{i=1}^{N} y_i   is replaced by   ȳ = (∑_{i=1}^{N} y_i/σ_i²) / (∑_{i=1}^{N} 1/σ_i²)    (21.58)

and, in equations (21.55),

σ²/N   is replaced by   1 / (∑_{i=1}^{N} 1/σ_i²)    (21.59)
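The substitutions above can be collected into a short weighted straight-line fit. This is a sketch under the stated replacements (the function name `weighted_line_fit` is my own): the plain averages become weighted averages with weights 1/σ_i², and σ²/N becomes 1/∑(1/σ_i²) in the parameter uncertainties.

```python
import math

def weighted_line_fit(x, y, sigma):
    """Straight-line fit y = m*x + c by weighted least squares.
    Plain averages are replaced by weighted averages with
    weights 1/sigma_i^2, following (21.58)-(21.59)."""
    w = [1.0 / s ** 2 for s in sigma]
    S = sum(w)                                        # sum of weights
    mx = sum(wi * xi for wi, xi in zip(w, x)) / S     # weighted <x>
    my = sum(wi * yi for wi, yi in zip(w, y)) / S     # weighted <y>
    mxx = sum(wi * xi * xi for wi, xi in zip(w, x)) / S
    mxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y)) / S
    D = mxx - mx * mx
    m = (mxy - mx * my) / D
    c = my - m * mx
    # sigma^2/N of the unweighted formulas becomes 1/S.
    sigma_m = math.sqrt((1.0 / S) / D)
    sigma_c = math.sqrt((1.0 / S) * mxx / D)
    return m, c, sigma_m, sigma_c

# Points lying exactly on y = 2x + 1 are recovered exactly,
# whatever the individual error bars.
m, c, sm, sc = weighted_line_fit([0.0, 1.0, 2.0, 3.0],
                                 [1.0, 3.0, 5.0, 7.0],
                                 [0.1, 0.2, 0.1, 0.3])
print(m, c, sm, sc)
```

When all the σ_i are equal the weights cancel from the averages and the routine reproduces the simple least-squares fit, as the text notes.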
Fits to other types of function are obtained in the same way, but lead to more complicated formulas. The method can also be generalized to more than two variables.
The justification of the use of equation (21.57) comes from a consideration of the normal distribution (21.39) for random errors. If the errors in y_i are random and if f(x_i) is the (unknown) true value of y when x = x_i, then

ρ(ε_i) = (1/(σ_i√(2π))) e^(−ε_i²/2σ_i²)    (21.60)

where ε_i = y_i − f(x_i) is the error in y_i.
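The connection between the density (21.60) and the χ² criterion can be checked numerically: the logarithm of the product of the densities over all the points is a constant (depending only on the σ_i) minus χ²/2, so maximizing the likelihood of the observed errors is the same as minimizing χ². The data and function names below are my own, for illustration.

```python
import math

def normal_pdf(eps, sigma):
    """Density (21.60) of a random error eps with standard deviation sigma."""
    return math.exp(-eps ** 2 / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def log_likelihood(x, y, sigma, f):
    """Log of the product of the error densities for the model f."""
    return sum(math.log(normal_pdf(yi - f(xi), si))
               for xi, yi, si in zip(x, y, sigma))

def chi_square(x, y, sigma, f):
    """Weighted sum of squared residuals, equation (21.57)."""
    return sum((yi - f(xi)) ** 2 / si ** 2
               for xi, yi, si in zip(x, y, sigma))

# Hypothetical data: log L = const - chi^2/2, where const depends only on the sigmas.
x = [0.0, 1.0, 2.0]
y = [1.1, 2.9, 5.2]
sigma = [0.5, 0.1, 0.5]
f = lambda t: 2.0 * t + 1.0

const = -sum(math.log(s * math.sqrt(2.0 * math.pi)) for s in sigma)
print(log_likelihood(x, y, sigma, f), const - 0.5 * chi_square(x, y, sigma, f))
```

The two printed numbers agree: the model-dependent part of the log-likelihood is exactly −χ²/2, which is why the weighted least-squares condition (21.57) is the maximum-likelihood choice for normally distributed errors.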