

By the linearity of the expectation value, it can be shown [66] that


\[
\mathrm{Cov}(U,V) = \sum_{i,j} a_i b_j\,\mathrm{Cov}(X_i,Y_j).
\]
Now, since the variance is just $\mathrm{Var}(X_i) = \mathrm{Cov}(X_i,X_i)$, we get the variance of the linear combination $U = \sum_i a_i X_i$,
\[
\mathrm{Var}(U) = \sum_{i,j} a_i a_j\,\mathrm{Cov}(X_i,X_j). \tag{11.9}
\]

And in the special case when the stochastic variables are uncorrelated, the off-diagonal elements of the covariance are, as we know, zero, resulting in
\[
\mathrm{Var}(U) = \sum_i a_i^2\,\mathrm{Cov}(X_i,X_i) = \sum_i a_i^2\,\mathrm{Var}(X_i),
\]
\[
\mathrm{Var}\Big(\sum_i a_i X_i\Big) = \sum_i a_i^2\,\mathrm{Var}(X_i),
\]
which will become very useful in our study of the error in the mean value of a set of measurements.
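To make the uncorrelated case concrete, here is a minimal numerical check in Python (an illustration added for this discussion, not one of the text's programs; the coefficients $a_i$, the standard deviations and the sample size are arbitrary choices). It draws independent Gaussian variables $X_i$, forms $U = \sum_i a_i X_i$, and compares the sampled variance of $U$ with $\sum_i a_i^2\,\mathrm{Var}(X_i)$.

```python
import numpy as np

rng = np.random.default_rng(42)

# Arbitrary coefficients a_i and standard deviations (illustrative choices)
a = np.array([0.5, -1.0, 2.0])
sigma = np.array([1.0, 0.3, 0.7])

n_samples = 1_000_000
# Independent (hence uncorrelated) Gaussian variables X_i, one per column
X = rng.normal(loc=0.0, scale=sigma, size=(n_samples, 3))

# Linear combination U = sum_i a_i X_i, evaluated for every sample
U = X @ a

var_sampled = U.var()
var_formula = np.sum(a**2 * sigma**2)   # sum_i a_i^2 Var(X_i)

print(f"sampled Var(U) = {var_sampled:.4f}")
print(f"formula Var(U) = {var_formula:.4f}")
```

The two numbers agree to within the statistical fluctuations of the finite sample, as the formula for uncorrelated variables predicts.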
Now that we have constructed an idealized mathematical framework, let us try to apply it to empirical observations. Examples of relevant physical phenomena may be spontaneous decays of nuclei, or a purely mathematical set of numbers produced by some deterministic mechanism. It is the latter we will deal with, using so-called pseudo-random number generators. In general our observations will contain only a limited set of observables. We remind the reader that a \emph{stochastic process} is a process that produces sequentially a chain of values
\[
\{x_1, x_2, \dots, x_k, \dots\}.
\]
We will call these values our \emph{measurements} and the entire set our measured \emph{sample}. The action of measuring all the elements of a sample we will call a \emph{stochastic experiment} (since, operationally, they are often associated with results of empirical observation of some physical or mathematical phenomena; precisely an experiment). We assume that these values are distributed according to some PDF $p_X(x)$, where $X$ is just the formal symbol for the stochastic variable whose PDF is $p_X(x)$. Instead of trying to determine the full distribution $p$ we are often only interested in finding the few lowest moments, like the mean $\mu_X$ and the variance $\sigma_X$.
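As a minimal sketch of such a deterministic mechanism (an illustration only; the constants shown are the classic Park--Miller "minimal standard" choice, not necessarily the generator used elsewhere in the text), a linear congruential generator produces exactly such a chain of values:

```python
def lcg(seed, n, a=16807, m=2**31 - 1):
    """Park-Miller linear congruential generator: x_{k+1} = (a * x_k) mod m.

    Yields n pseudo-random numbers mapped to the interval [0, 1).
    """
    x = seed
    for _ in range(n):
        x = (a * x) % m
        yield x / m

# A deterministic "chain of values" {x_1, x_2, ..., x_k, ...}
sample = list(lcg(seed=12345, n=5))
print(sample)
```

The same seed always reproduces the same chain, which is what makes the sequence deterministic even though it passes for random in practice.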
In practical situations, however, a sample is always of finite size. Let that size be $n$. The expectation value of a sample $\alpha$, the \emph{sample mean}, is then defined as follows:
\[
\langle x_\alpha \rangle \equiv \frac{1}{n}\sum_{k=1}^{n} x_{\alpha,k}.
\]

The \emph{sample variance} is
\[
\mathrm{Var}(x) \equiv \frac{1}{n}\sum_{k=1}^{n} \left(x_{\alpha,k} - \langle x_\alpha \rangle\right)^2,
\]
with its square root being the \emph{standard deviation of the sample}.
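A short Python sketch of these two estimators (again an illustration, not a program from the text; the uniform PDF and the sample size are arbitrary choices) shows the $1/n$ normalization used in the definitions above:

```python
import numpy as np

rng = np.random.default_rng(0)

# One "stochastic experiment": n measurements drawn from some PDF
# (a uniform PDF on [0, 1) is used purely as an illustration)
n = 10_000
x = rng.uniform(0.0, 1.0, size=n)

sample_mean = x.sum() / n                          # <x_alpha> = (1/n) sum_k x_{alpha,k}
sample_var = ((x - sample_mean) ** 2).sum() / n    # 1/n normalization, as defined above
sample_std = np.sqrt(sample_var)                   # standard deviation of the sample

print(f"sample mean     = {sample_mean:.4f}  (exact mean 1/2)")
print(f"sample variance = {sample_var:.4f}  (exact variance 1/12)")
print(f"sample std      = {sample_std:.4f}")
```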
You can think of the above observables as a set of quantities which define a given experiment. This experiment is then repeated several times, say $m$ times. The total average is then


\[
\langle X_m \rangle = \frac{1}{m}\sum_{\alpha=1}^{m} \langle x_\alpha \rangle
= \frac{1}{mn}\sum_{\alpha,k} x_{\alpha,k}, \tag{11.10}
\]
where the last sum runs over $\alpha = 1,\dots,m$ and $k = 1,\dots,n$. The total variance is
