\[
p(x) = \frac{\lambda^x}{x!}\, e^{-\lambda}, \qquad x = 0, 1, \ldots;\quad \lambda > 0.
\]
In this case both the mean value and the variance are easier to calculate,
\[
\mu = \sum_{x=0}^{\infty} x\, \frac{\lambda^x}{x!}\, e^{-\lambda}
    = \lambda e^{-\lambda} \sum_{x=1}^{\infty} \frac{\lambda^{x-1}}{(x-1)!}
    = \lambda,
\]
and the variance is $\sigma^2 = \lambda$. A typical application of the Poisson distribution is counting the number of $\alpha$-particles emitted by a radioactive source in a given time interval. In the limit $n \to \infty$ and for small probabilities $y$, the binomial distribution approaches the Poisson distribution. Setting $\lambda = ny$, with $y$ the probability for an event in the binomial distribution, we can show that
\[
\lim_{n\to\infty} \binom{n}{x} y^x (1-y)^{n-x} = \frac{\lambda^x}{x!}\, e^{-\lambda},
\]
see for example Refs. [63,64] for a proof.
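To make these relations concrete, here is a minimal numerical sketch, assuming Python with NumPy (the values of $\lambda$, $n$ and the sample size are illustrative choices, not taken from the text): it checks that the sample mean and variance of Poisson-distributed numbers both approach $\lambda$, and compares binomial probabilities with their Poisson limit for large $n$ and small $y = \lambda/n$.
\begin{verbatim}
import math
import numpy as np

# Sketch: check mu = sigma^2 = lambda for Poisson samples, and compare the
# binomial PMF with its Poisson limit for large n and small y = lambda / n.
# The values of lam, n and the sample size are illustrative assumptions.
rng = np.random.default_rng(1)
lam = 3.0

samples = rng.poisson(lam=lam, size=1_000_000)
print(samples.mean(), samples.var())      # both should be close to lam = 3

n = 10_000
y = lam / n                               # small single-event probability
for x in range(6):
    binom = math.comb(n, x) * y**x * (1.0 - y)**(n - x)
    poisson = lam**x / math.factorial(x) * math.exp(-lam)
    print(x, binom, poisson)              # the two columns nearly agree
\end{verbatim}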
11.2.1 Multivariable Expectation Values
An important quantity is the so-called covariance, a variant of the variance. Consider the set $\{X_i\}$ of $n$ stochastic variables (not necessarily uncorrelated) with the multivariate PDF $P(x_1,\dots,x_n)$. The covariance of two of the stochastic variables, $X_i$ and $X_j$, is defined as follows
\[
\mathrm{Cov}(X_i, X_j) \equiv \big\langle (x_i - \langle x_i\rangle)(x_j - \langle x_j\rangle) \big\rangle
= \int \cdots \int (x_i - \langle x_i\rangle)(x_j - \langle x_j\rangle)\, P(x_1,\dots,x_n)\, dx_1 \cdots dx_n, \tag{11.8}
\]
with
\[
\langle x_i\rangle = \int \cdots \int x_i\, P(x_1,\dots,x_n)\, dx_1 \cdots dx_n.
\]
If we consider the above covariance as a matrix $C_{ij} = \mathrm{Cov}(X_i, X_j)$, then the diagonal elements are just the familiar variances, $C_{ii} = \mathrm{Cov}(X_i, X_i) = \mathrm{Var}(X_i)$. It turns out that all the off-diagonal elements are zero if the stochastic variables are uncorrelated. This is easy to show, keeping in mind the linearity of the expectation value. Consider the stochastic variables $X_i$ and $X_j$ ($i \neq j$),
\begin{align*}
\mathrm{Cov}(X_i, X_j) &= \big\langle (x_i - \langle x_i\rangle)(x_j - \langle x_j\rangle) \big\rangle \\
&= \langle x_i x_j - x_i \langle x_j\rangle - \langle x_i\rangle x_j + \langle x_i\rangle \langle x_j\rangle \rangle \\
&= \langle x_i x_j\rangle - \langle x_i \langle x_j\rangle\rangle - \langle \langle x_i\rangle x_j\rangle + \langle \langle x_i\rangle \langle x_j\rangle\rangle \\
&= \langle x_i x_j\rangle - \langle x_i\rangle \langle x_j\rangle - \langle x_i\rangle \langle x_j\rangle + \langle x_i\rangle \langle x_j\rangle \\
&= \langle x_i x_j\rangle - \langle x_i\rangle \langle x_j\rangle.
\end{align*}
If $X_i$ and $X_j$ are independent, we get $\langle x_i x_j\rangle = \langle x_i\rangle \langle x_j\rangle$, resulting in $\mathrm{Cov}(X_i, X_j) = 0$ for $i \neq j$.
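As a quick numerical illustration of this property, the following sketch, assuming Python with NumPy (the distributions and sample size are illustrative choices), estimates the covariance matrix of a few independent stochastic variables: the diagonal reproduces the individual variances, while the off-diagonal elements come out close to zero.
\begin{verbatim}
import numpy as np

# Sketch: estimate the covariance matrix C_ij = Cov(X_i, X_j) from samples.
# The variables, parameters and sample size below are illustrative assumptions.
rng = np.random.default_rng(42)
n_samples = 100_000

# Three independent stochastic variables with different variances.
x1 = rng.normal(loc=0.0, scale=1.0, size=n_samples)   # Var = 1
x2 = rng.normal(loc=2.0, scale=2.0, size=n_samples)   # Var = 4
x3 = rng.poisson(lam=3.0, size=n_samples)             # Var = lambda = 3

samples = np.vstack([x1, x2, x3])
C = np.cov(samples)            # sample covariance matrix, rows = variables

print(C)
# Diagonal ~ (1, 4, 3); off-diagonal ~ 0 since the X_i are independent.
\end{verbatim}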
Also useful for us is the covariance of linear combinations of stochastic variables. Let $\{X_i\}$ and $\{Y_i\}$ be two sets of stochastic variables. Let also $\{a_i\}$ and $\{b_i\}$ be two sets of scalars. Consider the linear combinations
\[
U = \sum_i a_i X_i, \qquad V = \sum_j b_j Y_j.
\]
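The covariance of such linear combinations is bilinear, $\mathrm{Cov}(U,V) = \sum_{ij} a_i b_j\, \mathrm{Cov}(X_i, Y_j)$, a standard identity. The sketch below, again assuming Python with NumPy and with purely illustrative choices for the coefficients and the (deliberately correlated) samples, compares a direct sample estimate of $\mathrm{Cov}(U,V)$ with this expansion.
\begin{verbatim}
import numpy as np

# Sketch: compare a direct estimate of Cov(U, V) with the bilinear expansion
# sum_ij a_i b_j Cov(X_i, Y_j). All arrays below are illustrative assumptions.
rng = np.random.default_rng(7)
n_samples = 200_000

# Correlated X and Y: build them from a common set of latent variables.
z = rng.normal(size=(4, n_samples))
X = z[:2] + 0.5 * z[2:]          # two X_i, shape (2, n_samples)
Y = z[2:] - 0.3 * z[:2]          # two Y_j, shape (2, n_samples)

a = np.array([1.0, -2.0])        # scalars a_i (assumed values)
b = np.array([0.5, 3.0])         # scalars b_j (assumed values)

U = a @ X                        # U = sum_i a_i X_i, one sample per column
V = b @ Y                        # V = sum_j b_j Y_j

direct = np.cov(U, V)[0, 1]      # sample estimate of Cov(U, V)

# Cross-covariance block Cov(X_i, Y_j), then contract with a_i and b_j.
full = np.cov(np.vstack([X, Y]))     # 4x4 covariance of (X_1, X_2, Y_1, Y_2)
cross = full[:2, 2:]                 # block with Cov(X_i, Y_j)
expanded = a @ cross @ b

print(direct, expanded)          # the two numbers should agree closely
\end{verbatim}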