STATISTICS


By comparing this result with that given towards the end of subsection 31.5.4, we see that,
as we might expect, the Bayesian and classical confidence intervals differ somewhat.


The above discussion is generalised straightforwardly to the estimation of several parameters $a_1, a_2, \ldots, a_M$ simultaneously. The elements of the inverse of the covariance matrix of the ML estimators can be approximated by


\[
(V^{-1})_{ij} = -\left.\frac{\partial^2 \ln L}{\partial a_i\,\partial a_j}\right|_{\mathbf{a}=\hat{\mathbf{a}}}. \qquad (31.86)
\]


From (31.36), we see that (at least for unbiased estimators) the expectation value of (31.86) is equal to the element $F_{ij}$ of the Fisher matrix.
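As a rough numerical illustration of (31.86), the sketch below builds the log-likelihood of a Gaussian sample (using the ten values quoted in the worked example later in this section), evaluates its Hessian at the ML point by central finite differences, and inverts the negative Hessian to approximate the covariance matrix $V$. This is a minimal sketch, not the text's own calculation; the function name log_L and the step size h are illustrative choices.

```python
import numpy as np

# Ten sample values from the worked example later in this section.
x = np.array([2.22, 2.56, 1.07, 0.24, 0.18, 0.95, 0.73, -0.79, 2.09, 1.81])
N = len(x)

def log_L(a):
    """Gaussian log-likelihood ln L(x; mu, sigma), cf. (31.87) below."""
    mu, sigma = a
    return -0.5 * N * np.log(2 * np.pi * sigma**2) - np.sum((x - mu)**2) / (2 * sigma**2)

# ML estimates: mu_hat = sample mean, sigma_hat = rms deviation about it.
a_hat = np.array([x.mean(), np.sqrt(np.mean((x - x.mean())**2))])

# Central-difference Hessian of ln L at a = a_hat.
h = 1e-4
H = np.empty((2, 2))
for i in range(2):
    for j in range(2):
        ei = np.zeros(2); ei[i] = h
        ej = np.zeros(2); ej[j] = h
        H[i, j] = (log_L(a_hat + ei + ej) - log_L(a_hat + ei - ej)
                   - log_L(a_hat - ei + ej) + log_L(a_hat - ei - ej)) / (4 * h**2)

V = np.linalg.inv(-H)   # (V^{-1})_ij = -d^2 ln L / (da_i da_j) at a = a_hat, eq. (31.86)
print("approximate covariance matrix of (mu_hat, sigma_hat):\n", V)
```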


The construction of a multi-dimensional Bayesian confidence region is also straightforward. For a given confidence level $1-\alpha$ (say), it is most common to construct the confidence region as the $M$-dimensional region $R$ in $\mathbf{a}$-space, bounded by the 'surface' $L(\mathbf{x};\mathbf{a}) = \text{constant}$, for which
\[
\int_R L(\mathbf{x};\mathbf{a})\,d^M\mathbf{a} = 1-\alpha,
\]
where it is assumed that $L(\mathbf{x};\mathbf{a})$ is normalised to unit volume.
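A minimal numerical sketch of this construction, assuming the two-parameter Gaussian likelihood quoted later as (31.87) and the data of the worked example below: the likelihood is normalised to unit volume on a $(\mu,\sigma)$ grid, and the bounding level $L = \text{constant}$ is found by accumulating probability from the highest likelihood values downwards until $1-\alpha$ is enclosed. The grid ranges and resolution are arbitrary illustrative choices.

```python
import numpy as np

# Sample values and confidence level.
x = np.array([2.22, 2.56, 1.07, 0.24, 0.18, 0.95, 0.73, -0.79, 2.09, 1.81])
N, alpha = len(x), 0.05

# Evaluate L(x; mu, sigma) on a grid and normalise it to unit volume.
mu = np.linspace(-1.0, 3.5, 400)
sigma = np.linspace(0.3, 4.0, 400)
MU, SIG = np.meshgrid(mu, sigma, indexing="ij")
lnL = -0.5 * N * np.log(2 * np.pi * SIG**2) - ((x - MU[..., None])**2).sum(-1) / (2 * SIG**2)
L = np.exp(lnL - lnL.max())
dA = (mu[1] - mu[0]) * (sigma[1] - sigma[0])
L /= L.sum() * dA                       # integral of L over the grid is now 1

# Find the level lam for which the region R = {L >= lam} encloses probability 1 - alpha.
flat = np.sort(L.ravel())[::-1]
cum = np.cumsum(flat) * dA
lam = flat[np.searchsorted(cum, 1 - alpha)]
inside = L >= lam
print("bounding level:", lam, "  enclosed probability:", L[inside].sum() * dA)
```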


Moreover, we see from (31.83) that (assuming a uniform prior probability) we may obtain the marginal posterior distribution for any parameter $a_i$ simply by integrating the likelihood function $L(\mathbf{x};\mathbf{a})$ over the other parameters:
\[
P(a_i|\mathbf{x},H) = \int\!\cdots\!\int L(\mathbf{x};\mathbf{a})\,da_1\cdots da_{i-1}\,da_{i+1}\cdots da_M.
\]

Here the integral extends over all possible values of the parameters, and again it is assumed that the likelihood function is normalised in such a way that $\int L(\mathbf{x};\mathbf{a})\,d^M\mathbf{a} = 1$. This marginal distribution can then be used as above to determine Bayesian confidence intervals on each $a_i$ separately.
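For a central interval at confidence level $1-\alpha$, for instance, the limits $a_-$ and $a_+$ on $a_i$ (symbols introduced here only for illustration) would be chosen so that a probability $\alpha/2$ lies in each tail of the marginal distribution:
\[
\int_{-\infty}^{a_-} P(a_i|\mathbf{x},H)\,da_i \;=\; \frac{\alpha}{2} \;=\; \int_{a_+}^{\infty} P(a_i|\mathbf{x},H)\,da_i .
\]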


Ten independent sample values $x_i$, $i = 1, 2, \ldots, 10$, are drawn at random from a Gaussian distribution with unknown mean $\mu$ and standard deviation $\sigma$. The sample values are as follows (to two decimal places):

2.22  2.56  1.07  0.24  0.18  0.95  0.73  −0.79  2.09  1.81

Find the Bayesian 95% central confidence intervals on $\mu$ and $\sigma$ separately.

The likelihood function in this case is


\[
L(\mathbf{x};\mu,\sigma) = (2\pi\sigma^2)^{-N/2}\exp\!\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{N}(x_i-\mu)^2\right]. \qquad (31.87)
\]


Assuming uniform priors on $\mu$ and $\sigma$ (over their natural ranges of $-\infty \to \infty$ and $0 \to \infty$ respectively), we may identify this likelihood function with the posterior probability, as in (31.83). Thus, the marginal posterior distribution on $\mu$ is given by


\[
P(\mu|\mathbf{x},H) \propto \int_0^{\infty} \frac{1}{\sigma^N}\exp\!\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{N}(x_i-\mu)^2\right] d\sigma.
\]
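This integral over $\sigma$, and the analogous integral over $\mu$ that gives the marginal posterior on $\sigma$, can be evaluated numerically. The sketch below does so on a grid for the ten sample values above and reads the 95% central intervals off the cumulative marginal distributions; the grid ranges and the helper central_interval are illustrative choices rather than anything from the text.

```python
import numpy as np

# Sample values from the example.
x = np.array([2.22, 2.56, 1.07, 0.24, 0.18, 0.95, 0.73, -0.79, 2.09, 1.81])
N = len(x)
xbar = x.mean()
S = np.sum((x - xbar)**2)        # sum_i (x_i - mu)^2 = N*(mu - xbar)^2 + S

# Grids chosen wide enough that the posteriors are negligible at the edges.
mu = np.linspace(-1.0, 3.5, 1200)
sigma = np.linspace(0.3, 4.0, 1200)
dmu, dsig = mu[1] - mu[0], sigma[1] - sigma[0]
MU, SIG = np.meshgrid(mu, sigma, indexing="ij")

# Posterior with uniform priors, proportional to the likelihood (31.87).
lnL = -N * np.log(SIG) - (N * (MU - xbar)**2 + S) / (2 * SIG**2)
L = np.exp(lnL - lnL.max())

# Marginal posteriors: integrate out the other parameter, then normalise to unit area.
P_mu = L.sum(axis=1) * dsig
P_mu /= P_mu.sum() * dmu
P_sig = L.sum(axis=0) * dmu
P_sig /= P_sig.sum() * dsig

def central_interval(grid, pdf, spacing, alpha=0.05):
    """Limits leaving probability alpha/2 in each tail of the marginal posterior."""
    cdf = np.cumsum(pdf) * spacing
    return grid[np.searchsorted(cdf, alpha / 2)], grid[np.searchsorted(cdf, 1 - alpha / 2)]

print("95% central interval on mu:   ", central_interval(mu, P_mu, dmu))
print("95% central interval on sigma:", central_interval(sigma, P_sig, dsig))
```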