Pattern Recognition and Machine Learning

13.3. Linear Dynamical Systems 645

13.3.4 Particle filters


For dynamical systems that do not have a linear-Gaussian structure, for example if
they use a non-Gaussian emission density, we can turn to the sampling methods of
Chapter 11 in order to find a tractable inference algorithm. In particular, we can
apply the sampling-importance-resampling formalism of Section 11.1.5 to obtain a
sequential Monte Carlo algorithm known as the particle filter.
Consider the class of distributions represented by the graphical model in Figure 13.5,
and suppose we are given the observed values $\mathbf{X}_n = (\mathbf{x}_1, \ldots, \mathbf{x}_n)$
and we wish to draw $L$ samples from the posterior distribution $p(\mathbf{z}_n|\mathbf{X}_n)$.
Using Bayes' theorem, we have


$$
\begin{aligned}
\mathbb{E}[f(\mathbf{z}_n)] &= \int f(\mathbf{z}_n)\, p(\mathbf{z}_n|\mathbf{X}_n)\, \mathrm{d}\mathbf{z}_n \\
&= \int f(\mathbf{z}_n)\, p(\mathbf{z}_n|\mathbf{x}_n, \mathbf{X}_{n-1})\, \mathrm{d}\mathbf{z}_n \\
&= \frac{\int f(\mathbf{z}_n)\, p(\mathbf{x}_n|\mathbf{z}_n)\, p(\mathbf{z}_n|\mathbf{X}_{n-1})\, \mathrm{d}\mathbf{z}_n}{\int p(\mathbf{x}_n|\mathbf{z}_n)\, p(\mathbf{z}_n|\mathbf{X}_{n-1})\, \mathrm{d}\mathbf{z}_n} \\
&\simeq \sum_{l=1}^{L} w_n^{(l)} f(\mathbf{z}_n^{(l)})
\end{aligned}
\tag{13.117}
$$

where $\{\mathbf{z}_n^{(l)}\}$ is a set of samples drawn from $p(\mathbf{z}_n|\mathbf{X}_{n-1})$ and we have made use of
the conditional independence property $p(\mathbf{x}_n|\mathbf{z}_n, \mathbf{X}_{n-1}) = p(\mathbf{x}_n|\mathbf{z}_n)$, which follows
from the graph in Figure 13.5. The sampling weights $\{w_n^{(l)}\}$ are defined by

$$
w_n^{(l)} = \frac{p(\mathbf{x}_n|\mathbf{z}_n^{(l)})}{\sum_{m=1}^{L} p(\mathbf{x}_n|\mathbf{z}_n^{(m)})}
\tag{13.118}
$$

where the same samples are used in the numerator as in the denominator. Thus the
posterior distribution $p(\mathbf{z}_n|\mathbf{X}_n)$ is represented by the set of samples $\{\mathbf{z}_n^{(l)}\}$ together
with the corresponding weights $\{w_n^{(l)}\}$. Note that these weights satisfy $0 \leqslant w_n^{(l)} \leqslant 1$
and $\sum_l w_n^{(l)} = 1$.
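To make the weighted approximation of (13.117) and the weight definition (13.118) concrete, the following sketch applies them to a hypothetical one-dimensional model. The unit-Gaussian prediction density, the Gaussian emission density with standard deviation 0.5, and the observation value are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw L samples {z_n^(l)} from the prediction density p(z_n | X_{n-1}),
# here assumed (for illustration) to be a unit Gaussian.
L = 500
particles = rng.normal(loc=0.0, scale=1.0, size=L)

# Assumed emission density p(x_n | z_n): Gaussian with mean z_n, std 0.5.
def emission_likelihood(x, z, sigma=0.5):
    return np.exp(-0.5 * ((x - z) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

x_n = 0.3  # a hypothetical new observation

# Equation (13.118): weights proportional to the likelihood of x_n,
# normalized over the same set of samples.
likelihoods = emission_likelihood(x_n, particles)
weights = likelihoods / likelihoods.sum()

# The weights satisfy 0 <= w_n^(l) <= 1 and sum to one.
assert np.all((weights >= 0.0) & (weights <= 1.0))
assert np.isclose(weights.sum(), 1.0)

# Equation (13.117): E[f(z_n)] under p(z_n | X_n) approximated by the
# weighted sum, e.g. the posterior mean with f(z) = z.
posterior_mean = np.sum(weights * particles)
```

With these assumed densities the weighted posterior mean falls between the prior mean (0) and the observation (0.3), as expected from conjugate-Gaussian reasoning; the approximation error shrinks as $L$ grows.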
Because we wish to find a sequential sampling scheme, we shall suppose that
a set of samples and weights has been obtained at time step $n$, and that we have
subsequently observed the value of $\mathbf{x}_{n+1}$, and we wish to find the weights and
samples at time step $n+1$. We first sample from the distribution $p(\mathbf{z}_{n+1}|\mathbf{X}_n)$. This is