
straightforward since, again using Bayes' theorem,

$$
\begin{aligned}
p(\mathbf{z}_{n+1}|\mathbf{X}_n) &= \int p(\mathbf{z}_{n+1}|\mathbf{z}_n,\mathbf{X}_n)\,p(\mathbf{z}_n|\mathbf{X}_n)\,\mathrm{d}\mathbf{z}_n \\
&= \int p(\mathbf{z}_{n+1}|\mathbf{z}_n)\,p(\mathbf{z}_n|\mathbf{X}_n)\,\mathrm{d}\mathbf{z}_n \\
&= \int p(\mathbf{z}_{n+1}|\mathbf{z}_n)\,p(\mathbf{z}_n|\mathbf{x}_n,\mathbf{X}_{n-1})\,\mathrm{d}\mathbf{z}_n \\
&= \frac{\displaystyle\int p(\mathbf{z}_{n+1}|\mathbf{z}_n)\,p(\mathbf{x}_n|\mathbf{z}_n)\,p(\mathbf{z}_n|\mathbf{X}_{n-1})\,\mathrm{d}\mathbf{z}_n}{\displaystyle\int p(\mathbf{x}_n|\mathbf{z}_n)\,p(\mathbf{z}_n|\mathbf{X}_{n-1})\,\mathrm{d}\mathbf{z}_n} \\
&= \sum_{l} w_n^{(l)}\,p(\mathbf{z}_{n+1}|\mathbf{z}_n^{(l)}) \tag{13.119}
\end{aligned}
$$

where we have made use of the conditional independence properties

$$p(\mathbf{z}_{n+1}|\mathbf{z}_n,\mathbf{X}_n) = p(\mathbf{z}_{n+1}|\mathbf{z}_n) \tag{13.120}$$
$$p(\mathbf{x}_n|\mathbf{z}_n,\mathbf{X}_{n-1}) = p(\mathbf{x}_n|\mathbf{z}_n) \tag{13.121}$$
which follow from the application of the d-separation criterion to the graph in Figure 13.5. The final line of (13.119) follows because the samples $\{\mathbf{z}_n^{(l)}\}$ are drawn from $p(\mathbf{z}_n|\mathbf{X}_{n-1})$, so that both integrals are approximated by sums over these samples, with weights given by $w_n^{(l)} \propto p(\mathbf{x}_n|\mathbf{z}_n^{(l)})$. The distribution given by (13.119) is a mixture distribution, and samples can be drawn by choosing a component $l$ with probability given by the mixing coefficients $w_n^{(l)}$ and then drawing a sample from the corresponding component.
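As a concrete illustration of this sampling step, the following Python sketch (not from the text) draws $L$ samples from the mixture (13.119). The linear-Gaussian transition model $p(z_{n+1}|z_n) = \mathcal{N}(z_{n+1}\,|\,a z_n, \sigma^2)$, and the parameters `a` and `sigma`, are hypothetical choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(z, w, L, a=0.9, sigma=1.0):
    """Draw L samples from the mixture (13.119).

    z : (L,) array of particles z_n^{(l)}
    w : (L,) array of normalized weights w_n^{(l)}
    The transition p(z_{n+1}|z_n) is taken to be a hypothetical
    linear-Gaussian model N(z_{n+1} | a z_n, sigma^2).
    """
    # Choose a mixture component l with probability w^{(l)} ...
    idx = rng.choice(len(z), size=L, p=w)
    # ... then sample from the corresponding transition density.
    return a * z[idx] + sigma * rng.standard_normal(L)
```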
In summary, we can view each step of the particle filter algorithm as comprising two stages. At time step $n$, we have a sample representation of the posterior distribution $p(\mathbf{z}_n|\mathbf{X}_n)$ expressed as samples $\{\mathbf{z}_n^{(l)}\}$ with corresponding weights $\{w_n^{(l)}\}$. This can be viewed as a mixture representation of the form (13.119). To obtain the corresponding representation for the next time step, we first draw $L$ samples from the mixture distribution (13.119), and then for each sample we use the new observation $\mathbf{x}_{n+1}$ to evaluate the corresponding weights $w_{n+1}^{(l)} \propto p(\mathbf{x}_{n+1}|\mathbf{z}_{n+1}^{(l)})$. This is illustrated, for the case of a single variable $z$, in Figure 13.23.
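Building on the sketch above, one complete update might look as follows. Again this is only an illustrative sketch: the Gaussian emission model $p(x_n|z_n) = \mathcal{N}(x_n\,|\,z_n, \sigma_x^2)$ and the parameter `sigma_x` are hypothetical stand-ins for whatever likelihood the application supplies.

```python
def particle_filter_step(z, w, x_next, L, a=0.9, sigma=1.0, sigma_x=0.5):
    """One step of the particle filter (sequential Monte Carlo).

    Given particles z_n^{(l)} with weights w_n^{(l)} representing
    p(z_n|X_n) and the new observation x_{n+1}, return particles and
    weights representing p(z_{n+1}|X_{n+1}). The transition and the
    emission model are hypothetical Gaussians used for illustration.
    """
    # Stage 1: draw L samples from the mixture (13.119).
    z_next = sample_mixture(z, w, L, a=a, sigma=sigma)
    # Stage 2: weight each sample by the likelihood of the new
    # observation, w_{n+1}^{(l)} proportional to p(x_{n+1}|z_{n+1}^{(l)}).
    likelihood = np.exp(-0.5 * ((x_next - z_next) / sigma_x) ** 2)
    w_next = likelihood / likelihood.sum()
    return z_next, w_next

# Usage: track p(z_n|X_n) over a short stream of observations.
L = 500
z, w = rng.standard_normal(L), np.full(L, 1.0 / L)
for x in [0.3, 0.1, -0.2]:
    z, w = particle_filter_step(z, w, x, L)
```

Note that stage 1 performs the resampling implicitly: drawing from the mixture (13.119) both resamples the particles in proportion to their weights and propagates them through the transition model.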
The particle filtering, or sequential Monte Carlo, approach has appeared in the literature under various names, including the bootstrap filter (Gordon et al., 1993), survival of the fittest (Kanazawa et al., 1995), and the condensation algorithm (Isard and Blake, 1998).

Exercises


13.1 (⋆) www Use the technique of d-separation, discussed in Section 8.2, to verify that the Markov model shown in Figure 13.3 having $N$ nodes in total satisfies the conditional independence properties (13.3) for $n = 2, \ldots, N$. Similarly, show that a model described by the graph in Figure 13.4 in which there are $N$ nodes in total