Pattern Recognition and Machine Learning
2.5. Nonparametric Methods

Figure 2.24 An illustration of the histogram approach to density estimation, in which a data set ...
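The histogram estimator illustrated in Figure 2.24 divides the data range into bins of width Δ and sets the density in bin i to p_i = n_i/(NΔ), where n_i is the count in that bin. A minimal sketch (not from the book; NumPy-based, with illustrative function and parameter names):

```python
import numpy as np

def histogram_density(data, bin_width):
    """Histogram density estimate: p_i = n_i / (N * bin_width),
    so that sum_i p_i * bin_width = 1 over the binned range."""
    edges = np.arange(data.min(), data.max() + bin_width, bin_width)
    counts, edges = np.histogram(data, bins=edges)
    density = counts / (len(data) * bin_width)
    return density, edges
```

Because every observation falls into exactly one bin, the estimate integrates to one by construction; the single smoothing parameter is the bin width Δ.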
this neighbourhood property was defined by the bins, and there is a natural ‘smoothing’ param ...
We can exploit the result (2.246) in two different ways. Either we can fix K and determine the valu ...
Figure 2.25 Illustration of the kernel density model (2.250) applied to the same data set used ...
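The kernel density model (2.250) places one Gaussian of width h on each data point and averages their contributions. A one-dimensional sketch (illustrative code, not from the book):

```python
import numpy as np

def kernel_density(x, data, h):
    """Gaussian kernel density estimate in one dimension:
    p(x) = (1/N) * sum_n N(x | x_n, h^2), i.e. eq. (2.250) with D = 1."""
    norm = 1.0 / np.sqrt(2.0 * np.pi * h ** 2)
    return float(np.mean(norm * np.exp(-(x - data) ** 2 / (2.0 * h ** 2))))
```

Here h plays the role of the smoothing parameter: small h gives a noisy estimate with structure on the scale of individual points, large h washes out the structure of the data.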
Figure 2.26 Illustration of K-nearest-neighbour density estimation using the same data set as in ...
Figure 2.27 (a) In the K-nearest-neighbour classifier, a new point, shown by the black diamond, ...
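The K-nearest-neighbour classifier sketched in Figure 2.27 assigns a new point to the class having the most representatives among its K closest training points. A minimal sketch (illustrative names; Euclidean distance assumed, ties broken towards the lowest class label):

```python
import numpy as np

def knn_classify(x, X_train, y_train, K):
    """K-nearest-neighbour classification: find the K training points
    closest to x and return the majority class among their labels."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest_labels = y_train[np.argsort(dists)[:K]]
    classes, votes = np.unique(nearest_labels, return_counts=True)
    return classes[np.argmax(votes)]
```

This vote implements the posterior estimate p(C_k | x) = K_k / K, where K_k is the number of the K neighbours belonging to class C_k.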
An interesting property of the nearest-neighbour (K = 1) classifier is that, in the limit N → ∞, the error rate is neve ...

Exercises
Use this result to prove by induction the following result

(1 + x)^N = \sum_{m=0}^{N} \binom{N}{m} x^m    (2.263)

w ...
2.7 ( ) Consider a binomial random variable x given by (2.9), with prior distribution for μ given by the beta distribu ...
2.11 ( ) www By expressing the expectation of ln μ_j under the Dirichlet distribution (2.38) as a d ...
2.16 ( ) www Consider two random variables x_1 and x_2 having Gaussian distributions with means μ_1, μ_2 and precis ...
Mahalanobis distance Δ is given by

V_D |\Sigma|^{1/2} \Delta^D    (2.286)

where V_D is the volume of the unit sphere ...
2.29 ( ) Using the partitioned matrix inversion formula (2.76), show that the inverse of the precision matrix (2.1 ...
distribution, by starting with the maximum likelihood expression

\sigma_{ML}^2 = \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu)^2.    ( ...
variable drawn from the distribution (2.293). Show that the log likelihood function over w and σ^2, for an observed ...
where ℜ denotes the real part, prove (2.178). Finally, by using sin(A − B) = ℑ exp{i(A − B)}, where ℑ denot ...
3 Linear Models for Regression

The focus so far in this book has been on unsupervised learning, including topics such as density ...
Given a training data set comprising N observations {x_n}, where n = 1, ..., N, together with corresp ...
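Given such a training set, the linear basis-function model y(x, w) = w^T φ(x) is fitted by minimizing the sum-of-squares error, whose minimizer satisfies the normal equations w = (Φ^T Φ)^{-1} Φ^T t. A sketch (illustrative code, assuming a design matrix Φ with one row φ(x_n)^T per observation and target vector t):

```python
import numpy as np

def fit_linear_model(Phi, t):
    """Least-squares weights for y(x, w) = w^T phi(x): solves
    min_w ||Phi w - t||^2, i.e. the normal equations
    w = (Phi^T Phi)^{-1} Phi^T t, computed stably via lstsq."""
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    return w
```

Using `lstsq` rather than forming (Φ^T Φ)^{-1} explicitly avoids numerical problems when the design matrix is ill-conditioned.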
3.1. Linear Basis Function Models

or feature extraction, to the original data variables. If the original variables compris ...
Figure 3.1 Examples ...
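Figure 3.1 shows examples of basis functions; two common choices, polynomial and Gaussian, can be sketched as follows (illustrative code, not from the book):

```python
import numpy as np

def polynomial_basis(x, M):
    """phi_j(x) = x^j for j = 0, ..., M-1; column j holds the j-th power."""
    return np.power(x[:, None], np.arange(M))

def gaussian_basis(x, centres, s):
    """phi_j(x) = exp(-(x - mu_j)^2 / (2 s^2)) for each centre mu_j,
    where s controls the spatial scale of the basis functions."""
    return np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2.0 * s ** 2))
```

Each function maps N input values to an N × M design matrix, whose rows φ(x_n)^T feed directly into the least-squares fit of the model.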