Pattern Recognition and Machine Learning
Figure 1.25 An example of a loss matrix with elements L_kj for the cancer treatment problem. The rows correspond to the true class, whereas the columns correspond to the assignment of class made by our decision criterion…
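The decision rule implied by such a loss matrix is to choose, for each input, the class j that minimizes the expected loss Σ_k L_kj p(C_k|x). A minimal sketch in Python (the loss values are assumed for illustration, not read from the figure):

    import numpy as np

    # Illustrative loss matrix for the cancer screening example: rows are the
    # true class (normal, cancer), columns are the decision. The entries are
    # assumed values, not the book's figure.
    L = np.array([[   0.0,    1.0],
                  [1000.0,    0.0]])

    def min_risk_decision(posterior):
        # Expected loss of deciding j is sum_k L[k, j] * p(C_k | x);
        # choose the j that minimizes it.
        return int(np.argmin(posterior @ L))

    # Even a modest posterior probability of cancer triggers the cancer
    # decision, because missing a cancer is penalized a thousandfold.
    print(min_risk_decision(np.array([0.95, 0.05])))   # -> 1 (decide cancer)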
Figure 1.26 Illustration of the reject option. Inputs x such that the larger of the two posterior probabilities is less than or equal to some threshold θ will be rejected…
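A sketch of this rejection rule (the threshold value is assumed for illustration):

    import numpy as np

    def classify_with_reject(posterior, theta=0.9):
        # Return the most probable class, or None (reject) when the largest
        # posterior does not exceed the threshold theta.
        k = int(np.argmax(posterior))
        return k if posterior[k] > theta else None

    print(classify_with_reject(np.array([0.97, 0.03])))   # confident -> 0
    print(classify_with_reject(np.array([0.60, 0.40])))   # ambiguous -> None

Setting θ = 1 rejects every input, whereas θ < 1/K for K classes rejects none.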
…subsequent decision stage in which we use these posterior probabilities to make optimal class assignments…
Figure 1.27 [two panels: the class-conditional densities p(x|C_1) and p(x|C_2) plotted against x, and the corresponding posterior probabilities p(C_1|x) and p(C_2|x)]…
…application). If we know the posterior probabilities, we can trivially revise the minimum risk decision…
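To make that advantage concrete: if the posteriors for a batch of inputs have been computed and stored, a revised loss matrix changes the decisions without any new inference. A minimal sketch with assumed numbers:

    import numpy as np

    # Cached posteriors p(C_k | x) for a batch of inputs (illustrative values).
    posteriors = np.array([[0.95, 0.05],
                           [0.30, 0.70]])

    def decisions(L):
        # Minimum-risk decision for every input, for a given loss matrix L.
        return np.argmin(posteriors @ L, axis=1)

    L_old = np.array([[0.0, 1.0], [1000.0, 0.0]])
    L_new = np.array([[0.0, 1.0], [  10.0, 0.0]])   # revised (assumed) costs
    print(decisions(L_old))   # decisions under the old loss matrix
    print(decisions(L_new))   # revised instantly, with no new inference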
…independent, so that

    p(x_I, x_B | C_k) = p(x_I | C_k) p(x_B | C_k).        (1.84)

This is an example of conditional independence (Section 8.2)…
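Applying Bayes' theorem together with (1.84) shows that the combined posterior is proportional to the product of the per-modality posteriors divided by the prior, p(C_k | x_I, x_B) ∝ p(C_k | x_I) p(C_k | x_B) / p(C_k). A minimal sketch of this fusion rule (all probability values assumed for illustration):

    import numpy as np

    def combine_posteriors(post_I, post_B, prior):
        # Fuse two per-modality posteriors under conditional independence
        # (1.84): p(C_k | x_I, x_B) is proportional to
        # p(C_k | x_I) * p(C_k | x_B) / p(C_k).
        unnorm = post_I * post_B / prior
        return unnorm / unnorm.sum()

    prior      = np.array([0.99, 0.01])   # assumed class prior
    post_xray  = np.array([0.70, 0.30])   # assumed posterior from the X-ray
    post_blood = np.array([0.80, 0.20])   # assumed posterior from blood data
    print(combine_posteriors(post_xray, post_blood, prior))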
Figure 1.28 The regression function y(x), which minimizes the expected squared loss, is given by the mean of the conditional distribution p(t|x)…
(b) First solve the inference problem of determining the conditional density p(t|x), and then subsequently marginalize to find the conditional mean given by (1.89)…
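Approach (b) is easy to make concrete once p(t|x) is available on a grid: the optimal least-squares prediction is the conditional mean (1.89), obtained by numerical integration. A minimal sketch, with the conditional density assumed Gaussian purely for illustration:

    import numpy as np

    t = np.linspace(-5.0, 5.0, 2001)       # grid of target values

    def conditional_mean(p_t_given_x, t):
        # Optimal prediction under squared loss: the conditional mean
        # E[t | x], computed by numerical integration over the grid.
        p = p_t_given_x / np.trapz(p_t_given_x, t)
        return np.trapz(t * p, t)

    # Assumed conditional density for one value of x, for illustration only.
    p = np.exp(-0.5 * ((t - 1.3) / 0.5) ** 2)
    print(conditional_mean(p, t))          # close to 1.3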
Figure 1.29 [plots of the Minkowski loss |y − t|^q against y − t, for q = 0.3, q = 1, and q = 2]…
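The role of q can be checked empirically: the minimizer of the expected Minkowski loss is the conditional mean for q = 2, the conditional median for q = 1, and the conditional mode as q → 0. A sketch on a skewed sample (the data-generating distribution is assumed for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    t = rng.gamma(shape=2.0, scale=1.0, size=10_000)   # skewed sample (assumed)

    ys = np.linspace(0.0, 6.0, 601)
    for q in (1.0, 2.0):
        # Empirical expected Minkowski loss for each candidate prediction y.
        losses = [np.mean(np.abs(y - t) ** q) for y in ys]
        print(q, ys[int(np.argmin(losses))])

    print("median:", np.median(t), "mean:", np.mean(t))
    # q = 1 recovers the sample median, q = 2 the sample mean.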
…tion (1.92) and the corresponding entropy (1.93). We now show that these definitions indeed possess useful properties…
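Computing the entropy (1.93) directly is a one-liner; the sketch below reproduces the two eight-state examples used in this section, the uniform distribution with H = 3 bits and the nonuniform one with H = 2 bits:

    import numpy as np

    def entropy_bits(p):
        # H = -sum_i p_i log2 p_i, with 0 log 0 taken as 0 (Eq. 1.93).
        p = np.asarray(p, dtype=float)
        nz = p[p > 0]
        return -(nz * np.log2(nz)).sum()

    # Uniform distribution over eight states: 3 bits.
    print(entropy_bits([1/8] * 8))
    # The nonuniform eight-state distribution discussed in the text: 2 bits.
    print(entropy_bits([1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64]))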
…the number of different ways of allocating the objects to the bins. There are N ways to choose the first object…
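This counting argument can be checked numerically: the multiplicity is W = N!/∏_i n_i!, and by Stirling's approximation (1/N) ln W approaches the entropy of the bin fractions as N → ∞. A sketch:

    import math

    def per_object_log_multiplicity(counts):
        # (1/N) ln W, where W = N! / prod_i n_i! is the number of ways of
        # allocating N objects to bins with occupation numbers n_i.
        N = sum(counts)
        logW = math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in counts)
        return logW / N

    def entropy_nats(counts):
        N = sum(counts)
        return -sum(n / N * math.log(n / N) for n in counts if n > 0)

    for scale in (1, 10, 1000):
        counts = [3 * scale, 1 * scale]    # assumed occupation numbers
        print(scale, per_object_log_multiplicity(counts), entropy_nats(counts))
    # As N grows, (1/N) ln W converges to the entropy of the fractions n_i/N.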
Figure 1.30 Histograms of two probability distributions over thirty bins, illustrating the higher value of the entropy H for the broader distribution (H = 1.77 for the narrower, H = 3.09 for the broader)…
…Δ → 0. The first term on the right-hand side of (1.102) will approach the integral of p(x) ln p(x) in this limit…
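The limit can be watched numerically: with H_Δ = −Σ_i p(x_i)Δ ln(p(x_i)Δ), the quantity H_Δ + ln Δ approaches the differential entropy −∫ p(x) ln p(x) dx as the bin width shrinks. A sketch using a Gaussian, whose differential entropy has the closed form ½ ln(2πeσ²):

    import numpy as np

    sigma = 2.0
    analytic = 0.5 * np.log(2 * np.pi * np.e * sigma**2)  # Gaussian diff. entropy

    for delta in (1.0, 0.1, 0.01):
        x = np.arange(-40.0, 40.0, delta)                 # bin centres
        p = np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        mass = p * delta                                  # probability per bin
        mass = mass[mass > 0]
        H_delta = -(mass * np.log(mass)).sum()            # discrete entropy
        print(delta, H_delta + np.log(delta), analytic)
    # H_delta + ln(delta) approaches the differential entropy as delta -> 0.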
…three constraints

    ∫_{-∞}^{∞} p(x) dx = 1                    (1.105)
    ∫_{-∞}^{∞} x p(x) dx = μ                  (1.106)
    ∫_{-∞}^{∞} (x − μ)^2 p(x) dx = σ^2.       (1.107)

Appen…
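The maximizing distribution turns out to be the Gaussian, and this can be spot-checked numerically by comparing its differential entropy against other densities constrained to the same variance. A sketch, where the Laplace and uniform comparison densities are my choice rather than the book's:

    import numpy as np

    x = np.linspace(-30.0, 30.0, 60001)
    dx = x[1] - x[0]

    def diff_entropy(p):
        p = p / (p.sum() * dx)             # normalize on the grid
        nz = p[p > 1e-300]
        return -(nz * np.log(nz)).sum() * dx

    sigma = 1.0
    gauss   = np.exp(-0.5 * (x / sigma) ** 2)
    b       = sigma / np.sqrt(2.0)         # Laplace scale giving variance sigma^2
    laplace = np.exp(-np.abs(x) / b)
    half    = sigma * np.sqrt(3.0)         # uniform on [-half, half], same variance
    uniform = (np.abs(x) <= half).astype(float)

    for name, p in (("gaussian", gauss), ("laplace", laplace), ("uniform", uniform)):
        print(name, diff_entropy(p))
    # The Gaussian gives the largest value, 0.5 * ln(2 * pi * e * sigma^2).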
…which is called the conditional entropy of y given x. It is easily seen, using the product rule (Exercise 1.37), that…
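The product-rule relation being referred to, H[x, y] = H[y|x] + H[x], is easy to verify on a small joint table; a minimal sketch with an assumed joint distribution:

    import numpy as np

    # Assumed joint distribution p(x, y): rows index x, columns index y.
    pxy = np.array([[0.30, 0.10],
                    [0.20, 0.40]])
    px = pxy.sum(axis=1)

    H = lambda p: -(p[p > 0] * np.log(p[p > 0])).sum()

    H_joint = H(pxy)                                   # H[x, y]
    H_x = H(px)                                        # H[x]
    # H[y | x] computed directly from its definition:
    H_y_given_x = -(pxy * np.log(pxy / px[:, None])).sum()
    print(H_joint, H_y_given_x + H_x)                  # equal: the product rule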
Figure 1.31 A convex function f(x) is one for which every chord (shown in blue) lies on or above the function…
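Convexity is exactly what powers Jensen's inequality, f(Σ_i λ_i x_i) ≤ Σ_i λ_i f(x_i), used next; a quick numerical spot-check with the convex function −ln x (weights and points assumed):

    import numpy as np

    f = lambda x: -np.log(x)               # a convex function
    lam = np.array([0.2, 0.5, 0.3])        # convex weights summing to one
    x = np.array([0.5, 2.0, 4.0])          # assumed points

    print(f(np.dot(lam, x)))               # f(E[x])  ~ -0.83
    print(np.dot(lam, f(x)))               # E[f(x)]  ~ -0.62, never smaller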
…where we have used the fact that −ln x is a convex function, together with the normalization condition…
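The quantity whose nonnegativity is established here, the Kullback–Leibler divergence, is straightforward to evaluate for discrete distributions; a minimal sketch with assumed distributions:

    import numpy as np

    def kl(p, q):
        # KL(p || q) = sum_i p_i ln(p_i / q_i); zero iff p and q are equal.
        p, q = np.asarray(p, float), np.asarray(q, float)
        nz = p > 0
        return (p[nz] * np.log(p[nz] / q[nz])).sum()

    p = np.array([0.4, 0.4, 0.2])          # assumed distributions
    q = np.array([0.6, 0.2, 0.2])
    print(kl(p, q), kl(q, p))              # both positive, and not equal
    print(kl(p, p))                        # zero when the distributions match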
Thus we can view the mutual information as the reduction in the uncertainty about x by virtue of being told the value of y…
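Mutual information can be computed either as the KL divergence between p(x, y) and p(x)p(y) or as the entropy reduction H[x] − H[x|y]; the two routes agree, as this sketch with an assumed joint table shows:

    import numpy as np

    pxy = np.array([[0.30, 0.10],          # assumed joint p(x, y)
                    [0.20, 0.40]])
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)

    # Route 1: I[x, y] = KL( p(x, y) || p(x) p(y) )
    I_kl = (pxy * np.log(pxy / (px * py))).sum()

    # Route 2: I[x, y] = H[x] - H[x | y]
    H_x = -(px * np.log(px)).sum()
    H_x_given_y = -(pxy * np.log(pxy / py)).sum()
    print(I_kl, H_x - H_x_given_y)         # the two routes agree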
Exercises

1.6 () Show that if two variables x and y are independent, then their covariance is zero.

1.7 () www In this exercise…
1.12 () www Using the results (1.49) and (1.50), show that

    E[x_n x_m] = μ^2 + I_nm σ^2        (1.130)

where x_n and x_m denote data points sampled from…
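A Monte Carlo spot-check of (1.130) (not a substitute for the requested derivation; all parameter values are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, trials = 1.5, 0.7, 200_000  # assumed parameters

    x = rng.normal(mu, sigma, size=(trials, 2))   # independent draws per row
    print(np.mean(x[:, 0] * x[:, 1]))   # n != m: close to mu^2           = 2.25
    print(np.mean(x[:, 0] ** 2))        # n == m: close to mu^2 + sigma^2 = 2.74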