Data Mining: Practical Machine Learning Tools and Techniques, Second Edition
classes—that is, they are errors made by the rule. Then choose the new term to maximize the ratio p/t. An example will help. For ...
4.4 COVERING ALGORITHMS: CONSTRUCTING RULES

Considering the possibilities for the unknown term ? yields the seven choices: ag ...
selects three. In the event of a tie, we choose the rule with the greater coverage, giving the final rule: If astigmatism = yes ...
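The term-selection step described above can be sketched in code. This is a minimal illustration, not the book's implementation; the instance representation and the tiny toy dataset are my own assumptions, chosen only to show the p/t calculation and the tie-breaking on coverage.

```python
# Sketch of the term-selection step: among candidate tests, pick the one
# that maximizes p/t, breaking ties in favor of greater coverage t.
# Instances are dicts of attribute values plus a "class" key (assumed
# representation); the toy data below is illustrative, not from the text.

def best_term(instances, target_class, candidate_tests):
    """candidate_tests is a list of (attribute, value) pairs."""
    best, best_ratio, best_t = None, -1.0, -1
    for attr, value in candidate_tests:
        covered = [x for x in instances if x[attr] == value]
        t = len(covered)                    # instances the term covers
        if t == 0:
            continue
        p = sum(1 for x in covered if x["class"] == target_class)
        ratio = p / t
        # maximize p/t; on a tie, prefer the test with greater coverage
        if ratio > best_ratio or (ratio == best_ratio and t > best_t):
            best, best_ratio, best_t = (attr, value), ratio, t
    return best

toy = [
    {"astigmatism": "yes", "class": "hard"},
    {"astigmatism": "yes", "class": "none"},
    {"astigmatism": "no",  "class": "none"},
]
print(best_term(toy, "hard", [("astigmatism", "yes"), ("astigmatism", "no")]))
# -> ('astigmatism', 'yes'): p/t = 1/2 beats 0/1
```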
that it assigns cases to the class in question that actually do not have that cl ...
be found that covers this instance, in which case the class in question is predicted, or no such rule is found, in which case ...
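The prediction step just described can be sketched as follows. This is a hedged illustration under assumptions of my own (the rule representation and the fallback to a default class), not the book's code.

```python
# Sketch of rule-based prediction: scan the rules in order, and if one
# is found that covers the instance, predict its class; otherwise fall
# back to a default. The (conditions, class) representation is assumed.

def predict(rules, instance, default):
    for conditions, klass in rules:
        if all(instance.get(a) == v for a, v in conditions):
            return klass          # a covering rule was found
    return default                # no rule covers the instance

rules = [([("astigmatism", "yes")], "hard")]
print(predict(rules, {"astigmatism": "yes"}, "none"))  # -> hard
print(predict(rules, {"astigmatism": "no"}, "none"))   # -> none
```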
4.5 MINING ASSOCIATION RULES

accuracy (the same number expressed as a proportion of the number of instances to which the rule ...
CHAPTER 4 | ALGORITHMS: THE BASIC METHODS

Table 4.10 Item sets for the weather data with coverage 2 or greater. One-item sets ...
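Counting item sets that meet a minimum coverage, in the spirit of Table 4.10, can be sketched as below. An "item" here is an attribute–value pair; the tiny dataset stands in for the weather data and is not copied from it.

```python
# Illustrative sketch: count all item sets of a given size and keep
# those whose coverage (number of instances containing the set) is at
# least a threshold. Data below is made up, not the weather data.
from itertools import combinations
from collections import Counter

def item_sets(instances, size, min_coverage=2):
    counts = Counter()
    for inst in instances:
        items = sorted(inst.items())          # canonical order of items
        for combo in combinations(items, size):
            counts[combo] += 1
    return {s: c for s, c in counts.items() if c >= min_coverage}

data = [
    {"humidity": "normal", "windy": "false"},
    {"humidity": "normal", "windy": "false"},
    {"humidity": "high",   "windy": "true"},
]
one = item_sets(data, 1)
# humidity=normal and windy=false each reach coverage 2; the rest drop out
print(one)
```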
If humidity = normal and windy = false then play = yes 4/4 If humidity = normal and play = yes then ...
Table 4.11 Association rules for the weather data. Association rule Coverage Accura ...
which has coverage 2. Three subsets of this item set also have coverage 2: temperature = cool, w ...
remaining three-item set is indeed present in the hash table. Thus in this example there is only one candidate four-item set, (A ...
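The candidate-generation check just described can be sketched as follows: a k-item set is a viable candidate only if every one of its (k-1)-item subsets survived the previous pass. A Python set stands in here for the hash table; the item names follow the text's single-letter style but the particular three-item sets are illustrative.

```python
# Sketch of candidate generation: keep a k-item set only if all of its
# (k-1)-item subsets are present among the frequent sets of the
# previous pass (a set standing in for the hash table).
from itertools import combinations

def candidates(frequent_prev, k):
    prev = set(frequent_prev)
    items = sorted({i for s in prev for i in s})
    out = []
    for combo in combinations(items, k):
        # every (k-1)-subset must have been frequent at the last pass
        if all(sub in prev for sub in combinations(combo, k - 1)):
            out.append(combo)
    return out

three = [("A", "B", "C"), ("A", "B", "D"), ("A", "C", "D"),
         ("A", "C", "E"), ("B", "C", "D")]
print(candidates(three, 4))  # only (A, B, C, D) passes the subset check
```

With these five three-item sets, (A, B, C, D) is the only four-item set whose four three-item subsets are all present, mirroring the single-candidate situation the text describes.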
through the dataset for each different size of item set. Sometimes the dataset is too large to read in to ...

4.6 LINEAR MODELS
x = w0 + w1a1 + w2a2 + ... + wkak

where x is the class; a1, a2, ..., ak are the attribute values; and w0, w1, ..., wk are weights. The weights are calculated from t ...
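For the one-attribute case of the model above, x = w0 + w1*a1, the least-squares weights have a standard closed form. The sketch below evaluates it on made-up data; it is a numeric illustration, not the book's procedure for the general k-attribute case.

```python
# Minimal sketch: least-squares fit of x = w0 + w1*a1 for one attribute.
# w1 = covariance(a, x) / variance(a); w0 places the line through the
# means. Data below is fabricated for illustration.

def fit_simple(a_values, x_values):
    n = len(a_values)
    mean_a = sum(a_values) / n
    mean_x = sum(x_values) / n
    w1 = (sum((a - mean_a) * (x - mean_x)
              for a, x in zip(a_values, x_values))
          / sum((a - mean_a) ** 2 for a in a_values))
    w0 = mean_x - w1 * mean_a
    return w0, w1

w0, w1 = fit_simple([1, 2, 3, 4], [3, 5, 7, 9])  # data lies on x = 1 + 2a
print(w0, w1)  # -> 1.0 2.0
```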
However, linear models serve well as building blocks for more complex learning methods. Linear classific ...
with weights w. Figure 4.9(b) shows an example of this function in one dimension, with two weights w0 = 0.5 and w1 = 1. Just as ...
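A one-dimensional instance of this function with the two weights mentioned, w0 = 0.5 and w1 = 1, can be evaluated as below. The exact form P(1|a) = 1/(1 + e^-(w0 + w1*a)) is my reading of the figure being described, so treat it as an assumption.

```python
# Sketch of the one-dimensional logistic function with assumed weights
# w0 = 0.5, w1 = 1: P(1|a) = 1 / (1 + exp(-(w0 + w1*a))).
import math

def logistic(a, w0=0.5, w1=1.0):
    return 1.0 / (1.0 + math.exp(-(w0 + w1 * a)))

# the output crosses 0.5 exactly where w0 + w1*a = 0, i.e. at a = -0.5
print(round(logistic(-0.5), 3))  # -> 0.5
```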
where the x(i) are either zero or one. The weights wi need to be chosen to maximize the log-likelihood. Ther ...
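The log-likelihood objective referred to above can be written out directly: with each x(i) either zero or one and p_i the model's predicted probability of class 1 for instance i, it sums x(i) log p_i + (1 - x(i)) log(1 - p_i). The sketch below only evaluates the objective; the maximization itself (e.g. by an iterative optimizer) is not shown and is not claimed to be the book's method.

```python
# Sketch: evaluate the log-likelihood of 0/1 targets x under predicted
# probabilities p. Its maximum is 0, approached when every prediction
# is confident and correct.
import math

def log_likelihood(x, p):
    return sum(xi * math.log(pi) + (1 - xi) * math.log(1 - pi)
               for xi, pi in zip(x, p))

# two confident, correct predictions give a value close to 0
print(round(log_likelihood([1, 0], [0.9, 0.1]), 3))  # -> -0.211
```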
Because this is a linear equality in the attribute values, the boundary is a linear plane, or hyperplane, in instance space. It i ...
means that we don’t have to include an additional constant element in the sum. If the sum is greater than ...
To see why this works, consider the situation after an instance a pertaining to the first class has been added: This means the ou ...
While some instances are misclassified
    for every instance a
        classify a using the current weights
        if the pr ...
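The perceptron learning rule sketched above can be turned into a short runnable version. This is a hedged sketch, not the book's code: the +1/-1 labels, the epoch cap, and the tiny dataset are my assumptions. A constant 1 is prepended to each instance so no separate threshold term is needed, matching the remark earlier that no additional constant element is required in the sum.

```python
# Perceptron sketch: classify each instance with the current weights;
# on a mistake, add the instance to the weight vector if it belongs to
# the first class, otherwise subtract it. Labels are +1/-1 (assumed).

def train_perceptron(instances, labels, epochs=10):
    w = [0.0] * (len(instances[0]) + 1)
    for _ in range(epochs):
        mistakes = 0
        for a, y in zip(instances, labels):
            a = [1.0] + list(a)                       # bias input
            s = sum(wi * ai for wi, ai in zip(w, a))  # classify a
            if (1 if s > 0 else -1) != y:             # misclassified?
                w = [wi + y * ai for wi, ai in zip(w, a)]
                mistakes += 1
        if mistakes == 0:      # all instances classified correctly
            break
    return w

w = train_perceptron([(2.0,), (-2.0,)], [1, -1])
# the learned hyperplane now separates the two training points
print(sum(wi * ai for wi, ai in zip(w, [1.0, 2.0])) > 0)  # -> True
```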