performs better than the original rule. This pruning process repeats until the
rule cannot be improved by deleting any further tests. The whole procedure is
repeated for each class, obtaining one best rule for each class, and the overall
best rule is established by evaluating the rules on the pruning set. This rule is
then added to the rule set, the instances it covers removed from the training
data—from both growing and pruning sets—and the process is repeated.
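
To make the overall loop concrete, the following Python sketch mirrors the procedure just described. It is a structural outline only: the helper functions grow_rule, prune_rule, worth, and covers, and the representation of instances as (attributes, class) pairs, are assumptions made for illustration rather than part of any particular rule learner.

    def build_rule_set(growing_set, pruning_set, classes,
                       grow_rule, prune_rule, worth, covers):
        """Sketch of the loop described above: grow and prune one rule per
        class, keep the overall best rule (judged on the pruning set),
        remove the instances it covers from both sets, and repeat."""
        rule_set = []
        while growing_set and pruning_set:
            candidates = []
            for cls in classes:
                rule = grow_rule(growing_set, cls)         # build a "perfect" rule
                rule = prune_rule(rule, pruning_set, cls)  # then prune it back
                candidates.append((worth(rule, pruning_set, cls), rule, cls))
            # the overall best rule is the one scoring highest on the pruning set
            _, best_rule, best_cls = max(candidates, key=lambda c: c[0])
            if not any(covers(best_rule, x) for x, _ in growing_set):
                break                                      # practical guard, not from the text
            rule_set.append((best_rule, best_cls))
            # remove the covered instances from both growing and pruning sets
            growing_set = [(x, c) for x, c in growing_set if not covers(best_rule, x)]
            pruning_set = [(x, c) for x, c in pruning_set if not covers(best_rule, x)]
        return rule_set
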
Why not do the pruning as we build the rule up, rather than building up the
whole thing and then throwing parts away? That is, why not preprune rather
than postprune? Just as when pruning decision trees it is often best to grow the
tree to its maximum size and then prune back, so with rules it is often best to
make a perfect rule and then prune it. Who knows? Adding that last term may
make a really good rule, a situation that we might never have noticed had we
adopted an aggressive prepruning strategy.
It is essential that the growing and pruning sets are separate, because it is
misleading to evaluate a rule on the very data used to form it: that would lead to
serious errors by preferring rules that were overfitted. Usually the training set is
split so that two-thirds of instances are used for growing and one-third for
pruning. A disadvantage, of course, is that learning occurs from instances in the
growing set only, and so the algorithm might miss important rules because some
key instances had been assigned to the pruning set. Moreover, the wrong rule
might be preferred because the pruning set contains only one-third of the data
and may not be completely representative. These effects can be ameliorated by
resplitting the training data into growing and pruning sets at each cycle of the
algorithm, that is, after each rule is finally chosen.
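
As a small illustration, the split and the per-cycle resplit might look like the following. The two-thirds/one-third proportion comes from the text; the function name and the use of a random shuffle are assumptions of this sketch.

    import random

    def split_grow_prune(training_data, grow_fraction=2/3, rng=random):
        """Resplit the remaining training data into growing and pruning sets.
        Called afresh at each cycle, that is, after each rule is finally chosen."""
        data = list(training_data)
        rng.shuffle(data)
        cut = int(len(data) * grow_fraction)
        return data[:cut], data[cut:]
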
The idea of using a separate pruning set for pruning—which is applicable to
decision trees as well as rule sets—is called reduced-error pruning. The variant
described previously prunes a rule immediately after it has been grown and is
called incremental reduced-error pruning. Another possibility is to build a full,
unpruned rule set first, pruning it afterwards by discarding individual tests.
However, this method is much slower.
Of course, there are many different ways to assess the worth of a rule based
on the pruning set. A simple measure is to consider how well the rule would do
at discriminating the predicted class from other classes if it were the only rule
in the theory, operating under the closed world assumption. If it gets p instances
right out of the t instances that it covers, and there are P instances of this class
out of a total of T instances altogether, then it gets p positive instances right.
The instances that it does not cover include N - n negative ones, where n = t - p
is the number of negative instances that the rule covers and N = T - P is the
total number of negative instances. Thus the rule has an overall success ratio of

[p + (N - n)] / T.
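
A small helper that computes this measure from the counts defined above might look as follows; the function name and the example numbers are illustrative only.

    def success_ratio(p, t, P, T):
        """Worth of a rule under the closed world assumption: the positives it
        covers plus the negatives it leaves uncovered, over all T instances."""
        n = t - p          # negative instances the rule covers
        N = T - P          # negative instances in total
        return (p + (N - n)) / T

    # Example: a rule covering t = 12 instances, p = 10 of them of the
    # predicted class, with P = 30 instances of that class out of T = 100:
    # success_ratio(10, 12, 30, 100) == (10 + (70 - 2)) / 100 == 0.78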