in the construction of a failure classification model. In many cases, we can present an
in-depth discussion of the models including individual variable weights. In others,
we present the models in more general terms due to the lack of precise documenta-
tion in the original article. In general, to make this survey useful to researchers and
practitioners alike, we attempt to summarize the contents of the models under the fol-
lowing headings:
- Modeling techniques used. While multiple discriminant analysis (MDA) contin-
ues to be the most popular technique, researchers have tried other techniques
such as multinomial logit analysis, probit analysis, recursive partitioning (deci-
sion tree analysis), Bayesian discriminant analysis, survival analysis, and neural
networks. For a variety of reasons, MDA appears to be a de facto standard for
comparison of distress prediction models. Where the authors have used a tech-
nique other than MDA, they usually have compared its results with those from
MDA. It is interesting to note that MDA results continue to compare favorably
with the other techniques.
- Data issues. The size of the sample used and the sources of data are oftentimes
critical in assessing the statistical validity of results as well as in the planning of
replication or extension type studies. As in many areas of empirical research, the
sophistication of the techniques is often not matched by the availability of good
data, especially data on failed firms. This problem tends to be more pronounced
in the smaller economies of some of the developed countries and in the case of
most developing countries. As is common in all empirical research, the ran-
domness and the size of the sample used are mentioned because they are gener-
ally indicative of the degree of confidence that may be placed in the conclusions
being drawn.
- Definition of “failure” and “nonfailure.” Most models employ a sample of two
a priori groups consisting of “failed” and “nonfailed” firms. Depending on the
inclination of the researcher or on the local conditions, the definition of a fail-
ure may vary. Some examples are bankruptcy filing by a company, bond default,
bank loan default, delisting of a company, government intervention via special
financing, and liquidation. Closely tied to the failure event is the date of the
event. The quality of almost all conclusions drawn about how “early” the dis-
tress prediction was depends upon where the analyst placed the date of failure.
The healthy firms’ data is, by definition, “censored” data because all that can be
said of the healthy firms is that they were healthy at the time the sample was
taken. It has been found, for example, that some firms that appear to be Type II
errors by a model (healthy firms classified as failures) turned out to have failed
at a later time.
- Test results. It is customary to expect test statistics (such as the t and F statistics)
to indicate the statistical significance of the findings. While this is done to es-
tablish a baseline for measurement, it is important to note that useful conclu-
sions may be drawn from even small-sample studies. In-sample and out-of-sample
(hold-out) results, Type I and Type II error rates, and analyst-modified
results are also reported where available.
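The workflow these headings describe — fitting a two-group discriminant model and reporting hold-out Type I and Type II error rates — can be sketched in a few lines. The following Python example is illustrative only: it uses synthetic "financial ratio" data with made-up group means (no figures from any model surveyed here) and scikit-learn's linear discriminant analysis as a stand-in for classical MDA.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-variable "ratios": failed firms (label 1) are drawn with
# lower means than healthy firms (label 0). Purely illustrative data.
n = 200
healthy = rng.normal(loc=[0.4, 2.0], scale=[0.1, 0.5], size=(n, 2))
failed = rng.normal(loc=[0.1, 1.0], scale=[0.1, 0.5], size=(n, 2))
X = np.vstack([healthy, failed])
y = np.array([0] * n + [1] * n)

# Split into an estimation sample and a hold-out sample, as the
# "Test results" heading describes.
X_tr, X_ho, y_tr, y_ho = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

mda = LinearDiscriminantAnalysis()  # linear discriminant, akin to MDA
mda.fit(X_tr, y_tr)
pred = mda.predict(X_ho)

# Type I error: a failed firm classified as healthy.
type1 = np.mean(pred[y_ho == 1] == 0)
# Type II error: a healthy firm classified as failed.
type2 = np.mean(pred[y_ho == 0] == 1)
print(f"Hold-out Type I error: {type1:.2%}, Type II error: {type2:.2%}")
```

With well-separated synthetic groups both error rates are small; on real failed-firm data, where samples are scarce and groups overlap, the hold-out rates are the quantities the surveyed studies report.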
(a) Developing and Developed Country Models. The failure prediction models re-
viewed in this chapter may be broadly grouped into two homogeneous categories: de-
veloped country models and developing country models. The classification of a coun-