12 S.P. Kothari and J.B. Warner
dependence is accounted for because the variability of the portfolio returns through
time incorporates whatever cross-dependence exists among the returns on individual
event securities.
The portfolio return approach has a drawback, however. To the extent the event pe-
riod is associated with increased uncertainty, i.e., greater return variability, the use of
historical or post-event time-series variability might understate the true variability of
the event-period abnormal performance. An increase in event-period return variability is
economically intuitive. The event might have been triggered by uncertainty-increasing
factors and/or the event itself causes uncertainty in the economic environment for the
firm. In either case, the event-period return variability is likely to exceed that during
other time periods for the event firms. Therefore, the statistical significance of the event-
window abnormal performance would be overstated if it is evaluated on the basis of
historical variability of the event-firm portfolio returns (Brown and Warner, 1980, 1985;
Collins and Dent, 1984). One means of estimating the likely increase in the variability
of event-period returns is to estimate the cross-sectional variability of returns during
the event and non-event periods. The ratio of the event-period variance to the non-event-period
variance then serves as an estimate of the degree of increase in return variability
during the event period, and this estimate can be used to correct a test statistic
that would otherwise be computed ignoring the increased event-period uncertainty.^6
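A minimal sketch of such a variance-ratio adjustment follows. The simulated abnormal returns, window lengths, and the scaling of the standard error by the square root of the variance ratio are illustrative assumptions, not the exact procedure used in the studies cited:

```python
import numpy as np

rng = np.random.default_rng(0)

n_firms, est_days = 50, 100
# Simulated abnormal returns: an estimation window and a single event day,
# with event-day variance inflated to mimic event-induced uncertainty.
est_ar = rng.normal(0.0, 0.02, size=(n_firms, est_days))
event_ar = rng.normal(0.0, 0.02 * np.sqrt(2.0), size=n_firms)

# Cross-sectional variance on the event day vs. its average over
# the non-event (estimation) period.
event_var = event_ar.var(ddof=1)
nonevent_var = est_ar.var(axis=0, ddof=1).mean()
variance_ratio = event_var / nonevent_var  # estimated increase in variability

# The unadjusted t-statistic relies on historical portfolio-return
# variability; the adjusted version inflates the standard error by
# sqrt(variance_ratio) to reflect event-period uncertainty.
port_ar = est_ar.mean(axis=0)      # portfolio abnormal-return series
hist_se = port_ar.std(ddof=1)      # historical std. dev. of portfolio AR
t_unadjusted = event_ar.mean() / hist_se
t_adjusted = t_unadjusted / np.sqrt(variance_ratio)
```

When the variance ratio exceeds one, the adjusted statistic is smaller in absolute value than the unadjusted one, undoing the overstatement of significance that historical variability alone would produce.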
3.4. Criteria for “reliable” event study tests
In using test statistics, errors of inference are of two types. A Type I error occurs when
the null hypothesis is falsely rejected. A Type II error occurs when the null is falsely
accepted. Accordingly, two key properties of event study tests have been investigated.
The first is whether the test statistic is correctly specified. A correctly-specified test
statistic yields a Type I error probability equal to the assumed size of the test. The second
concern is power, i.e., a test’s ability to detect abnormal performance when it is present.
Power can be measured as one minus the probability of a Type II error. Alternatively,
it can be measured as the probability that the null hypothesis will be rejected for a
given Type I error rate and a given level of abnormal performance. When comparing tests that are
well-specified, those with higher power are preferred.
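Both properties can be gauged by simulation: under the null, a well-specified test should reject at about the assumed size of the test, while the rejection rate after abnormal performance is induced measures power. A minimal sketch, in which the return-generating parameters and the 1% induced abnormal return are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_sims = 50, 2000
crit = 1.96  # two-sided 5% critical value from the standard normal

def reject_rate(abnormal=0.0):
    """Fraction of simulated samples in which a cross-sectional t-test
    rejects the null of zero mean abnormal return."""
    rejections = 0
    for _ in range(n_sims):
        ar = rng.normal(abnormal, 0.02, size=n_firms)
        t = ar.mean() / (ar.std(ddof=1) / np.sqrt(n_firms))
        rejections += abs(t) > crit
    return rejections / n_sims

size = reject_rate(0.0)    # should be close to the 5% test size
power = reject_rate(0.01)  # probability of detecting a 1% abnormal return
```

A rejection rate under the null far from 5% would indicate misspecification; among well-specified tests, the one with the higher rejection rate under induced abnormal performance is preferred.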
3.5. Determining specification and power
3.5.1. The joint-test problem
While the specification and power of a test can be statistically determined, economic
interpretation is not straightforward because all tests are joint tests. That is, event study
^6 Use of non-parametric tests of significance, as suggested in Corrado (1989), might also be effective in
performing well-specified tests in the presence of increased event-period uncertainty.