Quantile Regressions


The results for nine quantiles are presented in Table 7.1. The intercept
term reflects the return on the S&P 500 index when there are no dividends.
Not surprisingly, it increases monotonically across quantiles and is positive
and statistically significant at and above the 70th return percentile. But our
main interest is in how expected dividend yield influences returns. For
example, the effect of expected dividend yield at the 10th return percentile
is not statistically different from zero. The effect of dividend yield sharply
increases at the 20th return percentile: a 1% increase in expected dividend
yield is associated with a 17.35% increase in the annualized S&P 500 return.
However, despite remaining statistically significant, the effect of dividend
yield drops off at the 30th and 40th return quantiles. The coefficient of
dividend yield then grows monotonically, from 14.38 at the 50th return
percentile to 37.14 at the 90th return percentile.
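Quantile-by-quantile estimates of this kind come from fitting a separate quantile regression at each percentile. A minimal sketch using statsmodels' `quantreg` illustrates the mechanics; the simulated series and the variable names `ret` and `dy` are hypothetical stand-ins for the chapter's S&P 500 returns and expected dividend yields, not the actual data:

```python
# Sketch only: simulated data stand in for the chapter's S&P 500 series.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
dy = rng.uniform(1.0, 5.0, n)                      # hypothetical dividend yield (%)
# Heteroskedastic errors make the slope vary across quantiles.
ret = 5.0 + 10.0 * dy + (1.0 + 2.0 * dy) * rng.standard_normal(n)

df = pd.DataFrame({"ret": ret, "dy": dy})
model = smf.quantreg("ret ~ dy", df)

for tau in (0.1, 0.2, 0.5, 0.9):
    fit = model.fit(q=tau)
    print(f"tau={tau:.1f}  intercept={fit.params['Intercept']:7.2f}  "
          f"slope={fit.params['dy']:6.2f}")
```

Because the simulated error variance grows with the regressor, the fitted slope rises with the quantile, mirroring the pattern of coefficients that change across quantiles.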
Table 7.1 shows that the coefficients of the dividend yield change across
different quantiles. The question now is whether they are statistically
different. Koenker and Bassett proposed a Wald test to verify this across
the quantiles.^3 The null hypothesis that the slope coefficients are all the
same is given by


H₀: β₁τ₁ = β₁τ₂ = … = β₁τₖ

where β₁ is the slope coefficient on dividend yield and τ₁, …, τₖ are the
quantiles. The null hypothesis imposes k − 1 restrictions on the coefficients,
and a Wald statistic^4 is calculated that is asymptotically distributed as χ²
with degrees of freedom equal to the number of restrictions. In our example,
we have a total of eight restrictions (β₁τ₁ = β₁τ₂, β₁τ₂ = β₁τ₃, …,
β₁τ₈ = β₁τ₉). The calculated Wald test statistic is 42.40, and the critical χ²
value with eight degrees of freedom at the 5% significance level is 15.51.
Since the test statistic is greater than the critical value, we reject the null
hypothesis that the slope coefficients are the same across the quantiles.
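The rejection decision can be reproduced with scipy: the 5% critical value of a χ² distribution with eight degrees of freedom is 15.51, which the Wald statistic of 42.40 from the text clearly exceeds.

```python
# Reproduce the rejection decision for the cross-quantile Wald test.
from scipy import stats

wald_stat = 42.40            # test statistic reported in the text
restrictions = 8             # nine quantiles -> eight equality restrictions

critical = stats.chi2.ppf(0.95, df=restrictions)
print(f"5% critical value: {critical:.2f}")   # 15.51
print("reject H0:", wald_stat > critical)     # True
```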
Therefore, to use quantile regressions, we must first empirically verify
that the data are skewed with fat tails. Second, using the quantile process,
we must examine the relationship between the dependent and independent
variables at each quantile. Finally, we must statistically verify whether the
coefficients differ across the quantiles.
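The first of these steps, checking for skewness and fat tails, can be sketched as follows; the Student-t draws are a hypothetical stand-in for an actual return series:

```python
# Sketch of the pre-test for skewness and fat tails on a simulated return series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns = rng.standard_t(df=3, size=5000)   # heavy-tailed stand-in data

print(f"skewness:        {stats.skew(returns):.2f}")
print(f"excess kurtosis: {stats.kurtosis(returns):.2f}")  # > 0 signals fat tails

# Jarque-Bera formally tests normality using skewness and kurtosis jointly.
jb = stats.jarque_bera(returns)
print(f"Jarque-Bera p-value: {jb.pvalue:.4f}")  # small p rejects normality
```

Large excess kurtosis together with a near-zero Jarque-Bera p-value indicates a non-normal, fat-tailed distribution, which is the setting in which quantile regression adds value over ordinary least squares.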


(^3) Roger Koenker and Gilbert Bassett, "Tests of Linear Hypotheses and L1 Estimation," Econometrica 50 (1982): 1577–1584.
(^4) This statistic is provided by software packages such as SAS, R, and EViews. To calculate
the Wald statistic, a restricted regression model with the same βs across the quantiles is
estimated to obtain its sum of squared errors. This sum of squared errors is then compared
to the sum of squared errors of regressions in which the βs are allowed to vary across the
quantiles. If the difference between the two is not significantly different from zero, we will
be unable to reject the null hypothesis.
