and differentiate it repeatedly with respect to $\alpha$ (see section 5.12). Thus, we obtain
\begin{align*}
\frac{dI}{d\alpha} &= -\int_{-\infty}^{\infty} y^2 \exp(-\alpha y^2)\,dy = -\tfrac{1}{2}\pi^{1/2}\alpha^{-3/2},\\
\frac{d^2 I}{d\alpha^2} &= \int_{-\infty}^{\infty} y^4 \exp(-\alpha y^2)\,dy = \bigl(\tfrac{1}{2}\bigr)\bigl(\tfrac{3}{2}\bigr)\pi^{1/2}\alpha^{-5/2},\\
&\;\;\vdots\\
\frac{d^n I}{d\alpha^n} &= (-1)^n\int_{-\infty}^{\infty} y^{2n} \exp(-\alpha y^2)\,dy = (-1)^n\bigl(\tfrac{1}{2}\bigr)\bigl(\tfrac{3}{2}\bigr)\cdots\bigl(\tfrac{2n-1}{2}\bigr)\pi^{1/2}\alpha^{-(2n+1)/2}.
\end{align*}
Setting $\alpha = 1/(2\sigma^2)$ and substituting the above result into (30.55), we find (for $k$ even)
\[
\nu_k = \bigl(\tfrac{1}{2}\bigr)\bigl(\tfrac{3}{2}\bigr)\cdots\bigl(\tfrac{k-1}{2}\bigr)(2\sigma^2)^{k/2} = (1)(3)\cdots(k-1)\,\sigma^k.
\]
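This closed form can be checked numerically. The short Python sketch below (assuming NumPy and SciPy are available; the function names are merely illustrative) compares a direct quadrature of the Gaussian central moment with the product $(1)(3)\cdots(k-1)\,\sigma^k$ for a few even values of $k$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def gaussian_central_moment(k, sigma):
    """nu_k = E[(X - mu)^k] for a Gaussian, by direct quadrature."""
    integrand = lambda y: y**k * np.exp(-y**2 / (2.0 * sigma**2))
    value, _ = quad(integrand, -np.inf, np.inf)
    return value / np.sqrt(2.0 * np.pi * sigma**2)

def closed_form(k, sigma):
    """(1)(3)...(k-1) * sigma**k, valid for even k."""
    return np.prod(np.arange(1, k, 2)) * sigma**k

sigma = 1.7  # arbitrary illustrative value
for k in (2, 4, 6, 8):
    print(k, gaussian_central_moment(k, sigma), closed_form(k, sigma))
\end{verbatim}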
One may also characterise a probability distribution $f(x)$ using the closely related \emph{normalised} and dimensionless central moments
\[
\gamma_k \equiv \frac{\nu_k}{\nu_2^{k/2}} = \frac{\nu_k}{\sigma^k}.
\]
From this set, $\gamma_3$ and $\gamma_4$ are more commonly called, respectively, the \emph{skewness} and \emph{kurtosis} of the distribution. The skewness $\gamma_3$ of a distribution is zero if it is symmetrical about its mean. If the distribution is skewed to values of $x$ smaller than the mean then $\gamma_3 < 0$. Similarly, $\gamma_3 > 0$ if the distribution is skewed to higher values of $x$.
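This sign convention is easily seen in a sampling experiment. The sketch below, assuming NumPy is available (the distributions and sample sizes are arbitrary illustrative choices), estimates $\gamma_3$ from a symmetric Gaussian sample and from a right-skewed exponential sample.
\begin{verbatim}
import numpy as np

def sample_skewness(x):
    """Estimate gamma_3 = nu_3 / sigma**3 from a sample."""
    d = x - x.mean()
    return np.mean(d**3) / np.mean(d**2)**1.5

rng = np.random.default_rng(0)
symmetric = rng.normal(size=100_000)          # symmetric about its mean
right_skewed = rng.exponential(size=100_000)  # long tail towards larger x

print(sample_skewness(symmetric))     # close to 0
print(sample_skewness(right_skewed))  # positive (about 2 for the exponential)
\end{verbatim}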
From the above example, we see that the kurtosis of the Gaussian distribution (subsection 30.9.1) is given by
\[
\gamma_4 = \frac{\nu_4}{\nu_2^2} = \frac{3\sigma^4}{\sigma^4} = 3.
\]
It is therefore common practice to define the \emph{excess kurtosis} of a distribution as $\gamma_4 - 3$. A positive value of the excess kurtosis implies a relatively narrower peak and wider wings than the Gaussian distribution with the same mean and variance. A negative excess kurtosis implies a wider peak and shorter wings.
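Both cases can be illustrated numerically. The sketch below, again assuming NumPy (the comparison distributions are illustrative choices), estimates $\gamma_4 - 3$ for Gaussian, Laplace and uniform samples.
\begin{verbatim}
import numpy as np

def excess_kurtosis(x):
    """Estimate gamma_4 - 3 = nu_4 / nu_2**2 - 3 from a sample."""
    d = x - x.mean()
    return np.mean(d**4) / np.mean(d**2)**2 - 3.0

rng = np.random.default_rng(0)
print(excess_kurtosis(rng.normal(size=200_000)))   # ~ 0    (Gaussian)
print(excess_kurtosis(rng.laplace(size=200_000)))  # ~ +3   (sharper peak, longer tails)
print(excess_kurtosis(rng.uniform(size=200_000)))  # ~ -1.2 (flatter peak, no tails)
\end{verbatim}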
Finally, we note here that one can also describe a probability density function $f(x)$ in terms of its \emph{cumulants}, which are again related to the central moments. However, we defer the discussion of cumulants until subsection 30.7.4, since their definition is most easily understood in terms of generating functions.
30.6 Functions of random variables
Suppose $X$ is some random variable for which the probability density function $f(x)$ is known. In many cases, we are more interested in a related random variable $Y = Y(X)$, where $Y(X)$ is some function of $X$. What is the probability density