The Mathematics of Financial Modeling and Investment Management




Risk Management 747

than in investment banking. Many asset management firms consider the
occurrence of losses due to operational risk to be irrelevant.

RISK MEASURES


Risk is embodied in a probability distribution of returns or of possible
losses. From a management point of view it is useful to collapse this
probability distribution into a single number. The problem of measuring
risk with a single number has received much attention, even in contexts
other than finance.
Historically, the first measure of the risk contained in a distribution
is its variance, or the standard deviation, which is the square root of the
variance. The variance of a distribution indicates whether the
distribution is concentrated around some value or spread over a large
interval of values. If the standard deviation of a distribution is high,
there is a high probability that the variable will take values
significantly different from its mean. A high standard deviation
therefore corresponds to high risk. In the terminology of risk
management, the standard deviation represents the unexpected loss (UL).
Because risk is uncertainty (lack of information), the question of the
information conveyed by a probability distribution has led to the concept
of information and to Information Theory. In the case of finite
probabilities, information (I) in the sense of Information Theory is
defined as the probability-weighted average of the logarithms of the
probabilities pi:

I = ∑_{i=1}^{N} p_i log p_i

Information reaches its maximum when the probability is concen-
trated in only one outcome, that is, pi = 1 for i = k, pi = 0 for i ≠ k. In
this case information is zero, as the term for an outcome with
probability zero is conventionally set to zero. Information reaches its
minimum when all probabilities are equal, that is, when there is maxi-
mum uncertainty about the future outcome. In this case, with pi = 1/N,
information is negative: I = –log N. As N grows, there is no lower
bound to information.
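The two extreme cases above can be checked with a short sketch (the helper function here is an illustration, not from the text): a distribution concentrated on one outcome gives I = 0, while a uniform distribution over N outcomes gives I = –log N.

```python
# A minimal sketch of the finite-probability information measure
# I = sum_i p_i * log(p_i), with the convention 0 * log 0 = 0.
import math

def information(probs):
    """Information of a finite distribution; outcomes with p_i = 0 contribute 0."""
    return sum(p * math.log(p) for p in probs if p > 0)

# Concentrated on a single outcome: I attains its maximum, zero.
print(information([1.0, 0.0, 0.0]))   # 0.0

# Uniform over N outcomes: I attains its minimum, -log N.
N = 4
print(information([1.0 / N] * N))     # -log 4, about -1.386
```

Since every term p_i log p_i is non-positive, I is never greater than zero, which is why the concentrated case is the maximum.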
Information with the sign reversed is well known in statistical physics as
entropy, a measure of disorder: E = –I. The information associated
with an equiprobable binary scheme, that is, with the choice between
two equally probable outcomes, is called