where κ is the speed of mean reversion toward the long-run value μ, and the
variance is proportional to the level of the interest rate, r.
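As an illustration, the path-generation step for this square-root, mean-reverting rate process can be sketched with a simple Euler discretization; the parameter values below are purely illustrative, not taken from the text:

```python
import numpy as np

def simulate_rate_path(r0, kappa, mu, sigma, dt, n_steps, rng):
    """Euler discretization of a square-root mean-reverting short rate:
    dr = kappa * (mu - r) * dt + sigma * sqrt(r) * dW."""
    r = np.empty(n_steps + 1)
    r[0] = r0
    for t in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        # floor the rate at zero inside the diffusion term (full truncation)
        drift = kappa * (mu - r[t]) * dt
        diffusion = sigma * np.sqrt(max(r[t], 0.0)) * dw
        r[t + 1] = r[t] + drift + diffusion
    return r

# One year of daily steps, starting from a 3% rate.
rng = np.random.default_rng(42)
path = simulate_rate_path(r0=0.03, kappa=0.5, mu=0.04, sigma=0.1,
                          dt=1 / 252, n_steps=252, rng=rng)
```

The same function can be called once per trial inside the Monte Carlo loop described next.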
● Generate a path for each stochastic variable.
● Value the portfolio given the random variables as inputs, that is, mark-to-market.
● Repeat the generation of random paths and portfolio valuation, say 10,000 times.
● Compile a distribution of portfolio values, from which a two-standard-deviation VaR
(a 95% confidence band) can be measured.
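The steps above can be sketched as follows. The toy single-asset portfolio and the `portfolio_value_fn` / `simulate_shocks` names are our own illustrative assumptions, not part of any standard library:

```python
import numpy as np

def monte_carlo_var(portfolio_value_fn, simulate_shocks,
                    n_trials=10_000, n_sd=2.0, seed=0):
    """Repeat shock generation and portfolio revaluation n_trials times,
    then measure VaR as n_sd standard deviations of the value distribution."""
    rng = np.random.default_rng(seed)
    values = np.array([portfolio_value_fn(simulate_shocks(rng))
                       for _ in range(n_trials)])
    # A two-standard-deviation (95% band) VaR in currency terms.
    return n_sd * values.std(ddof=1)

# Toy example: one asset, one-period lognormal return shock.
S0, sigma = 100.0, 0.02
var_estimate = monte_carlo_var(
    portfolio_value_fn=lambda z: S0 * np.exp(sigma * z),  # mark-to-market
    simulate_shocks=lambda rng: rng.normal(),
)
```

For a real portfolio, `simulate_shocks` would return a full path for each stochastic variable and `portfolio_value_fn` would perform a full mark-to-market revaluation.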
As we discussed in K|V 4.1, portfolios generally contain more than one source of finan-
cial risk. The Monte Carlo methodology extends easily to the general multivariate
case, accounting for any number of risk sources. Some random variables may be corre-
lated, others uncorrelated. Where correlations exist, they must be built into the simulated
draws; this is most often done by Cholesky factorization of the correlation matrix.
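A minimal sketch of the Cholesky approach, assuming a two-variable case with an illustrative correlation of 0.6:

```python
import numpy as np

# Target correlation matrix for two risk factors (illustrative).
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

# Cholesky factor L satisfies corr = L @ L.T.
L = np.linalg.cholesky(corr)

# Independent standard normal draws, one row per trial.
rng = np.random.default_rng(1)
z = rng.standard_normal((100_000, 2))

# Multiplying by L.T imposes the target correlation on each row.
correlated = z @ L.T
sample_corr = np.corrcoef(correlated, rowvar=False)[0, 1]
```

The same factorization applies to any number of risk sources, provided the correlation matrix is positive definite.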
Monte Carlo simulation is by far the most powerful method of calculating Value at Risk.
First and foremost, it can consider any number of financial risks: nonlinear price risk,
volatility risk, model risk, credit risk, time-varying volatility, nonnormal distributions and
fat tails, implied parameters, and extreme or user-defined scenarios. The only drawback
is the computational cost: Monte Carlo is expensive in terms of both time and money. When
portfolios contain complex derivative instruments, full portfolio valuation can take a long
time and a lot of technological and intellectual horsepower. We recommend
that these calculations be done using standard, existing internal systems or stand-alone,
commercial off-the-shelf (COTS) systems. The reason for this recommendation is twofold:
- The calculations in these systems should be industry standard, not proprietary.
- The time to build these complex systems can be measured in years. To get a trading
system up and running quickly, a firm should not spend six months to a year building
an attribution system and/or a VaR system. (As with all vendor components, the
organization must confirm a COTS package's ability to satisfy the intended applica-
tion when used in the monitoring and measuring processes. This confirmation must
be accomplished prior to initial use and reconfirmed as necessary.)
We do recommend that you program a custom database that stores all the inputs and
outputs of these systems, so that the output data can be graphed and monitored using
statistical process control (SPC), and data-mined during the kaizen process. Proper storage
and statistical analysis of these results should allow a system to beat any competitor that
does not store and statistically analyze its performance output data for continuous improvement.
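A minimal sketch of such a results store with SPC-style control limits, using an in-memory SQLite table and synthetic VaR figures; the schema and the numbers are our own illustrative assumptions:

```python
import sqlite3
import statistics

# Hypothetical schema: one row per daily VaR run.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE var_runs (run_date TEXT, var_estimate REAL)")

# Synthetic daily VaR outputs around a $1,000,000 center (illustrative only).
daily_vars = [(f"2024-01-{d:02d}", 1.0e6 + 5e4 * ((d * 7) % 5 - 2))
              for d in range(1, 21)]
conn.executemany("INSERT INTO var_runs VALUES (?, ?)", daily_vars)

# SPC-style 3-sigma control limits on the stored VaR series.
values = [row[0] for row in conn.execute("SELECT var_estimate FROM var_runs")]
center = statistics.mean(values)
sd = statistics.stdev(values)
ucl, lcl = center + 3 * sd, center - 3 * sd
out_of_control = [v for v in values if not lcl <= v <= ucl]
```

In production the same idea would apply to a persistent database, with points outside the control limits flagged for kaizen review.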
We recommend that you adopt a VaR framework not only to view risk, but also to
have a tool that accurately measures “what-if” scenarios. This what-if analysis becomes
a key tool of the kaizen process. The kaizen team can make changes to the portfolio con-
struction and rerun the Value at Risk calculation. The results will either confirm or disprove
the hypothesized root cause and allow the kaizen team to move forward with scenario analysis.
27.4. Summary
While many, mostly small, trading groups ignore risk metrics or perform only the minimum
calculations, we recommend that all firms embrace real-time performance and risk metrics.
Firms should view risk and performance as a continuum, from single-point estimates to
beating a benchmark to what-if scenarios.