Mathematical Modeling in Finance with Stochastic Processes


Chapter 3. First Step Analysis for Stochastic Processes


Consider the coin-tossing game with transition probabilities:

P[T_{t+1} = x + 1 | T_t = x] = p
P[T_{t+1} = x − 1 | T_t = x] = q
P[T_{t+1} = y | T_t = x] = 0 for all y ≠ x + 1, x − 1

The most important property of a Markov process is that the probability of
being in the next state is completely determined by the current state, not by
the history of how the process arrived at the current state. In that sense, we
often say that a Markov process is memoryless.
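The one-step transition rule above is easy to simulate directly. The following is a minimal sketch (the function names `step` and `simulate` are illustrative, not from the text) that also checks empirically that the fraction of up-moves approaches p:

```python
import random

def step(x, p):
    """One step of the coin-tossing walk: up 1 with probability p, down 1 otherwise."""
    return x + 1 if random.random() < p else x - 1

def simulate(x0, p, n_steps):
    """Simulate the walk T_t starting from x0 for n_steps steps; return the path."""
    path = [x0]
    for _ in range(n_steps):
        path.append(step(path[-1], p))
    return path

random.seed(0)
p = 0.6
path = simulate(0, p, 100_000)

# Count up-moves; the empirical frequency should be close to p.
ups = sum(1 for a, b in zip(path, path[1:]) if b == a + 1)
print(ups / (len(path) - 1))  # close to p = 0.6
```

Note that only the current position is used to generate the next step, which is exactly the memoryless (Markov) property described above.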
We can also note that the fair coin-tossing game with p = 1/2 = q is a
martingale. That is, the expected value of the process at the next step is the
current value. Using expectation for estimation, the best estimate we have
of the gambler's fortune at the next step is the current fortune:


E[T_{n+1} | T_n = x] = (x + 1)(1/2) + (x − 1)(1/2) = x.

This characterizes a fair game: after the next step, one can expect to be
neither richer nor poorer. Note that the coin-tossing games with p ≠ q do not
have this property.
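The one-step expectation above can be checked in a few lines of code; a small sketch (the helper `expected_next` is hypothetical, for illustration):

```python
def expected_next(x, p, q):
    """One-step conditional expectation E[T_{n+1} | T_n = x] for the coin-tossing game."""
    return (x + 1) * p + (x - 1) * q

# Fair game (p = q = 1/2): the expected next fortune equals the current fortune.
print(expected_next(10, 0.5, 0.5))  # 10.0

# Biased game (p != q): the martingale property fails; the gambler expects
# a net drift of p - q per step.
print(expected_next(10, 0.6, 0.4))  # approximately 10.2
```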
In later sections we will have more occasion to study the properties of
martingales, and to a lesser degree Markov processes.


Sources


This section is adapted from W. Feller, An Introduction to Probability Theory
and Its Applications, Volume I, Chapter XIV, page 342, [15]. Some material is
adapted from [49] and [28]. Steele has an excellent discussion at about the
same level as I have done it here, but with a slightly more rigorous approach
to solving the difference equations. He also gives more information about the
fact that the duration is almost surely finite, showing that all moments of the
duration are finite. Karlin and Taylor give a treatment of the ruin problem by
direct application of Markov chain analysis, which is not essentially different,
but points to greater generality.


Problems to Work for Understanding



  1. Show that the sequence [(q/p)^n − 1]/n is an increasing sequence for
     0 < p < 1/2 < q < 1.
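As a numerical sanity check of the claim in Problem 1 (not a proof), one can tabulate the sequence for a sample parameter value; a sketch assuming p = 0.3:

```python
def a_n(n, p):
    """The n-th term [(q/p)^n - 1]/n with q = 1 - p."""
    q = 1 - p
    return ((q / p) ** n - 1) / n

p = 0.3  # any 0 < p < 1/2, so that q/p > 1
terms = [a_n(n, p) for n in range(1, 21)]

# Check that consecutive terms are strictly increasing.
print(all(later > earlier for earlier, later in zip(terms, terms[1:])))  # True
```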
