Springer Finance

3 Brownian Motion

Remark 3.4.4. In the proof above, we derived the equations (3.4.6) and (3.4.7):
\[
\mathbb{E}\bigl[(W(t_{j+1}) - W(t_j))^2\bigr] = t_{j+1} - t_j
\]
and
\[
\operatorname{Var}\bigl[(W(t_{j+1}) - W(t_j))^2\bigr] = 2(t_{j+1} - t_j)^2.
\]
It is tempting to argue that when $t_{j+1} - t_j$ is small, $(t_{j+1} - t_j)^2$ is very small, and therefore $(W(t_{j+1}) - W(t_j))^2$, although random, is with high probability near its mean $t_{j+1} - t_j$. We could therefore claim that
\[
(W(t_{j+1}) - W(t_j))^2 \approx t_{j+1} - t_j. \tag{3.4.8}
\]

This approximation is trivially true because, when $t_{j+1} - t_j$ is small, both sides are near zero. It would also be true if we squared the right-hand side, multiplied the right-hand side by 2, or made any of several other significant changes to the right-hand side. In other words, (3.4.8) really has no content.
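A small simulation (an illustration added here, not part of the text) makes the emptiness of (3.4.8) concrete: as the step size shrinks, the squared increment is close to $t_{j+1} - t_j$, but it is just as close to twice that quantity, simply because all of these numbers are near zero.

```python
import numpy as np

# Illustration: (W(t+dt) - W(t))^2 "approximates" dt, but it equally
# well "approximates" 2*dt, because both errors shrink to zero with dt.
rng = np.random.default_rng(2)
for dt in (1e-2, 1e-4, 1e-6):
    # Brownian increments over a step of length dt are N(0, dt).
    sq = rng.normal(0.0, np.sqrt(dt), size=100_000) ** 2
    print(f"dt={dt:g}  mean|sq - dt|={np.mean(np.abs(sq - dt)):.2e}  "
          f"mean|sq - 2*dt|={np.mean(np.abs(sq - 2 * dt)):.2e}")
```

Both error columns are of order $dt$, so smallness of the error tells us nothing about which right-hand side is the "correct" one.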
A better way to try to capture what we think is going on is to write
\[
\frac{(W(t_{j+1}) - W(t_j))^2}{t_{j+1} - t_j} \approx 1 \tag{3.4.9}
\]
instead of (3.4.8). However,
\[
\frac{(W(t_{j+1}) - W(t_j))^2}{t_{j+1} - t_j}
\]
is in fact not near 1, regardless of how small we make $t_{j+1} - t_j$. It is the square of the standard normal random variable
\[
Y_{j+1} = \frac{W(t_{j+1}) - W(t_j)}{\sqrt{t_{j+1} - t_j}},
\]
and its distribution is the same, no matter how small we make $t_{j+1} - t_j$.
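The scale-invariance of this ratio can be checked numerically (a sketch added for illustration, not part of the text): since the ratio is the square of a standard normal, its mean is 1 and its variance is 2 for every step size, however small.

```python
import numpy as np

# The ratio (W(t_{j+1}) - W(t_j))^2 / (t_{j+1} - t_j) is the square of
# a standard normal, so its distribution does not depend on the step
# size dt: sample mean ~ 1 and sample variance ~ 2 for every dt.
rng = np.random.default_rng(0)
for dt in (1.0, 1e-2, 1e-6):
    increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=1_000_000)
    ratio = increments**2 / dt
    print(f"dt={dt:<8g} mean={ratio.mean():.3f} var={ratio.var():.3f}")
```

Shrinking $dt$ changes nothing: the ratio never concentrates near 1, exactly as the remark asserts.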
To understand better the idea behind Theorem 3.4.3, we choose a large value of $n$ and take $t_j = \frac{jT}{n}$, $j = 0, 1, \dots, n$. Then $t_{j+1} - t_j = \frac{T}{n}$ for all $j$ and
\[
\sum_{j=0}^{n-1} \bigl(W(t_{j+1}) - W(t_j)\bigr)^2 = \sum_{j=0}^{n-1} Y_{j+1}^2 \cdot \frac{T}{n} = T \cdot \frac{1}{n} \sum_{j=0}^{n-1} Y_{j+1}^2.
\]
Since the random variables $Y_1, Y_2, \dots, Y_n$ are independent and identically distributed, the Law of Large Numbers implies that $\frac{1}{n} \sum_{j=0}^{n-1} Y_{j+1}^2$ converges to the common mean $\mathbb{E}\bigl[Y_{j+1}^2\bigr]$ as $n \to \infty$. This mean is 1, and hence $\sum_{j=0}^{n-1} \bigl(W(t_{j+1}) - W(t_j)\bigr)^2$ converges to $T$. Each of the terms $\bigl(W(t_{j+1}) - W(t_j)\bigr)^2$ in this sum can be quite different from its mean $t_{j+1} - t_j = \frac{T}{n}$, but when we sum many terms like this, the differences average out to zero.
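The averaging argument above can be sketched in a few lines of simulation (an illustration, not part of the text; the horizon $T = 2$ is an arbitrary choice): each squared increment fluctuates wildly relative to its mean $T/n$, yet the sum over all $n$ increments settles near $T$ as $n$ grows, by the Law of Large Numbers.

```python
import numpy as np

# With t_j = jT/n, each increment W(t_{j+1}) - W(t_j) is N(0, T/n).
# The sum of the n squared increments converges to T as n -> infinity,
# even though no individual term is close to its mean T/n.
rng = np.random.default_rng(1)
T = 2.0  # arbitrary horizon chosen for the illustration
for n in (10, 1_000, 100_000):
    dt = T / n
    increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n)
    print(f"n={n:<7d} sum of squared increments = "
          f"{np.sum(increments**2):.4f}  (T = {T})")
```

For small $n$ the sum is noticeably random; for large $n$ it is pinned near $T$, which is the content of Theorem 3.4.3.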
