Computational Physics


The Monte Carlo method


from uncorrelated chains – it is defined in terms of the transition probability $T(X \to X')$ for having the object $X'$ succeeding object $X$ in the sequence. The probability of having a sequence of objects $X_i$ then becomes

$$P_N(X_1, X_2, \ldots, X_N) = P_1(X_1)\,T(X_1 \to X_2)\,T(X_2 \to X_3)\cdots T(X_{N-1} \to X_N). \tag{10.10}$$

The transition probabilities $T(X \to X')$ are normalised:

$$\sum_{X'} T(X \to X') = 1. \tag{10.11}$$

As an example, consider a random walk on a two-dimensional square lattice. At
every step, the random walker can jump from a point to each of its four nearest
neighbours with equal probabilities 1/4. This probability is, however, independent
of how the walker got there, that is, which path he followed in order to arrive at the
present point. An example of a non-Markovian (but correlated) sequence is given
by the self-avoiding random walk in which the walker is not allowed to visit a site
that has been visited in the past. Therefore, the probability for being at a specific
site depends not just on the last position but on the full history of the walk.
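The Markov property of the lattice walk is visible directly in code: the next site is drawn from the current site alone, with no reference to the path so far. A short sketch (names our own):

```python
import random

def random_walk_2d(n_steps, seed=0):
    """Markov-chain random walk on a 2D square lattice: each step jumps
    to one of the four nearest neighbours with probability 1/4,
    independently of how the walker arrived at the current site."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(n_steps):
        dx, dy = rng.choice(moves)  # transition depends only on the current site
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = random_walk_2d(1000)
```

A self-avoiding walk, by contrast, would have to inspect the whole of `path` before each step, which is precisely what makes it non-Markovian.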
We want to generate a Markov chain of system configurations such that they have a distribution proportional to $\exp[-\beta E(X)]$, and this distribution should be independent of the position within the chain and independent of the initial configuration. Under certain conditions, a Markov chain does indeed yield such an invariant distribution, at least for long times, as it needs some time to ‘forget’ the chosen initial distribution. We shall not go into details, nor give proofs, but summarise these conditions. They are: (i) every configuration which we want to be included in the ensemble should be accessible from every other configuration within a finite number of steps (this property is called connectedness, or irreducibility) and (ii) there should be no periodicity. Periodicity means that after visiting a particular configuration, it is not possible to return to the same configuration except after $t = nk$ steps, $n = 1, 2, 3, \ldots$, where $k > 1$ is fixed. A Markov chain that satisfies these criteria is called ergodic.
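For a finite chain, both conditions can be tested numerically on the transition matrix. The following sketch (assuming the probabilities are stored as a row-stochastic matrix `T[i, j]` $= T(X_i \to X_j)$; the helper name is our own) checks irreducibility via reachability and aperiodicity via the greatest common divisor of return times:

```python
import numpy as np
from math import gcd
from functools import reduce

def is_ergodic(T):
    """Sketch: test a finite stochastic matrix T[i, j] = T(X_i -> X_j)
    for irreducibility and aperiodicity (together: ergodicity)."""
    n = T.shape[0]
    # (i) Irreducibility: (I + T)^(n-1) has no zero entry exactly when
    # every state is reachable from every other in at most n-1 steps.
    reach = np.linalg.matrix_power(np.eye(n) + T, n - 1)
    irreducible = bool(np.all(reach > 0))
    # (ii) Aperiodicity: the period of a state is the gcd of all step
    # counts t with (T^t)[0, 0] > 0; for an irreducible chain all
    # states share the same period, so checking state 0 suffices.
    return_times = [t for t in range(1, 2 * n + 1)
                    if np.linalg.matrix_power(T, t)[0, 0] > 0]
    aperiodic = bool(return_times) and reduce(gcd, return_times) == 1
    return irreducible and aperiodic
```

For example, the two-state chain that deterministically swaps states has period $k = 2$ and fails the test, while a chain that stays put with nonzero probability passes.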
The Metropolis Monte Carlo method consists of generating a Markov chain of configurations with the required invariant distribution, which in our case is the Boltzmann distribution $\exp(-\beta H)$. We must therefore find a transition probability $T(X \to X')$ which leads to a given stationary distribution $\rho(X)$ (to keep the following analysis general, we use the arbitrary positive function $\rho(X)$ rather than specify this to be the Boltzmann distribution). The number of possible configurations, $N$, is assumed to be finite as the computer memory in which they are to be stored is finite. The ‘matrix’ $T(X \to X')$ therefore has $N^2$ elements, as opposed to only $N$ elements of the ‘vector’ $\rho(X)$. This means that there are many different solutions to this problem and we are allowed some freedom in finding one.
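One standard choice with this property is the Metropolis acceptance rule, shown here as a preview sketch rather than the text's own derivation: with a symmetric trial move $X \to X'$, accepting with probability $\min(1, \rho(X')/\rho(X))$ yields a chain whose stationary distribution is proportional to $\rho$. All names and the example weights below are our own:

```python
import random
import math

def metropolis_chain(rho, propose, x0, n_steps, seed=0):
    """Generate a Markov chain with stationary distribution
    proportional to the positive weight function rho(x), using a
    symmetric proposal and the Metropolis acceptance probability
    min(1, rho(x') / rho(x))."""
    rng = random.Random(seed)
    x = x0
    chain = [x]
    for _ in range(n_steps):
        x_new = propose(x, rng)
        if rng.random() < min(1.0, rho(x_new) / rho(x)):
            x = x_new  # accept the trial move; otherwise keep x
        chain.append(x)
    return chain

# Illustrative Boltzmann weights rho(x) = exp(-beta * E(x)) on 5 states.
beta, energies = 1.0, [0.0, 0.5, 1.0, 1.5, 2.0]
rho = lambda x: math.exp(-beta * energies[x])
propose = lambda x, rng: (x + rng.choice([-1, 1])) % 5  # symmetric move on a ring
chain = metropolis_chain(rho, propose, 0, 50000)
```

In a long run, the low-energy states are visited most often, in proportion to their Boltzmann weights.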
