Chapter 12
Random walks and the Metropolis algorithm
The way that can be spoken of is not the constant way. (Tao Te Ching, Book I, I.1)
Lao Tzu
Abstract We present the theory of random walks and Markov chains, and introduce the
Metropolis algorithm.
12.1 Motivation
In the previous chapter we discussed technical aspects of Monte Carlo integration such as
algorithms for generating random numbers and the evaluation of multidimensional integrals.
The latter served to illustrate two key aspects of Monte Carlo simulations, namely a
proper selection of variables and importance sampling. An intelligent selection of variables,
good sampling techniques and guiding functions can be crucial for the outcome of our Monte
Carlo simulations. Examples of this will be demonstrated in the chapters on statistical and
quantum physics applications. Here we make a detour from this main area of applications.
The focus is on diffusion and random walks. Furthermore, we will use these topics to derive
the famous Metropolis algorithm.
The rationale for this is that the tricky part of an actual Monte Carlo simulation resides in
the appropriate selection of random states, and thereby random numbers, according to the
probability distribution function (PDF) at hand.
Suppose our PDF is given by the well-known normal distribution. Think of for example the
velocity distribution of an ideal gas in a container. In our simulations we could then accept or
reject new moves with a probability proportional to the normal distribution. This would par-
allel our example of the six-dimensional integral in the previous chapter. However, in this
case we would end up rejecting basically all moves since the probabilities are exponentially
small in most cases. The result would be that we barely moved from the initial position. Our
statistical averages would then be significantly biased and most likely not very reliable.
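The following minimal sketch (our own illustration, not taken from the text) makes this
concrete: trial velocities are drawn uniformly from a broad interval and accepted with
probability proportional to the normal distribution. The values of sigma, vmax and the
number of trials are arbitrary choices for the illustration only.

// Sketch: accept/reject trial moves with probability proportional to
// the normal PDF. Most proposals land in the tails and are rejected.
#include <cmath>
#include <cstdio>
#include <random>

int main()
{
  const double sigma  = 1.0;      // width of the normal distribution
  const double vmax   = 100.0;    // trial velocities drawn from [-vmax, vmax]
  const int    trials = 1000000;

  std::mt19937 gen(12345);
  std::uniform_real_distribution<double> proposal(-vmax, vmax);
  std::uniform_real_distribution<double> uniform01(0.0, 1.0);

  int accepted = 0;
  for (int i = 0; i < trials; i++) {
    double v = proposal(gen);
    // accept with probability proportional to the normal PDF
    if (uniform01(gen) <= std::exp(-v*v/(2.0*sigma*sigma))) accepted++;
  }
  printf("acceptance rate = %g\n", (double) accepted/trials);
  return 0;
}

For sigma = 1 and vmax = 100 the mean acceptance rate is roughly sqrt(2*pi)*sigma/(2*vmax),
about one percent, and it shrinks further as the sampled region grows; a smarter way of
proposing new states is clearly needed.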
Instead, all Monte Carlo schemes used are based on Markov processes in order to generate
new random states. A Markov process is a random walk with a selected probability for making
a move. The new move is independent of the previous history of the system. The Markov
process is used repeatedly in Monte Carlo simulations in order to generate new random
states. The reason for choosing a Markov process is that when it is run for a long enough time
starting with a random state, we will eventually reach the most likely state of the system. In
thermodynamics, this means that after a certain number of Markov steps we reach an
equilibrium distribution. This mimics the way a real system reaches its most likely state at
a given temperature of the surroundings.
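As an elementary example of a Markov process, consider the one-dimensional random walker
sketched below (again our own illustration): at every step the walker moves left or right
with equal probability, and the choice depends only on the present position, never on the
path that led there.

// Sketch: a one-dimensional random walk as a Markov process.
#include <cstdio>
#include <random>

int main()
{
  const int nsteps = 1000;
  std::mt19937 gen(12345);
  std::uniform_real_distribution<double> uniform01(0.0, 1.0);

  int position = 0;  // start the walker at the origin
  for (int step = 0; step < nsteps; step++) {
    // step left or right with probability 1/2 each; the move is
    // independent of the walker's history (the Markov property)
    if (uniform01(gen) < 0.5) position++;
    else                      position--;
  }
  printf("position after %d steps: %d\n", nsteps, position);
  return 0;
}

Releasing many such walkers from arbitrary starting points, their density spreads
diffusively; this connection between random walks and diffusion is developed in the
sections that follow.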