Computational Physics

Appendix A


Stability

As noted above, some differential equations are susceptible to instabilities in the
numerical solution. For first-order, one-dimensional equations, taking a small time
step usually guarantees stability, but in higher dimensions or for higher orders it is
not so easy to avoid instabilities. The reason is that such equations in general have
more than one independent solution; the initial values determine how these are
combined into the actual solution. It often happens that the solution we are after
is a damped one (in the direction in which we integrate), but that the equation also
has a growing solution, which will mix into our numerical solution and spoil the
latter as it grows, just as in the recursion problems discussed in Appendix A2. If
possible, one should integrate in the other direction with the appropriate initial
conditions, so that the required solution is the growing one; instabilities are then
absent.
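This effect can be made visible in a small sketch (the equation, step size and the size of the seeded contamination are illustrative choices, not taken from the text). The equation y'' = y has a damped solution e^{-t} and a growing one e^{t}; a tiny admixture of the growing mode, standing in for rounding error, eventually dominates the forward integration, while integrating in the other direction is harmless:

```python
import numpy as np

def euler(f, y, t, h, n):
    """Forward Euler for a first-order system y' = f(y, t)."""
    y = np.array(y, dtype=float)
    for _ in range(n):
        y = y + h * f(y, t)
        t += h
    return y

# y'' = y written as a first-order system (y[0] = y, y[1] = y').
# We want the damped solution e^{-t}, i.e. y(0) = 1, y'(0) = -1.
f = lambda y, t: np.array([y[1], y[0]])

# A tiny admixture (1e-10) of the growing mode e^{t} stands in for the
# rounding errors that inevitably contaminate a numerical solution.
eps = 1e-10
y_fwd = euler(f, [1.0 + eps, -1.0 + eps], 0.0, 1e-3, 30000)   # to t = 30
# The exact damped solution is e^{-30} ~ 1e-13, but y_fwd[0] has grown to
# order 1e3: the growing mode has completely taken over.

# Integrating backward instead, the wanted solution is the growing one,
# and the same contamination is damped away.
y_bwd = euler(f, [np.exp(-30.0) + eps, -np.exp(-30.0) + eps],
              30.0, -1e-3, 30000)
# y_bwd[0] recovers the value 1 at t = 0 up to the O(h) Euler error.
```
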
Sometimes, however, the problem cannot be solved so easily, and this is particu-
larly the case when the two solutions have very different time scales. There might,
for example, be a combination of a fast and a slow oscillation, where the time scale
of the fast one determines the time step for which the integration is stable. More
tricky is the existence of a very strongly damped term in the solution, which is
easily overlooked (because it damps out so quickly) but may spoil the numerical
solution if the time step used is not adapted to it. Equations that suffer from this
type of problem are called ‘stiff equations’; for more details about stiff equations
see Ref. [4].
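The effect of such a hidden, strongly damped term can be seen in a minimal sketch (the equation and the step sizes are illustrative assumptions, not taken from the text). Through x(0) = 1 the exact solution below is simply cos t, yet the equation admits a transient e^{-1000 t} that forces the forward Euler step below 2/1000 even though the wanted solution is slow:

```python
import numpy as np

def euler(f, x, t, h, n):
    """Forward Euler for a scalar equation x' = f(x, t)."""
    for _ in range(n):
        x = x + h * f(x, t)
        t += h
    return x

# Illustrative stiff equation: x' = -1000 (x - cos t) - sin t.
# The solution through x(0) = 1 is exactly x(t) = cos t, but the general
# solution contains a strongly damped term C e^{-1000 t}.
f = lambda x, t: -1000.0 * (x - np.cos(t)) - np.sin(t)

x_fine   = euler(f, 1.0, 0.0, 0.001, 1000)  # h < 2/1000: stable, x ~ cos(1)
x_coarse = euler(f, 1.0, 0.0, 0.004, 250)   # same final time t = 1: blows up
```

Even though the transient is invisible in the exact solution, the coarse step amplifies any deviation from it by a factor |1 - 1000h| = 3 per step, and the integration explodes.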


The Runge–Kutta method

This is a frequently used method for the solution of ordinary differential equations.
Like most methods for solving differential equations, the Runge–Kutta method can
be considered as a step-wise improvement of Euler's forward method. The latter
predicts the solution of the differential equation (A.33): given x(0), the value x(t)
for t = h is given by

x(h) = x(0) + hf[x(0), 0] + O(h^2).                                  (A.36)

It is clear that if we use Euler's rule halfway across the interval (0, h) to estimate
x(h/2), and evaluate f(x, t) for this value of x and t = h/2, we obtain a better
estimate for x(h):


k1 = hf[x(0), 0]

k2 = hf[x(0) + (1/2) k1, h/2]

x(h) = x(0) + k2 + O(h^3).                                           (A.37)
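The scheme (A.37) is straightforward to code. A minimal sketch in Python (the function name is of course arbitrary; for a step starting at a general time t rather than 0, the midpoint is evaluated at t + h/2):

```python
import math

def rk2_step(f, x, t, h):
    """One step of the midpoint rule, Eq. (A.37): compute k1, k2, return x + k2."""
    k1 = h * f(x, t)
    k2 = h * f(x + 0.5 * k1, t + 0.5 * h)
    return x + k2

# Example: x' = -x with x(0) = 1; the exact solution is e^{-t}.
x, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    x = rk2_step(lambda x, t: -x, x, t, h)
    t += h
# x approximates e^{-1} = 0.36788 to within the O(h^2) global error (~7e-4 here)
```

Repeating the step n times advances the solution to t = nh; halving h reduces the error by roughly a factor of four, as expected for a second-order method.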