! Now we read from screen the variable int2
WRITE(*,*)'Read in the number to be exponentiated'
READ(*,*) int2
int1=2**int2
WRITE(*,*)'2^N*2^N', int1*int1
int3=int1-1
WRITE(*,*)'2^N*(2^N-1)', int1*int3
WRITE(*,*)'2^N-1', int3
END PROGRAM integer_exp
In Fortran, modulus division is performed by the intrinsic function MOD(number,2) in the case of a division by 2. The exponentiation of a number is written for example as 2**N, instead of a call to the pow function as in C++.
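A minimal sketch of how these two intrinsics are used could look as follows (the program name and the value of n are chosen here only for illustration):

PROGRAM mod_and_power
  IMPLICIT NONE
  INTEGER :: n
  n = 10
  ! MOD returns the remainder of the integer division, here n divided by 2
  WRITE(*,*) 'MOD(n,2) =', MOD(n,2)
  ! Exponentiation is written with the ** operator, no function call is needed
  WRITE(*,*) '2**n     =', 2**n
END PROGRAM mod_and_power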
2.3 Real Numbers and Numerical Precision
An important aspect of computational physics is the numerical precision involved. To design a
good algorithm, one needs to have a basic understanding of propagation of inaccuracies and
errors involved in calculations. There is no magic recipe for dealing with underflow, overflow,
accumulation of errors and loss of precision, and only a careful analysis of the functions
involved can save one from serious problems.
Since we are interested in the precision of numerical calculations, we need to understand how computers represent real and integer numbers. Most computers deal with real numbers in the binary system, or in octal and hexadecimal, in contrast to the decimal system that we humans prefer to use. The binary system uses 2 as the base, in much the same way that the decimal system uses 10. Since the typical computer communicates with us in the decimal system, but works internally in, e.g., the binary system, conversion procedures must be executed by the computer, and these conversions hopefully involve only small roundoff errors.
Computers are also not able to operate on real numbers expressed with more than a fixed number of digits, and the set of values that can be represented is only a subset of the mathematical integers or real numbers. The so-called word length we reserve for a given number places a restriction on the precision with which that number is represented. This means, in turn, that for example floating-point numbers are always rounded to a machine-dependent precision, typically with 6-15 leading digits to the right of the decimal point. Furthermore, each such set of values has a processor-dependent smallest negative and a largest positive value.
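These machine-dependent limits can be queried directly with the standard Fortran inquiry functions PRECISION, EPSILON, TINY and HUGE; the following small program (a sketch added for illustration, not one of the listings above) prints them for single and double precision:

PROGRAM machine_limits
  IMPLICIT NONE
  REAL :: x32
  DOUBLE PRECISION :: x64
  ! Number of significant decimal digits for each kind
  WRITE(*,*) 'decimal digits (single, double): ', PRECISION(x32), PRECISION(x64)
  ! Distance between 1.0 and the next representable number
  WRITE(*,*) 'machine epsilon (single, double): ', EPSILON(x32), EPSILON(x64)
  ! Smallest positive and largest representable numbers for each kind
  WRITE(*,*) 'single precision TINY, HUGE: ', TINY(x32), HUGE(x32)
  WRITE(*,*) 'double precision TINY, HUGE: ', TINY(x64), HUGE(x64)
END PROGRAM machine_limits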
Why should we care at all about rounding and machine precision? The best way to see why is to consider a simple example first. In the following example we assume that we can represent a floating-point number with a precision of only five digits to the right of the decimal point. This is nothing but a choice of ours, but it mimics the way numbers are represented in the machine.
Suppose we wish to evaluate the function
\[
 f(x)=\frac{1-\cos(x)}{\sin(x)},
\]
for small values of x. If we multiply both the numerator and the denominator by 1 + cos(x), we obtain the equivalent expression
\[
 f(x)=\frac{\sin(x)}{1+\cos(x)}.
\]
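Before working through the arithmetic with our five-digit representation, it may help to see the effect on an actual machine. The short program below (a sketch added for illustration, not a listing from the text) evaluates both forms in single precision; the direct form loses digits because 1 - cos(x) subtracts two nearly equal numbers:

PROGRAM cancellation
  IMPLICIT NONE
  REAL :: x, direct, rewritten
  x = 0.007   ! a small angle in radians
  ! Direct evaluation: 1 - cos(x) suffers from subtractive cancellation
  direct = (1.0 - COS(x))/SIN(x)
  ! The rewritten form avoids subtracting two nearly equal numbers
  rewritten = SIN(x)/(1.0 + COS(x))
  WRITE(*,*) 'direct form:    ', direct
  WRITE(*,*) 'rewritten form: ', rewritten
END PROGRAM cancellation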
If we now choose x = 0.007 (in radians), our choice of precision results in
\[
 \sin(0.007)\approx 0.69999\times 10^{-2},
\]