compilers would take over this job from us, but unfortunately we are still far from
this utopia. Various programming paradigms are in use for parallel architectures.
First of all, we have data-parallel programming versus message-passing
programming. The first is the natural option for shared memory systems,
although in principle it is not restricted to this type of architecture.
In data-parallel programming, all the data are declared in a single program, and
at run time these data must either be allocated in the shared memory or suitably
distributed over the local memories, in which case the compiler should organise
this process. For example, if we want to manipulate a vector a[N], we declare the
full vector in our program, and this is either allocated as a vector in the shared
memory or chopped into segments which are allocated to the local memories
of the processors involved. The message-passing model, on the other hand, is geared
towards distributed memory machines. Each processor runs a program (the programs are
either all the same or all different, according to whether we are dealing with a SIMD or
a MIMD machine) and the data are allocated locally by that program. This means
that if the program starts with the declaration of a real variable a, this variable is
allocated at each node; together these variables may form a vector, as the
message-passing sketch following the Fortran 90 example below illustrates.
As an example we consider the problem in which we declare the vector a[N] and
initialise this to a[i]=i, i=1,...,N. Then we calculate:


FOR i = 1 TO N DO
   a[i] = a[i] + a[((i + N - 2) MOD N) + 1];
END DO

In Fortran 90, in the data-parallel model, this would read:


INTEGER, PARAMETER :: N = 100          ! Declaration of array size
INTEGER, DIMENSION(N) :: A, ARight     ! Declare arrays
INTEGER :: I                           ! Loop index

DO I = 1, N                            ! Initialise A
   A(I) = I
END DO
ARight = CSHIFT(A, SHIFT=-1, DIM=1)    ! Circular shift of A:
                                       ! ARight(I) = A(I-1), ARight(1) = A(N)
A = A + ARight                         ! Add A and ARight;
                                       ! result stored back in A

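After these statements, A(1) = 1 + 100 = 101 and A(I) = 2I - 1 for I = 2,...,N,
since the shifted copy ARight holds the original values of A.

By way of contrast, the following is a minimal sketch of how the same computation
might be organised in the message-passing model. It is not taken from the text: it
assumes MPI, with exactly N processes and one array element stored locally per
process, purely for illustration.

PROGRAM shift_add
   USE mpi
   IMPLICIT NONE
   INTEGER :: ierr, rank, nprocs, left, right
   INTEGER :: a, aLeft
   INTEGER :: status(MPI_STATUS_SIZE)

   CALL MPI_Init(ierr)
   CALL MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
   CALL MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

   a = rank + 1                        ! local element: a(i) = i with i = rank + 1
   left = MODULO(rank - 1, nprocs)     ! rank of cyclic left neighbour
   right = MODULO(rank + 1, nprocs)    ! rank of cyclic right neighbour

   ! Send the local value to the right neighbour and receive the
   ! left neighbour's value in one combined send/receive
   CALL MPI_Sendrecv(a, 1, MPI_INTEGER, right, 0, &
                     aLeft, 1, MPI_INTEGER, left, 0, &
                     MPI_COMM_WORLD, status, ierr)

   a = a + aLeft                       ! same result as A = A + CSHIFT(A, SHIFT=-1)

   CALL MPI_Finalize(ierr)
END PROGRAM shift_add

Note that no process ever sees the global vector: the vector exists only as the
collection of the local variables a, and a neighbour's value has to be
communicated explicitly.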