

problem, or memory problems, or that a so-called startup time penalty known as latency may
slow down the transfer of data. Crucial here is the rate at which messages are transferred.


5.5.3 MPI with simple examples


When we want to parallelize a sequential algorithm, there are at least two aspects we need
to consider, namely



  • Identify the part(s) of a sequential algorithm that can be executed in parallel. This can be
    difficult.

  • Distribute the global work and data among P processors. Stated differently, here you need
    to understand how you can get computers to run in parallel. From a practical point of view
    it means implementing parallel programming tools (a small sketch of such a work split is
    shown after this list).
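
As a rough illustration of the second point, the lines below show one common way of splitting
N iterations of a loop into chunks, one chunk per process, with any remainder handled by the
last process. This small sketch is not part of the programs accompanying the text, and the
variable names (N, numprocs, my_rank) are arbitrary.

// Sketch: dividing N loop iterations among numprocs processes.
// In a real parallel run each process would execute only its own range [my_begin, my_end).
#include <iostream>
using namespace std;

int main ()
{
  int N = 1000;       // total number of iterations, i.e. the global work
  int numprocs = 4;   // assumed number of processes
  for (int my_rank = 0; my_rank < numprocs; my_rank++){
    int chunk    = N/numprocs;       // iterations per process (integer division)
    int my_begin = my_rank*chunk;
    // the last process picks up the remainder when N is not divisible by numprocs
    int my_end   = (my_rank == numprocs-1) ? N : my_begin + chunk;
    cout << "rank " << my_rank << " handles iterations [" << my_begin
         << "," << my_end << ")" << endl;
  }
  return 0;
}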


In this chapter we focus mainly on the last point. MPI is then a tool for writing programs
to run in parallel, without needing to know much (in most cases nothing) about a given ma-
chine’s architecture. MPI programs work on both shared memory and distributed memory
machines. Furthermore, MPI is a very rich and complicated library, but it is not necessary to
use all its features. The basic and most used functions have been optimized for most machine
architectures.
Before we proceed, we need to clarify some concepts, in particular the usage of the words
process and processor. We refer to a process as a logical unit which executes its own code,
in an MIMD style. The processor is a physical device on which one or several processes are
executed. The MPI standard uses the concept of a process consistently throughout its documen-
tation. However, since we only consider situations where one processor is responsible for one
process, we use the two terms interchangeably in the discussion below, hopefully without
creating ambiguities.
The six most important MPI functions are



  • MPI_Init - initiate an MPI computation

  • MPI_Finalize - terminate the MPI computation and clean up

  • MPI_Comm_size - find the number of processes participating in a given MPI computation.

  • MPI_Comm_rank - find the rank of a given process. The rank is a number between 0
    and size-1, where size is the total number of processes.

  • MPI_Send - send a message to a particular process within an MPI computation.

  • MPI_Recv - receive a message from a particular process within an MPI computation.
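
The hello-world example below uses only the first four of these functions. As a small
illustration of MPI_Send and MPI_Recv, the following sketch lets process 1 send a single
integer to process 0. The sketch is not taken from the text; the tag value 100 and the
variable names are arbitrary, and the program only performs the exchange when it is started
with at least two processes.

// Sketch: minimal point-to-point communication with MPI_Send and MPI_Recv
#include <mpi.h>
#include <iostream>
using namespace std;

int main (int nargs, char* args[])
{
  int numprocs, my_rank;
  MPI_Init (&nargs, &args);
  MPI_Comm_size (MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank (MPI_COMM_WORLD, &my_rank);
  int message = 0;
  if (numprocs >= 2) {
    if (my_rank == 1) {
      message = 42;   // arbitrary illustrative value
      // send one integer to process 0, with message tag 100
      MPI_Send (&message, 1, MPI_INT, 0, 100, MPI_COMM_WORLD);
    }
    else if (my_rank == 0) {
      MPI_Status status;
      // receive one integer from process 1, matching tag 100
      MPI_Recv (&message, 1, MPI_INT, 1, 100, MPI_COMM_WORLD, &status);
      cout << "Process 0 received the value " << message << " from process 1" << endl;
    }
  }
  MPI_Finalize ();
  return 0;
}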


The first MPI C++ program is a rewriting of our ’hello world’ program (without the com-
putation of the sine function) from chapter 2. We let every process write "Hello world" on the
standard output.


http://folk.uio.no/mhjensen/compphys/programs/chapter05/program2.cpp
// First C++ example of MPI Hello world
#include <mpi.h>
#include <iostream>
using namespace std;

int main (int nargs, char* args[])
{
  int numprocs, my_rank;
  // MPI initializations
  MPI_Init (&nargs, &args);
  MPI_Comm_size (MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank (MPI_COMM_WORLD, &my_rank);
  // every process writes its greeting to standard output
  cout << "Hello world, I have rank " << my_rank << " out of " << numprocs << endl;
  // End MPI
  MPI_Finalize ();
  return 0;
}
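
How the program is compiled and launched depends on the local MPI installation. With a
typical installation, such as Open MPI or MPICH, something along the following lines should
work; the number of processes is specified when the program is started, and the name of the
executable is of course arbitrary.

mpic++ -O2 -o program2.x program2.cpp
mpiexec -n 4 ./program2.x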
