Parallel Computing Primer


Basic Message Passing

When using the SPMD computational model, each process has its own copy of all program variables in its own memory. If the processes are to share data, that data must be transmitted among them as messages. MPI provides a number of routines for this purpose; in this section, we explore the most basic of them.

Your First MPI Program

Consider the problem of computing the sum of n real values spaced evenly between 1.0 and 1,000,000.0. A sequential implementation appears below:

(:sourcefile filename=sumseq.cc lang=cpp:)
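
A minimal sketch of such a sequential program, assuming n values spaced evenly from 1.0 to 1,000,000.0 (the variable names here are illustrative and not necessarily those used in sumseq.cc), might look like:

(:source lang=cpp:)
#include <iostream>

int main() {
    const int n = 1000000;                    // number of values to sum (assumed)
    const double lo = 1.0, hi = 1000000.0;    // endpoints of the range
    const double step = (hi - lo) / (n - 1);  // equal spacing between values
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += lo + i * step;                 // the i-th value in the range
    std::cout << "sum = " << sum << std::endl;
    return 0;
}
(:sourceend:)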

If n is large, this can be time consuming. To improve the speed, we can create a parallel version in which each process computes the sum over a subrange of the values. The subtotals computed by the processes can then be added together to produce the final sum. This parallel solution requires passing messages between the processes.
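
The decomposition itself can be sketched without any MPI calls. In the sketch below, pid and nprocs stand in for the rank and process count that MPI_Comm_rank and MPI_Comm_size would supply, and the helper subtotal is hypothetical:

(:source lang=cpp:)
#include <iostream>

// Hypothetical helper: the subtotal that process pid of nprocs
// processes would contribute for n evenly spaced values.
double subtotal(int pid, int nprocs, int n, double lo, double step) {
    int slice = n / nprocs;    // values per process
    int first = pid * slice;
    int last  = (pid == nprocs - 1) ? n : first + slice;  // last process takes any remainder
    double s = 0.0;
    for (int i = first; i < last; i++)
        s += lo + i * step;
    return s;
}

int main() {
    const int n = 1000000, nprocs = 4;        // assumed problem size and process count
    const double lo = 1.0;
    const double step = (1000000.0 - lo) / (n - 1);
    double total = 0.0;
    for (int pid = 0; pid < nprocs; pid++)    // simulate the nprocs processes in turn
        total += subtotal(pid, nprocs, n, lo, step);
    std::cout << "total = " << total << std::endl;
    return 0;
}
(:sourceend:)

In the real parallel program each process evaluates only its own subtotal, and the subtotals must then be combined by exchanging messages.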

Broadcasting Messages

MPI provides a broadcast mechanism through which a process can transmit data to every process in a given communication group with a single call.

In our example problem, the root node must send the value of n to the compute nodes so that each can compute the increment between values. Each node can then use its pid to determine the subrange for which it is responsible.

The call to the MPI broadcast routine, as used in our example, is shown below:

(:source lang=cpp:)
MPI_Bcast( &n, 1, MPI_INT, 0, MPI_COMM_WORLD );
(:sourceend:)

where the first argument gives the address of the data to be broadcast and the second gives the number of elements. The MPI_INT argument specifies the type of each element, the 0 identifies the root process whose copy of n is sent, and MPI_COMM_WORLD names the communicator whose processes receive the value.
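
Seen in context, a minimal, self-contained sketch of the broadcast step (variable names are assumptions, not drawn from the primer's sources) might look like:

(:source lang=cpp:)
#include <mpi.h>
#include <iostream>

int main(int argc, char *argv[]) {
    int pid, nprocs, n = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &pid);     // this process's id
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);  // total number of processes

    if (pid == 0)
        n = 1000000;                         // only the root knows n initially

    // After the call, every process's copy of n holds the root's value.
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    std::cout << "process " << pid << " of " << nprocs
              << " received n = " << n << std::endl;

    MPI_Finalize();
    return 0;
}
(:sourceend:)

Note that MPI_Bcast is called by every process, not just the root; the root argument (0 here) tells each process whether its role in the broadcast is to send or to receive.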

