Process Creation And Execution

<< BasicsOfMPI | Table of Contents | Basic Message Passing >>
An MPI parallel program is executed using the mpirun command:

 %> mpirun -np 4 bigsum
In this example, 4 processes will be created, each executing the program bigsum.

Initialization

When an MPI program is executed, but before any MPI functions are used, the MPI system must be initialized with MPI_Init(). Likewise, MPI_Finalize() must be called to terminate the MPI system before the program exits.

(:source lang=cpp:)
// basic1.cc
#include <mpi.h>

int main( int argc, char *argv[] )
{
   /* Initialize MPI. */
   MPI_Init( &argc, &argv );

   /* Parallel solution goes here. */

   /* Terminate MPI. */
   MPI_Finalize();
}

All processes are assigned to the world communications group, identified by the constant MPI_COMM_WORLD. When a process is created, it is assigned a unique rank in the range 0 to p - 1, where p is the total number of processes. In many applications, the rank of a process and the total number of processes are necessary for communication. This information can be obtained using the MPI_Comm_size() and MPI_Comm_rank() functions.

(:source lang=cpp:)
/* basic2.cc */
#include <mpi.h>

int myPID;
int numProcs;

int main( int argc, char *argv[] )
{
   /* Initialize MPI. */
   MPI_Init( &argc, &argv );

   /* Get processor information. */
   MPI_Comm_size( MPI_COMM_WORLD, &numProcs );
   MPI_Comm_rank( MPI_COMM_WORLD, &myPID );
   :
   :
   /* Terminate MPI. */
   MPI_Finalize();
}

Note that each process runs its own copy of the program, and thus each process has its own copy of the variables. The individual processes will each be assigned a different process id by MPI, which allows us to distinguish between the various processes.

The SPMD Model

When using the SPMD computational model, the code is typically organized into master-slave or root-compute components. One of the processes acts as the root process to coordinate the actions of the compute processes. Though one process is called the root, that process may also act as a computational node. The process id is used to select the root node, as shown below.

(:source lang=cpp:)
// basic3.cc
#include <mpi.h>

int myPID;
int numProcs;

int main( int argc, char *argv[] )
{
   /* Initialize MPI. */
   MPI_Init( &argc, &argv );

   /* Get processor information. */
   MPI_Comm_size( MPI_COMM_WORLD, &numProcs );
   MPI_Comm_rank( MPI_COMM_WORLD, &myPID );

   /* Execute certain part of the program based on PID. */
   if( myPID == 0 )
      rootNode();
   else
      computeNode();

   /* Terminate MPI. */
   MPI_Finalize();
}

With this structure, one process will execute the rootNode() function while the remaining processes execute computeNode().

Barriers

Sometimes, it is necessary to make sure all processes have reached a certain point in execution before continuing. In MPI, we can set a barrier to prevent any process within a given communications group from continuing until all processes in that group have reached the barrier.

(:source lang=cpp:)
// basic4.cc
#include <mpi.h>

int myPID;
int numProcs;

int main( int argc, char *argv[] )
{
   /* Initialize MPI. */
   MPI_Init( &argc, &argv );

   /* Get processor information. */
   MPI_Comm_size( MPI_COMM_WORLD, &numProcs );
   MPI_Comm_rank( MPI_COMM_WORLD, &myPID );

   /* All processes must stop until all have reached this point. */
   MPI_Barrier( MPI_COMM_WORLD );

   /* Execute certain part of the program based on PID. */
   if( myPID == 0 )
      rootNode();
   else
      computeNode();

   /* Terminate MPI. */
   MPI_Finalize();
}