Parallel Processing & Distributed Systems - Chapter 2: MPI - Thoai Nam
TERMs (1)
Blocking
A procedure is blocking if return from the procedure indicates that the user may reuse the resources specified in the call.
Non-blocking
A procedure is non-blocking if it may return before the operation completes, and before the user is allowed to reuse the resources specified in the call.
Collective
A procedure is collective if all processes in a process group must invoke it.
Message envelope
The information used to distinguish messages and selectively receive them (source, destination, tag, communicator).
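A minimal sketch of the blocking/non-blocking distinction, assuming two processes (the tag 0 and the value 42 are arbitrary): after MPI_Send returns, the buffer may be reused at once; after MPI_Isend, it must not be touched until MPI_Wait completes the request.

#include "mpi.h"

int main( int argc, char* argv[] ) {
    int rank, data = 42;
    MPI_Request request;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if( rank == 0 ) {
        /* Blocking: once MPI_Send returns, 'data' may be reused */
        MPI_Send( &data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
        data = 0;                        /* safe */

        /* Non-blocking: MPI_Isend may return before the send completes,
           so 'data' must not be modified until MPI_Wait */
        data = 43;
        MPI_Isend( &data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request );
        MPI_Wait( &request, MPI_STATUS_IGNORE );
        data = 0;                        /* safe only after MPI_Wait */
    } else if( rank == 1 ) {
        MPI_Recv( &data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
        MPI_Recv( &data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
    }

    MPI_Finalize();
    return 0;
}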
MPI
– Environment
– Point-to-point communication
– Collective communication
– Derived data type
– Group management
Environment
– MPI_INIT
– MPI_COMM_SIZE
– MPI_COMM_RANK
– MPI_FINALIZE
– MPI_ABORT
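MPI_ABORT is the only routine in this list not demonstrated later in the chapter; a minimal sketch of its use (the error code 1 and the two-process requirement are arbitrary choices for illustration):

#include "mpi.h"
#include <stdio.h>

int main( int argc, char* argv[] ) {
    int nproc;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &nproc );

    /* Abort every process in the communicator with error code 1
       if the run does not have enough processes */
    if( nproc < 2 ) {
        fprintf( stderr, "need at least 2 processes\n" );
        MPI_Abort( MPI_COMM_WORLD, 1 );
    }

    MPI_Finalize();
    return 0;
}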
MPI_Finalize
Usage
– int MPI_Finalize( void );
Description
– Terminates all MPI processing
– Make sure this routine is the last MPI call
– All pending communications involving a process must have completed before the process calls MPI_FINALIZE
MPI_Comm_rank
Usage
– int MPI_Comm_rank( MPI_Comm comm, /* in */
                     int* rank );   /* out */
Description
– Returns the rank of the local process in the group associated with a communicator
– The rank of the calling process is in the range 0 .. size-1
Simple Program

#include "mpi.h"

int main( int argc, char* argv[] ) {
    int rank;
    int nproc;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &nproc );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    /* write your code here */

    MPI_Finalize();
    return 0;
}
Communication Modes in MPI (1)
Standard mode
– It is up to MPI to decide whether outgoing messages will be buffered
– Non-local operation
– Buffered or synchronous?
Buffered (asynchronous) mode
– A send operation can be started whether or not a matching receive has been posted
– It may complete before a matching receive is posted
– Local operation
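A minimal sketch of buffered mode, assuming a single one-int message (the tag 0 and value 7 are arbitrary); the user buffer must include MPI_BSEND_OVERHEAD:

#include "mpi.h"
#include <stdlib.h>

int main( int argc, char* argv[] ) {
    int rank, data = 7, bufsize;
    void* buf;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if( rank == 0 ) {
        /* Attach a user buffer; MPI_Bsend then completes locally,
           whether or not the matching receive has been posted */
        bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        buf = malloc( bufsize );
        MPI_Buffer_attach( buf, bufsize );

        MPI_Bsend( &data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );

        MPI_Buffer_detach( &buf, &bufsize );  /* blocks until buffered data is sent */
        free( buf );
    } else if( rank == 1 ) {
        MPI_Recv( &data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
    }

    MPI_Finalize();
    return 0;
}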
Communication Modes in MPI (3)
Ready mode
– A send operation may be started only if the matching receive is already posted
– The completion of the send operation does not depend on the status of a matching receive; it merely indicates that the send buffer can be reused
– Cf. the EAGER_LIMIT setting of the IBM SP system
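A minimal sketch of ready mode, assuming the receiver posts MPI_Irecv before the sender calls MPI_Rsend; the barrier is one way to enforce that ordering (tag 0 and value 5 are arbitrary):

#include "mpi.h"

int main( int argc, char* argv[] ) {
    int rank, data = 5;
    MPI_Request request;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if( rank == 1 )   /* post the receive first */
        MPI_Irecv( &data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request );

    /* Barrier guarantees the receive is posted before the ready send starts */
    MPI_Barrier( MPI_COMM_WORLD );

    if( rank == 0 )
        MPI_Rsend( &data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
    else if( rank == 1 )
        MPI_Wait( &request, MPI_STATUS_IGNORE );

    MPI_Finalize();
    return 0;
}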
MPI_Recv
Usage
– int MPI_Recv( void* buf,             /* out */
                int count,             /* in */
                MPI_Datatype datatype, /* in */
                int source,            /* in */
                int tag,               /* in */
                MPI_Comm comm,         /* in */
                MPI_Status* status );  /* out */
Description
– Performs a blocking receive operation
– The message received must be less than or equal to the length of the receive buffer
– MPI_RECV can receive a message sent by either MPI_SEND or MPI_ISEND
Sample Program for Blocking Operations (1)

#include "mpi.h"
#include <stdio.h>

#define TAG 1   /* message tag; the value is arbitrary */

int main( int argc, char* argv[] ) {
    int rank, nproc;
    int isbuf, irbuf;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &nproc );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if( rank == 0 ) {            /* process 0 sends one integer */
        isbuf = 9;
        MPI_Send( &isbuf, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD );
    } else if( rank == 1 ) {     /* process 1 receives it */
        MPI_Recv( &irbuf, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &status );
        printf( "irbuf = %d\n", irbuf );
    }

    MPI_Finalize();
    return 0;
}
MPI_Isend
Usage
– int MPI_Isend( void* buf,              /* in */
                 int count,              /* in */
                 MPI_Datatype datatype,  /* in */
                 int dest,               /* in */
                 int tag,                /* in */
                 MPI_Comm comm,          /* in */
                 MPI_Request* request ); /* out */
Description
– Performs a nonblocking standard mode send operation
– The send buffer may not be modified until the request has been completed by MPI_WAIT or MPI_TEST
– The message can be received by either MPI_RECV or MPI_IRECV
MPI_Irecv
Usage
– int MPI_Irecv( void* buf,              /* out */
                 int count,              /* in */
                 MPI_Datatype datatype,  /* in */
                 int source,             /* in */
                 int tag,                /* in */
                 MPI_Comm comm,          /* in */
                 MPI_Request* request ); /* out */
Description
– Performs a nonblocking receive operation
– Do not access any part of the receive buffer until the receive is complete
– The message received must be less than or equal to the length of the receive buffer
– MPI_IRECV can receive a message sent by either MPI_SEND or MPI_ISEND
Sample Program for Non-Blocking Operations (1)

#include "mpi.h"
#include <stdio.h>

#define TAG 1   /* message tag; the value is arbitrary */

int main( int argc, char* argv[] ) {
    int rank, nproc;
    int isbuf, irbuf, count;
    MPI_Request request;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &nproc );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if( rank == 0 ) {
        isbuf = 9;
        /* MPI_INT is the C datatype; MPI_INTEGER is for Fortran */
        MPI_Isend( &isbuf, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD, &request );
        MPI_Wait( &request, &status );   /* send buffer reusable after this */
    } else if( rank == 1 ) {
        MPI_Irecv( &irbuf, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &request );
        MPI_Wait( &request, &status );   /* receive is complete after this */
        MPI_Get_count( &status, MPI_INT, &count );
        printf( "irbuf = %d, count = %d\n", irbuf, count );
    }

    MPI_Finalize();
    return 0;
}
Collective Operations
– MPI_BCAST
– MPI_SCATTER
– MPI_SCATTERV
– MPI_GATHER
– MPI_GATHERV
– MPI_ALLGATHER
– MPI_ALLGATHERV
– MPI_ALLTOALL
MPI_Bcast
Usage
– int MPI_Bcast( void* buffer,          /* inout */
                 int count,             /* in */
                 MPI_Datatype datatype, /* in */
                 int root,              /* in */
                 MPI_Comm comm );       /* in */
Description
– Broadcasts a message from root to all processes in the group, including itself
– Must be called by all processes in the group with the same values of root and comm
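A minimal sketch of a broadcast, assuming root 0 broadcasts one integer (the value 12345 is arbitrary) to every process:

#include "mpi.h"
#include <stdio.h>

int main( int argc, char* argv[] ) {
    int rank, ibuf;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if( rank == 0 ) ibuf = 12345;   /* only root's value matters */

    /* After the call, ibuf on every process holds root's value */
    MPI_Bcast( &ibuf, 1, MPI_INT, 0, MPI_COMM_WORLD );
    printf( "rank %d: ibuf = %d\n", rank, ibuf );

    MPI_Finalize();
    return 0;
}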
Example of MPI_Scatter (1)

#include "mpi.h"
#include <stdio.h>

int main( int argc, char* argv[] ) {
    int i;
    int rank, nproc;
    int isend[3], irecv;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &nproc );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    /* Run with 3 processes: root 0 fills isend, and each
       process receives one element in irecv */
    if( rank == 0 )
        for( i = 0; i < nproc; i++ )
            isend[i] = i + 1;

    MPI_Scatter( isend, 1, MPI_INT, &irecv, 1, MPI_INT, 0, MPI_COMM_WORLD );
    printf( "rank %d: irecv = %d\n", rank, irecv );

    MPI_Finalize();
    return 0;
}
Example of MPI_Scatterv (1)

#include "mpi.h"
#include <stdio.h>

int main( int argc, char* argv[] ) {
    int i;
    int rank, nproc;
    int iscnt[3] = {1,2,3}, irdisp[3] = {0,1,3};
    int isend[6] = {1,2,2,3,3,3}, irecv[3];
    int ircnt;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &nproc );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    /* Run with 3 processes: process i receives i+1 elements,
       taken from isend at displacement irdisp[i] */
    ircnt = rank + 1;
    MPI_Scatterv( isend, iscnt, irdisp, MPI_INT,
                  irecv, ircnt, MPI_INT, 0, MPI_COMM_WORLD );

    for( i = 0; i < ircnt; i++ )
        printf( "rank %d: irecv[%d] = %d\n", rank, i, irecv[i] );

    MPI_Finalize();
    return 0;
}
Example of MPI_Gather (1)

#include "mpi.h"
#include <stdio.h>

int main( int argc, char* argv[] ) {
    int i;
    int rank, nproc;
    int isend, irecv[3];

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &nproc );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    /* Run with 3 processes: each contributes one element,
       and root 0 collects them in rank order */
    isend = rank + 1;
    MPI_Gather( &isend, 1, MPI_INT, irecv, 1, MPI_INT, 0, MPI_COMM_WORLD );

    if( rank == 0 )
        for( i = 0; i < nproc; i++ )
            printf( "irecv[%d] = %d\n", i, irecv[i] );

    MPI_Finalize();
    return 0;
}
MPI_Gather
Usage
– int MPI_Gather( void* sendbuf,          /* in */
                  int sendcount,          /* in */
                  MPI_Datatype sendtype,  /* in */
                  void* recvbuf,          /* out */
                  int recvcount,          /* in */
                  MPI_Datatype recvtype,  /* in */
                  int root,               /* in */
                  MPI_Comm comm );        /* in */
Description
– Collects individual messages from each process in the group at root, stored in rank order
– recvcount is the number of elements received from each process, not the total
Example of MPI_Gatherv (1)

#include "mpi.h"
#include <stdio.h>

int main( int argc, char* argv[] ) {
    int i;
    int rank, nproc;
    int isend[3], irecv[6];
    int ircnt[3] = {1,2,3}, idisp[3] = {0,1,3};

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &nproc );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    /* Run with 3 processes: process i sends i+1 copies of i+1,
       and root 0 places each contribution at displacement idisp[i] */
    for( i = 0; i < rank + 1; i++ )
        isend[i] = rank + 1;

    MPI_Gatherv( isend, rank + 1, MPI_INT,
                 irecv, ircnt, idisp, MPI_INT, 0, MPI_COMM_WORLD );

    if( rank == 0 )
        for( i = 0; i < 6; i++ )
            printf( "irecv[%d] = %d\n", i, irecv[i] );

    MPI_Finalize();
    return 0;
}
MPI_Reduce
Usage
– int MPI_Reduce( void* sendbuf,          /* in */
                  void* recvbuf,          /* out */
                  int count,              /* in */
                  MPI_Datatype datatype,  /* in */
                  MPI_Op op,              /* in */
                  int root,               /* in */
                  MPI_Comm comm );        /* in */
Description
– Applies a reduction operation to the vector sendbuf over the set of processes specified by the communicator and places the result in recvbuf on root
– Both the input and output buffers have the same number of elements of the same type
– Users may define their own operations or use the predefined operations provided by MPI
Predefined operations
– MPI_SUM, MPI_PROD
– MPI_MAX, MPI_MIN
– MPI_MAXLOC, MPI_MINLOC
– MPI_LAND, MPI_LOR, MPI_LXOR
– MPI_BAND, MPI_BOR, MPI_BXOR
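A minimal sketch of a sum reduction, assuming each process contributes rank+1 (with 3 processes, root receives 1+2+3 = 6):

#include "mpi.h"
#include <stdio.h>

int main( int argc, char* argv[] ) {
    int rank, isend, irecv;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    isend = rank + 1;
    /* Sum the isend values of all processes; the result lands on root 0 */
    MPI_Reduce( &isend, &irecv, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD );

    if( rank == 0 )
        printf( "sum = %d\n", irecv );

    MPI_Finalize();
    return 0;
}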
MPI_Scan
Usage
– int MPI_Scan( void* sendbuf,          /* in */
                void* recvbuf,          /* out */
                int count,              /* in */
                MPI_Datatype datatype,  /* in */
                MPI_Op op,              /* in */
                MPI_Comm comm );        /* in */
Description
– Performs a parallel prefix reduction on data distributed across a group
– The operation returns, in the receive buffer of the process with rank i, the reduction of the values in the send buffers of processes with ranks 0..i (inclusive)
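A minimal sketch of a prefix sum, assuming each process contributes rank+1; process i then receives 1+2+...+(i+1):

#include "mpi.h"
#include <stdio.h>

int main( int argc, char* argv[] ) {
    int rank, isend, irecv;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    isend = rank + 1;
    /* Prefix reduction: process i gets the sum over ranks 0..i */
    MPI_Scan( &isend, &irecv, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD );

    printf( "rank %d: prefix sum = %d\n", rank, irecv );

    MPI_Finalize();
    return 0;
}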