Parallel Processing & Distributed Systems - Chapter 2: MPI - Thoai Nam


  1. MPI THOAI NAM
  2. TERMs (1)
     • Blocking: a procedure is blocking if return from it indicates that the user may reuse the resources specified in the call
     • Non-blocking: a procedure is non-blocking if it may return before the operation completes, and before the user is allowed to reuse the resources specified in the call
     • Collective: a procedure is collective if all processes in a process group must invoke it
     • Message envelope: the information used to distinguish messages and selectively receive them (source, destination, tag, communicator)
  3. MPI
     • Environment
     • Point-to-point communication
     • Collective communication
     • Derived data types
     • Group management
  4. Environment
     • MPI_INIT
     • MPI_COMM_SIZE
     • MPI_COMM_RANK
     • MPI_FINALIZE
     • MPI_ABORT
  5. MPI_Finalize
     • Usage
       int MPI_Finalize( void );
     • Description
       – Terminates all MPI processing
       – Make sure this routine is the last MPI call
       – All pending communications involving a process must have completed before the process calls MPI_FINALIZE
  6. MPI_Comm_rank
     • Usage
       int MPI_Comm_rank( MPI_Comm comm, /* in */
                          int* rank );   /* out */
     • Description
       – Returns the rank of the local process in the group associated with the communicator
       – The rank of the calling process is in the range from 0 to size - 1
  7. Simple Program

     #include "mpi.h"

     int main( int argc, char* argv[] )
     {
         int rank;
         int nproc;

         MPI_Init( &argc, &argv );
         MPI_Comm_size( MPI_COMM_WORLD, &nproc );
         MPI_Comm_rank( MPI_COMM_WORLD, &rank );

         /* write your code here */

         MPI_Finalize();
         return 0;
     }
  8. Communication Modes in MPI (1)
     • Standard mode
       – It is up to MPI to decide whether outgoing messages will be buffered
       – Non-local operation
       – Buffered or synchronous?
     • Buffered (asynchronous) mode (see the sketch below)
       – A send operation can be started whether or not a matching receive has been posted
       – It may complete before a matching receive is posted
       – Local operation
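     A minimal sketch of a buffered-mode send, not taken from the slides: the user
     attaches an explicit buffer with MPI_Buffer_attach, so MPI_Bsend can complete
     locally by copying the message into it. Assumes at least two processes.

     #include "mpi.h"
     #include <stdlib.h>

     int main( int argc, char* argv[] )
     {
         int rank, data = 9;

         MPI_Init( &argc, &argv );
         MPI_Comm_rank( MPI_COMM_WORLD, &rank );

         if( rank == 0 ) {
             /* Attach a user buffer; MPI_Bsend copies the message into it,
                so the send completes whether or not the receive is posted */
             int size = sizeof(int) + MPI_BSEND_OVERHEAD;
             void* buf = malloc( size );
             MPI_Buffer_attach( buf, size );
             MPI_Bsend( &data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
             MPI_Buffer_detach( &buf, &size );  /* blocks until the buffered message is delivered */
             free( buf );
         } else if( rank == 1 ) {
             MPI_Recv( &data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
         }

         MPI_Finalize();
         return 0;
     }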
  9. Communication Modes in MPI (3)
     • Ready mode (see the sketch below)
       – A send operation may be started only if the matching receive has already been posted
       – The completion of the send operation does not depend on the status of a matching receive; it merely indicates that the send buffer can be reused
       – EAGER_LIMIT of SP system
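     A minimal sketch of a ready-mode send, not taken from the slides: the receiver
     posts a nonblocking receive first, and a barrier (my choice of synchronization)
     guarantees the receive is posted before MPI_Rsend starts. Assumes at least two
     processes.

     #include "mpi.h"

     int main( int argc, char* argv[] )
     {
         int rank, data = 9;
         MPI_Request request;

         MPI_Init( &argc, &argv );
         MPI_Comm_rank( MPI_COMM_WORLD, &rank );

         if( rank == 1 )
             MPI_Irecv( &data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request );

         MPI_Barrier( MPI_COMM_WORLD );  /* the matching receive is now posted */

         if( rank == 0 )
             MPI_Rsend( &data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
         else if( rank == 1 )
             MPI_Wait( &request, MPI_STATUS_IGNORE );

         MPI_Finalize();
         return 0;
     }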
  10. MPI_Recv
      • Usage
        int MPI_Recv( void* buf,             /* out */
                      int count,             /* in */
                      MPI_Datatype datatype, /* in */
                      int source,            /* in */
                      int tag,               /* in */
                      MPI_Comm comm,         /* in */
                      MPI_Status* status );  /* out */
      • Description
        – Performs a blocking receive operation
        – The message received must be less than or equal to the length of the receive buffer
        – MPI_RECV can receive a message sent by either MPI_SEND or MPI_ISEND
  11. Sample Program for Blocking Operations (1)

      #include "mpi.h"
      #include <stdio.h>

      int main( int argc, char* argv[] )
      {
          int rank, nproc;
          int isbuf, irbuf;

          MPI_Init( &argc, &argv );
          MPI_Comm_size( MPI_COMM_WORLD, &nproc );
          MPI_Comm_rank( MPI_COMM_WORLD, &rank );
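          /* Slide (2) of this sample is not in the extract; the continuation
             below is a hedged sketch of the usual blocking exchange: rank 0
             sends isbuf to rank 1, which receives it into irbuf. */
          if( rank == 0 ) {
              isbuf = 9;
              MPI_Send( &isbuf, 1, MPI_INT, 1, 1, MPI_COMM_WORLD );
          } else if( rank == 1 ) {
              MPI_Recv( &irbuf, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
              printf( "irbuf = %d\n", irbuf );
          }

          MPI_Finalize();
          return 0;
      }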
  12. MPI_Isend
      • Usage
        int MPI_Isend( void* buf,              /* in */
                       int count,              /* in */
                       MPI_Datatype datatype,  /* in */
                       int dest,               /* in */
                       int tag,                /* in */
                       MPI_Comm comm,          /* in */
                       MPI_Request* request ); /* out */
      • Description
        – Performs a nonblocking standard-mode send operation
        – The send buffer may not be modified until the request has been completed by MPI_WAIT or MPI_TEST
        – The message can be received by either MPI_RECV or MPI_IRECV
  13. MPI_Irecv (2)
      • Description
        – Performs a nonblocking receive operation
        – Do not access any part of the receive buffer until the receive is complete
        – The message received must be less than or equal to the length of the receive buffer
        – MPI_IRECV can receive a message sent by either MPI_SEND or MPI_ISEND
  14. Sample Program for Non-Blocking Operations (1)

      #include "mpi.h"
      #include <stdio.h>

      #define TAG 1  /* message tag (left undefined on the slide) */

      int main( int argc, char* argv[] )
      {
          int rank, nproc;
          int isbuf, irbuf, count;
          MPI_Request request;
          MPI_Status status;

          MPI_Init( &argc, &argv );
          MPI_Comm_size( MPI_COMM_WORLD, &nproc );
          MPI_Comm_rank( MPI_COMM_WORLD, &rank );

          if( rank == 0 ) {
              isbuf = 9;
              MPI_Isend( &isbuf, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD, &request );
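          /* The remaining slides of this sample are not in the extract; the
             continuation below is a hedged sketch in which rank 1 posts the
             matching nonblocking receive and waits on it. */
          } else if( rank == 1 ) {
              MPI_Irecv( &irbuf, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &request );
              MPI_Wait( &request, &status );              /* block until the receive completes */
              MPI_Get_count( &status, MPI_INT, &count );  /* number of elements received */
              printf( "irbuf = %d, count = %d\n", irbuf, count );
          }

          MPI_Finalize();
          return 0;
      }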
  15. Collective Operations
      • MPI_BCAST
      • MPI_SCATTER
      • MPI_SCATTERV
      • MPI_GATHER
      • MPI_GATHERV
      • MPI_ALLGATHER
      • MPI_ALLGATHERV
      • MPI_ALLTOALL
  16. MPI_Bcast (2)
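      The slide above carries no extracted text, and no MPI_Bcast code appears
      elsewhere in the extract, so here is a minimal sketch (my example, not from
      the slides) of broadcasting a value from root 0:

      #include "mpi.h"
      #include <stdio.h>

      int main( int argc, char* argv[] )
      {
          int rank, ibuf;

          MPI_Init( &argc, &argv );
          MPI_Comm_rank( MPI_COMM_WORLD, &rank );

          if( rank == 0 ) ibuf = 12345;  /* only the root's value is significant */

          /* Every process calls MPI_Bcast; afterwards ibuf == 12345 on all ranks */
          MPI_Bcast( &ibuf, 1, MPI_INT, 0, MPI_COMM_WORLD );

          printf( "rank %d: ibuf = %d\n", rank, ibuf );
          MPI_Finalize();
          return 0;
      }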
  17. Example of MPI_Scatter (1)

      #include "mpi.h"
      #include <stdio.h>

      int main( int argc, char* argv[] )
      {
          int i;
          int rank, nproc;
          int isend[3], irecv;

          MPI_Init( &argc, &argv );
          MPI_Comm_size( MPI_COMM_WORLD, &nproc );
          MPI_Comm_rank( MPI_COMM_WORLD, &rank );
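          /* Slide (2) is not in the extract; the continuation below is a
             hedged sketch, assuming the program runs with nproc == 3: root 0
             fills isend and MPI_Scatter delivers one element to each rank. */
          if( rank == 0 )
              for( i = 0; i < nproc; i++ )
                  isend[i] = i + 1;  /* isend = {1, 2, 3} */

          MPI_Scatter( isend, 1, MPI_INT, &irecv, 1, MPI_INT, 0, MPI_COMM_WORLD );
          printf( "rank %d: irecv = %d\n", rank, irecv );  /* rank i prints i + 1 */

          MPI_Finalize();
          return 0;
      }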
  18. Example of MPI_Scatter (3)
  19. Example of MPI_Scatterv (1)

      #include "mpi.h"
      #include <stdio.h>

      int main( int argc, char* argv[] )
      {
          int i;
          int rank, nproc;
          int iscnt[3] = {1, 2, 3}, irdisp[3] = {0, 1, 3};
          int isend[6] = {1, 2, 2, 3, 3, 3}, irecv[3];

          MPI_Init( &argc, &argv );
          MPI_Comm_size( MPI_COMM_WORLD, &nproc );
          MPI_Comm_rank( MPI_COMM_WORLD, &rank );
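          /* Slide (2) is not in the extract; the continuation below is a
             hedged sketch, assuming nproc == 3: rank i receives iscnt[i]
             elements taken from isend at displacement irdisp[i], so rank 0
             gets {1}, rank 1 gets {2, 2}, and rank 2 gets {3, 3, 3}. */
          MPI_Scatterv( isend, iscnt, irdisp, MPI_INT,
                        irecv, iscnt[rank], MPI_INT, 0, MPI_COMM_WORLD );

          for( i = 0; i < iscnt[rank]; i++ )
              printf( "rank %d: irecv[%d] = %d\n", rank, i, irecv[i] );

          MPI_Finalize();
          return 0;
      }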
  20. Example of MPI_Gather (1)

      #include "mpi.h"
      #include <stdio.h>

      int main( int argc, char* argv[] )
      {
          int i;
          int rank, nproc;
          int isend, irecv[3];

          MPI_Init( &argc, &argv );
          MPI_Comm_size( MPI_COMM_WORLD, &nproc );
          MPI_Comm_rank( MPI_COMM_WORLD, &rank );
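          /* Slide (2) is not in the extract; the continuation below is a
             hedged sketch, assuming nproc == 3: each rank contributes one
             value and root 0 gathers them into irecv. */
          isend = rank + 1;
          MPI_Gather( &isend, 1, MPI_INT, irecv, 1, MPI_INT, 0, MPI_COMM_WORLD );

          if( rank == 0 )
              for( i = 0; i < nproc; i++ )
                  printf( "irecv[%d] = %d\n", i, irecv[i] );  /* prints 1, 2, 3 */

          MPI_Finalize();
          return 0;
      }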
  21. MPI_Gather
  22. Example of MPI_Gatherv (1)

      #include "mpi.h"
      #include <stdio.h>

      int main( int argc, char* argv[] )
      {
          int i;
          int rank, nproc;
          int isend[3], irecv[6];
          int ircnt[3] = {1, 2, 3}, idisp[3] = {0, 1, 3};

          MPI_Init( &argc, &argv );
          MPI_Comm_size( MPI_COMM_WORLD, &nproc );
          MPI_Comm_rank( MPI_COMM_WORLD, &rank );
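          /* Slide (2) is not in the extract; the continuation below is a
             hedged sketch, assuming nproc == 3: rank i sends i + 1 copies of
             i + 1, and root 0 places them at displacement idisp[i] in irecv. */
          for( i = 0; i < rank + 1; i++ )
              isend[i] = rank + 1;

          MPI_Gatherv( isend, ircnt[rank], MPI_INT,
                       irecv, ircnt, idisp, MPI_INT, 0, MPI_COMM_WORLD );

          if( rank == 0 )
              for( i = 0; i < 6; i++ )
                  printf( "irecv[%d] = %d\n", i, irecv[i] );  /* 1 2 2 3 3 3 */

          MPI_Finalize();
          return 0;
      }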
  23. MPI_Reduce (2)
      • Description
        – Applies a reduction operation to the vector sendbuf over the set of processes specified by the communicator and places the result in recvbuf on root
        – Both the input and output buffers have the same number of elements with the same type
        – Users may define their own operations or use the predefined operations provided by MPI
      • Predefined operations
        – MPI_SUM, MPI_PROD
        – MPI_MAX, MPI_MIN
        – MPI_MAXLOC, MPI_MINLOC
        – MPI_LAND, MPI_LOR, MPI_LXOR
        – MPI_BAND, MPI_BOR, MPI_BXOR
  24. MPI_Reduce
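      No MPI_Reduce code appears in the extract; a minimal sketch (my example,
      not from the slides) of an MPI_SUM reduction onto root 0:

      #include "mpi.h"
      #include <stdio.h>

      int main( int argc, char* argv[] )
      {
          int rank, nproc, val, sum;

          MPI_Init( &argc, &argv );
          MPI_Comm_size( MPI_COMM_WORLD, &nproc );
          MPI_Comm_rank( MPI_COMM_WORLD, &rank );

          /* Each rank contributes rank + 1; root 0 receives 1 + 2 + ... + nproc */
          val = rank + 1;
          MPI_Reduce( &val, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD );

          if( rank == 0 )
              printf( "sum = %d\n", sum );  /* nproc * (nproc + 1) / 2 */

          MPI_Finalize();
          return 0;
      }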
  25. MPI_Scan
      • Usage
        int MPI_Scan( void* sendbuf,         /* in */
                      void* recvbuf,         /* out */
                      int count,             /* in */
                      MPI_Datatype datatype, /* in */
                      MPI_Op op,             /* in */
                      MPI_Comm comm );       /* in */
      • Description
        – Performs a parallel prefix reduction on data distributed across a group
        – The operation returns, in the receive buffer of the process with rank i, the reduction of the values in the send buffers of processes with ranks 0, 1, ..., i
  26. MPI_Scan
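      No MPI_Scan code appears in the extract; a minimal sketch (my example,
      not from the slides) of an inclusive prefix sum:

      #include "mpi.h"
      #include <stdio.h>

      int main( int argc, char* argv[] )
      {
          int rank, val, prefix;

          MPI_Init( &argc, &argv );
          MPI_Comm_rank( MPI_COMM_WORLD, &rank );

          /* Rank i receives 1 + 2 + ... + (i + 1), the inclusive prefix sum */
          val = rank + 1;
          MPI_Scan( &val, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD );

          printf( "rank %d: prefix = %d\n", rank, prefix );

          MPI_Finalize();
          return 0;
      }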