MPI vs. PVM

Top 10 reasons to prefer MPI over PVM

Plain list

  1. MPI has more than one freely available, quality implementation.
  2. MPI defines a 3rd party profiling mechanism.
  3. MPI has full asynchronous communication.
  4. MPI groups are solid, efficient, and deterministic.
  5. MPI efficiently manages message buffers.
  6. MPI synchronization protects the user from 3rd party software.
  7. MPI can efficiently program MPPs and clusters.
  8. MPI is totally portable.
  9. MPI is formally specified.
  10. MPI is a standard.

Annotated list

  1. MPI has more than one freely available, quality implementation.

    There are at least two: LAM and MPICH. The choice of development tools is not coupled to the programming interface.

  2. MPI defines a 3rd party profiling mechanism.

    A tool builder can extract profile information from MPI applications by supplying the MPI standard profile interface in a separate library, without ever having access to the source code of the main implementation.
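    As a sketch of how this works: every MPI routine is also available under a name-shifted PMPI_ entry point, so a profiling library can define its own MPI_Send that records whatever it likes and then forwards to the real implementation. (The counter and its use here are illustrative, not part of any particular tool.)

    ```c
    /* Hypothetical profiling wrapper, supplied in a separate library.
     * It intercepts MPI_Send and forwards to the name-shifted PMPI_Send,
     * with no access to the MPI implementation's source code. */
    #include <mpi.h>

    static int send_count = 0;   /* per-process tally (illustrative) */

    int MPI_Send(const void *buf, int count, MPI_Datatype type,
                 int dest, int tag, MPI_Comm comm)
    {
        send_count++;                                  /* profiling action   */
        return PMPI_Send(buf, count, type, dest, tag, comm);  /* real send  */
    }
    ```

    Linking the application against this library before the MPI library is enough; the application source is untouched.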

  3. MPI has full asynchronous communication.

    Immediate send and receive operations can fully overlap computation.
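    A minimal sketch of the pattern (the peer rank, buffer sizes, and the do_local_work routine are placeholders, not part of the standard):

    ```c
    #include <mpi.h>

    void do_local_work(void);  /* placeholder for application computation */

    /* Start both transfers, compute while they are in flight, then wait. */
    void exchange_and_compute(double *sendbuf, double *recvbuf, int n,
                              int peer, MPI_Comm comm)
    {
        MPI_Request reqs[2];

        MPI_Isend(sendbuf, n, MPI_DOUBLE, peer, 0, comm, &reqs[0]);
        MPI_Irecv(recvbuf, n, MPI_DOUBLE, peer, 0, comm, &reqs[1]);

        do_local_work();   /* computation overlaps the communication */

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  /* complete both */
    }
    ```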

  4. MPI groups are solid, efficient, and deterministic.

    Group membership is static. There are no race conditions caused by processes independently entering and leaving a group. New group formation is collective and group membership information is distributed, not centralized.
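    For example, deriving a new communicator is a collective call: every process in the parent group participates, and membership is fixed the moment the call returns. A sketch using MPI_Comm_split (the even/odd split is arbitrary):

    ```c
    #include <mpi.h>

    /* Collectively build a communicator containing only the even ranks.
     * All processes of comm call this; there is no window in which a
     * process can independently join or leave the new group. */
    MPI_Comm make_even_comm(MPI_Comm comm)
    {
        int rank;
        MPI_Comm newcomm;

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_split(comm, rank % 2, rank, &newcomm); /* color 0 = even */
        return newcomm;
    }
    ```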

  5. MPI efficiently manages message buffers.

    Messages are sent and received from user data structures, not from staging buffers within the communication library. Buffering may, in some cases, be totally avoided.
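    For instance, a derived datatype lets the library gather even noncontiguous data, such as a matrix column, straight out of user memory with no explicit packing step (the 100x100 matrix here is just an illustration):

    ```c
    #include <mpi.h>

    /* Send one column of a row-major 100x100 matrix directly from the
     * application's array: 100 elements, each a stride of 100 apart. */
    void send_column(double a[100][100], int col, int dest, MPI_Comm comm)
    {
        MPI_Datatype column;

        MPI_Type_vector(100, 1, 100, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);
        MPI_Send(&a[0][col], 1, column, dest, 0, comm);  /* no staging copy */
        MPI_Type_free(&column);
    }
    ```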

  6. MPI synchronization protects the user from 3rd party software.

    All communication within a particular group of processes is marked with an extra synchronization variable, allocated by the system. Independent software products within the same process do not have to worry about allocating message tags.
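    The standard idiom is for a library to duplicate the communicator it is handed; its messages then travel in a private context and can never collide with the application's traffic, whatever tags either side uses. A sketch (the lib_init/lib_finalize names are hypothetical):

    ```c
    #include <mpi.h>

    static MPI_Comm lib_comm = MPI_COMM_NULL;

    /* The library takes a private copy of the user's communicator. */
    void lib_init(MPI_Comm user_comm)
    {
        MPI_Comm_dup(user_comm, &lib_comm);  /* isolated message context */
    }

    void lib_finalize(void)
    {
        MPI_Comm_free(&lib_comm);
    }
    ```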

  7. MPI can efficiently program MPPs and clusters.

    A virtual topology reflecting the communication pattern of the application can be associated with a group of processes. An MPP implementation of MPI could use that information to match processes to processors in a way that optimizes communication paths.
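    A sketch of declaring such a topology: the application describes a 2-D periodic grid and, by passing reorder = 1, permits the implementation to renumber ranks so that grid neighbors land on physically adjacent processors.

    ```c
    #include <mpi.h>

    /* Build a balanced 2-D periodic process grid over comm. */
    MPI_Comm make_grid(MPI_Comm comm)
    {
        int dims[2] = {0, 0}, periods[2] = {1, 1};
        int nprocs;
        MPI_Comm grid;

        MPI_Comm_size(comm, &nprocs);
        MPI_Dims_create(nprocs, 2, dims);          /* balanced factorization */
        MPI_Cart_create(comm, 2, dims, periods,
                        1 /* allow rank reordering */, &grid);
        return grid;
    }
    ```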

  8. MPI is totally portable.

    Recompile and run on any implementation. With virtual topologies and efficient buffer management, for example, an application moving from a cluster to an MPP could even expect good performance.

  9. MPI is formally specified.

    Implementations have to live up to a published document of precise semantics.

  10. MPI is a standard.

    Its features and behaviour were arrived at by consensus in an open forum. It can change only by the same process.