Clusters of SMP machines are a growing trend in high performance
computing. With LAM 7.0, many common collective operations are
optimized to take advantage of the higher communication speed between
processes on the same machine. When using the SMP-aware collectives,
performance increases can be seen with little or no changes in user
applications. Be sure to read the LAM User's Guide for important
information on exploiting the full potential of the SMP-aware
collectives.
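As a sketch of how the SMP-aware collectives might be selected explicitly (the `-ssi` flag is LAM's run-time module-selection mechanism; the module name `smp` and the application name are assumptions -- verify the module names shipped with your installation using `laminfo`):

```shell
# Boot the LAM run-time environment on the hosts in the hostfile
lamboot hostfile

# Run an MPI application on all booted CPUs ("C"), explicitly
# requesting the SMP-aware collective module; note that no changes
# to the application itself are required
mpirun C -ssi coll smp ./my_mpi_app

# Shut down the run-time environment when finished
lamhalt
```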
Integration with PBS
PBS (either OpenPBS or PBS Pro) provides scheduling
services for many of the high performance clusters in service today.
By using the PBS-specific boot mechanisms, LAM is able to provide
process accounting and job cleanup to MPI applications. As an added
bonus to MPI users, lamboot execution time is drastically reduced
when compared to rsh/ssh-based booting.
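Inside a PBS job script, booting via the PBS-specific boot mechanism might look like the following sketch (the module name `tm` refers to LAM's boot module built on PBS's Task Management interface; the node count and application name are placeholders):

```shell
#PBS -l nodes=4

# Boot LAM across the nodes PBS allocated to this job; the tm boot
# module discovers the allocated nodes itself, so no hostfile is needed
lamboot -ssi boot tm

# Processes started this way are visible to PBS for accounting and
# are cleaned up when the job ends
mpirun C ./my_mpi_app

lamhalt
```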
Integration with BProc
BProc (the Beowulf Distributed Process Space) provides a single
process space for an entire cluster.
It also provides process-startup mechanisms that are not otherwise
available on the compute nodes of a cluster. LAM supports booting
under the BProc environment even when LAM is not installed on the
compute nodes -- LAM will automatically migrate the required support
out to the compute nodes. MPI applications themselves must still be
available on all compute nodes (although mpirun can eliminate this
requirement).
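A minimal sketch of booting under BProc, assuming the boot module is named `bproc` and that the hostfile lists the BProc node identifiers (both assumptions -- check `laminfo` and the User's Guide for your installation):

```shell
# On the BProc head node: boot LAM onto the compute nodes. LAM copies
# its own support daemons out to the nodes, so LAM itself need not be
# installed there.
lamboot -ssi boot bproc hostfile

# The MPI application binary must still exist on every compute node
mpirun C ./my_mpi_app
```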
Integration with Globus
LAM 7.0 includes beta support for execution in the Globus Grid
environment. Be sure to
read the release notes in the User's Guide for important restrictions
on your Globus environment.
Extensible Component Architecture
LAM 7.0 is the first LAM release to include the System Services
Interface (SSI), providing an extensible component architecture for
LAM/MPI. Currently, "drop-in" modules are supported for booting the
LAM run-time environment, MPI collectives, Checkpoint/Restart, and MPI
transport (RPI). Selection of a component is a run-time decision,
allowing for user selection of the modules that provide the best
performance for a specific application.
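Module selection is typically expressed either on the mpirun command line or through environment variables. A sketch, assuming LAM 7.0's `-ssi key value` convention and `LAM_MPI_SSI_` environment-variable prefix (the module name `usysv` and application name are illustrative -- consult the User's Guide for the authoritative parameter list):

```shell
# Select the shared-memory/sysv transport (RPI) on the command line
mpirun C -ssi rpi usysv ./my_mpi_app

# Equivalent selection via an environment variable
export LAM_MPI_SSI_rpi=usysv
mpirun C ./my_mpi_app

# List the SSI modules compiled into this LAM installation
laminfo
```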