LinuxCommandLibrary

mpiexec

Run parallel programs using MPI

TLDR

View documentation for the original command

$ tldr mpirun

SYNOPSIS

mpiexec [options] <program> [program_args]

Common invocation:
mpiexec -np <num_processes> [-hostfile <filename>] [other_options] <executable> [arguments]
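
For example, a concrete instance of this pattern (the executable ./my_mpi_app, its argument input.dat, and the hostfile hosts.txt are placeholders):
$ mpiexec -np 8 -hostfile hosts.txt ./my_mpi_app input.dat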

PARAMETERS

-np <num_processes>
    Specifies the number of processes (MPI ranks) to launch for the parallel application.
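    For example, to launch four ranks of a hypothetical executable ./my_mpi_app on the local machine:
    $ mpiexec -np 4 ./my_mpi_app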

-hostfile <filename>
    Provides a file listing the hostnames and optionally the number of available slots on each host, used for process distribution.
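    For example, assuming a hostfile named hosts.txt in the format described under HOSTFILE FORMAT below:
    $ mpiexec -np 16 -hostfile hosts.txt ./my_mpi_app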

-wdir <directory>
    Sets the working directory for all launched processes. If not specified, processes often inherit the working directory of mpiexec.
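    For example, assuming the directory /scratch/run01 exists on every participating node:
    $ mpiexec -np 4 -wdir /scratch/run01 ./my_mpi_app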

-x <VAR[=VALUE]>
    Exports an environment variable to the execution environment of the launched processes. Useful for passing configuration or library paths.
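    For example, to forward the current value of LD_LIBRARY_PATH and set OMP_NUM_THREADS explicitly (the variable choices are illustrative):
    $ mpiexec -np 4 -x LD_LIBRARY_PATH -x OMP_NUM_THREADS=2 ./my_mpi_app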

-pernode
    Launches one process on each node specified in the hostfile or discovered by the resource manager, regardless of available slots per node.
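    For example, to place exactly one rank on each host listed in a hostfile:
    $ mpiexec -pernode -hostfile hosts.txt ./my_mpi_app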

-oversubscribe
    Allows the MPI runtime to launch more processes than the number of available CPU cores or allocated slots, potentially leading to performance degradation due to resource contention.
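    For example, to run eight ranks on a workstation with fewer than eight cores, accepting the resulting contention (useful mainly for testing):
    $ mpiexec -np 8 -oversubscribe ./my_mpi_app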

DESCRIPTION

The mpiexec command is a fundamental utility for launching parallel applications that utilize the Message Passing Interface (MPI) standard. It acts as a wrapper that orchestrates the execution of an MPI program across multiple processes, which can reside on a single machine or be distributed across a cluster of computing nodes.

When invoked, mpiexec typically interacts with the underlying MPI runtime environment (such as Open MPI, MPICH, or Intel MPI) to:
1. Allocate resources: Determine which nodes and how many cores/slots are available.
2. Establish communication: Set up the necessary infrastructure for inter-process communication.
3. Distribute processes: Launch instances of the specified MPI program on the allocated resources.
4. Manage execution: Oversee the parallel job, collecting output and handling termination.

It provides a consistent interface for users to run their parallel codes without needing to manually manage individual processes or network connections, making it an indispensable tool in high-performance computing (HPC) environments.
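
A typical single-node workflow, assuming an MPI source file hello.c and the mpicc compiler wrapper, might look like:
$ mpicc hello.c -o hello
$ mpiexec -np 4 ./hello
All four processes run the same binary; the MPI runtime assigns each one a unique rank that the program can query to divide the work.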

CAVEATS

The exact behavior and available options of mpiexec can vary significantly between different MPI implementations (e.g., Open MPI, MPICH, Intel MPI). Users should consult the specific documentation for their installed MPI library.

Successful multi-node execution often relies on proper SSH configuration (passwordless login) between nodes or integration with a workload manager like Slurm, PBS/Torque, or LSF.
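
As an illustration only, a minimal Slurm batch script might let the workload manager provide the allocation (this assumes the MPI library was built with Slurm integration):
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks=8
mpiexec -np $SLURM_NTASKS ./my_mpi_app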

Performance can be heavily influenced by network latency, resource allocation, and the efficiency of the underlying MPI library and system configuration.

HOSTFILE FORMAT

A hostfile typically lists hostnames, one per line, each optionally followed by slots=<n> or max-slots=<n> to indicate the number of available processing units on that host. For example:
node01 slots=8
node02 slots=16

This allows mpiexec to distribute processes effectively.
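
For example, if the two lines above are saved as myhostfile, the following starts 24 processes spread across node01 and node02 according to their slot counts (the exact placement policy depends on the MPI implementation and any mapping options):
$ mpiexec -np 24 -hostfile myhostfile ./my_mpi_app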

ENVIRONMENT VARIABLES

Many MPI implementations allow fine-tuning behavior through environment variables. For instance, variables prefixed with OMPI_ for Open MPI or MPICH_ for MPICH can control communication parameters, debugging output, or resource allocation. Options such as -x (Open MPI) or -genv (MPICH and Intel MPI) are the usual way to pass such variables on to the launched MPI processes.
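
For example, with Open MPI a tuning parameter can be supplied through the environment of mpiexec itself (treat the specific variable as an illustrative sketch; names differ between implementations and releases):
$ OMPI_MCA_btl_base_verbose=100 mpiexec -np 2 ./my_mpi_app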

HISTORY

The concept of a unified parallel job launcher emerged with the early development of distributed memory parallel computing. While the MPI Standard itself (first published in 1994) defines the application programming interface, it does not strictly dictate how MPI programs are launched.

Consequently, various MPI implementations developed their own launchers. The MPI-2 standard later recommended mpiexec as a portable (though not mandatory) startup command, and the name became widely adopted, often used interchangeably with mpirun. Its widespread use solidified its role as the de facto standard for launching MPI applications, providing a consistent user experience across different MPI libraries and underlying cluster infrastructures. Its evolution has mirrored advancements in cluster management and high-performance interconnects.

SEE ALSO

mpirun(1), ssh(1), srun(1), mpicc(1), mpif90(1), hostfile(5)
