MPI Jobs under MOAB / Torque

We assume here that you are familiar with MPI programming and that you have a working MPI application.

Choose the right MPI version: MPI libraries and compilers must match – thus we provide different MPI versions for the different compilers. You can check the available versions with the module command:

module avail mpi

The most common ones are mpi/gcc (OpenMPI for GCC), mpi/studio (OpenMPI for Sun/Oracle Studio), and mpi/intel (OpenMPI for the Intel compilers). The default version is mpi/gcc.
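
For example, to build against the Intel variant, load the corresponding module and compile with the OpenMPI compiler wrapper it provides. This is a minimal sketch – the exact module name depends on your installation, and my_mpi_prog.c stands in for your own source file:

module load mpi/intel
mpicc -O2 -o my_mpi_prog my_mpi_prog.c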

Preparing your job script: The installed OpenMPI version(s) are tightly integrated with the resource manager MOAB/Torque. Thus, mpirun gets most of its information, e.g. the number of processes to start, directly from the scheduler.

Let us say you have an MPI program, my_mpi_prog, that you want to execute on 8 cores on a single node:

#!/bin/sh
#PBS -N MPIjob
#PBS -l nodes=1:ppn=8
# -- estimated wall clock time (execution time): hh:mm:ss --
#PBS -l walltime=24:00:00

# change to submission directory
if test X$PBS_ENVIRONMENT = XPBS_BATCH; then cd $PBS_O_WORKDIR; fi

# load the MPI module
module load mpi

# no -np needed: mpirun takes the process count from Torque
mpirun ./my_mpi_prog
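
Because of this integration, mpirun launches one process per reserved core by default. Should you ever want fewer processes than reserved cores (e.g. when each process needs more memory), you can still pass -np explicitly – a sketch:

mpirun -np 4 ./my_mpi_prog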

If you want to run the same program on 16 cores, on 2 nodes, change the resource request to:

-l nodes=2:ppn=8

If you specify:

-l nodes=2:ppn=4

the scheduler is free either to place all 8 processes on a single node, or to pick 2 nodes and use 4 cores on each.
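
To verify where the scheduler actually placed your job, you can inspect the node file that Torque hands to every job via the standard $PBS_NODEFILE variable – a quick check you can add to the job script:

# list the assigned hosts and the number of cores assigned on each
sort $PBS_NODEFILE | uniq -c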