NAMD Jobs under MOAB / Torque


Basic Instructions

NAMD is a scalable molecular dynamics program developed and actively maintained by the Theoretical and Computational Biophysics Group at the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign (for documentation and tutorials, see NAMD).
The version installed on the HPC system (NAMD 2.9) uses the system Open MPI libraries and is therefore well integrated with the MOAB/Torque resource manager. This means that the mpirun program used to launch the software gets the number of processes directly from the scheduler.

NOTE: the program is available as a module, so you first have to load the corresponding module. See the modules page.
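
For example, you can check which NAMD installations are available and load one as follows (a quick sketch; the exact module names listed may differ on your system):

module avail namd      # list the NAMD installations available as modules
module load namd2      # load the NAMD module used in this guide
module list            # verify that the module is now loaded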

In the following we assume that you know how to build a working NAMD configuration file, and that it is called MySystem.conf.
A simple job script for running the program is the following.

#!/bin/sh
# embedded options to qsub - start with #PBS
# -- job name --
#PBS -N MyNamd
# -- email me at the beginning (b) and end (e) of the execution --
#PBS -m be
# -- My email address --
# please uncomment the following line and put in your e-mail address,
# if you want to receive e-mail notifications on a non-default address
##PBS -M your_email_address
# -- estimated wall clock time (execution time): hh:mm:ss --
#PBS -l walltime=48:00:00
# -- parallel environment requests --
#PBS -l nodes=1:ppn=4
# -- end of PBS options --

# -- change to working directory
if test X$PBS_ENVIRONMENT = XPBS_BATCH; then cd $PBS_O_WORKDIR; fi
# -- load namd module --
module load namd2

# -- program invocation here --
#

mpirun namd2 MySystem.conf > MySystem.log

Name the script as you like, for example MySystem.sh, put it in your working directory, and then submit it with

qsub MySystem.sh
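
You can then follow the state of the job with the standard Torque/MOAB commands (a short sketch; <jobid> is the identifier printed by qsub, and the exact tools available may differ on your cluster):

qstat -u $USER     # list your jobs and their state (Q = queued, R = running)
checkjob <jobid>   # MOAB: show detailed information about one job
qdel <jobid>       # remove a job from the queue, if needed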

Two lines are important:

module load namd2

loads the module containing the NAMD executable, which is called namd2.

mpirun namd2 MySystem.conf > MySystem.log

This is the actual program invocation.
The program namd2 is run with MySystem.conf as input, and the output messages are saved to MySystem.log. All the other NAMD output files are written according to the specifications in the NAMD configuration file MySystem.conf.
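
While the job is running you can inspect the log file directly from the working directory, for example (assuming a shared filesystem; the ENERGY: lines are the standard NAMD energy output):

tail -f MySystem.log            # follow the NAMD output as it is written
grep "ENERGY:" MySystem.log     # extract the energy summary lines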

mpirun gets the information about the number of processes directly from the scheduler, as specified in the line

#PBS -l nodes=1:ppn=4

This requests 4 cores on a single node. You can also make more elaborate requests, like

#PBS -l nodes=2:ppn=4

This asks for 8 cores, grouped in sets of four. Depending on the resources available at submission time, the calculation could end up using two nodes, or all 8 cores on a single node.
Please DO NOT specify the number of processes explicitly as an mpirun argument.
If you need to pass extra mpirun arguments, ask the HPC support.
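
If you want to verify what the scheduler actually assigned, you can inspect the standard Torque variables from inside the job script (a small illustrative sketch):

echo "Total cores assigned: $PBS_NP"   # total number of processes mpirun will start
uniq -c $PBS_NODEFILE                  # number of cores assigned on each node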

Compatibility with older syntax

On machines where the inter-node connectivity is based on plain Ethernet, NAMD uses the Charm++ communication layer and the program charmrun to launch namd2.
For backward compatibility, the charmrun program is also available in the MPI version, but in this case it is just a wrapper script that translates the charmrun invocation into a correct mpirun call.
However, it requires the number of processes to be specified explicitly, and this value overrides the scheduler. If launched without a process count, it defaults to 1.

So, the only correct way to use it is to invoke the program as

charmrun namd2 +p$PBS_NP MySystem.conf > MySystem.log