NAMD Jobs under LSF


Basic Instructions

NAMD is a scalable molecular dynamics package developed and actively maintained by the Theoretical and Computational Biophysics Group at the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign (for documentation and tutorials: NAMD).
The version installed on the HPC cluster (NAMD 2.9) makes use of the system OpenMPI libraries, and it is therefore well integrated with the LSF resource manager. This means that the mpirun program invoked to execute the software gets the information about the number of processes directly from the scheduler.

NOTE: the program is available as a module, so you first have to load it. See the modules page.

In the following we assume that you know how to build a working NAMD input file, and that it is called MySystem.conf.
A simple job script for running the program is the following.

#!/bin/sh
# embedded options to bsub - start with #BSUB
### -- set the job Name --
#BSUB -J MyNamd
### -- specify queue -- 
#BSUB -q hpc 
### -- ask for number of cores (default: 1) -- 
#BSUB -n 8 
### -- set walltime limit: hh:mm -- 
#BSUB -W 10:00 
### -- specify that we need 2GB of memory per core/slot -- 
#BSUB -R "rusage[mem=2GB]"
### -- set the email address -- 
# please uncomment the following line and put in your e-mail address,
# if you want to receive e-mail notifications on a non-default address
##BSUB -u your_email_address
### -- send notification at start -- 
#BSUB -B 
### -- send notification at completion-- 
#BSUB -N 
### -- Specify the output and error file. %J is the job-id -- 
### -- -o and -e mean append, -oo and -eo mean overwrite -- 
#BSUB -o Output_%J.out 
#BSUB -e Error_%J.err 
# here follow the commands you want to execute 
# 
# load the necessary modules 
module load mpi/gcc-openmpi-1.6.5-lsfib
module load namd2

# -- program invocation here -- 
mpirun namd2 MySystem.conf > MySystem.log

Name it as you like, for example MySystem.sh, put it in your working directory, and then submit it with

bsub < MySystem.sh
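bsub prints the id assigned to the submitted job. You can then monitor or cancel the job with the standard LSF commands, for example (the job id below is just a placeholder):

bjobs                # list your pending and running jobs
bjobs -l 123456      # detailed information on a specific job
bkill 123456         # cancel the job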

These lines are important:

module load mpi/gcc-openmpi-1.6.5-lsfib
module load namd2

They load the module containing the NAMD executable, which is called namd2, and the necessary MPI module.
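If you are not sure about the exact module names on your system, you can list the available modules and the ones currently loaded (names may differ between installations):

module avail
module list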

mpirun namd2 MySystem.conf > MySystem.log

This is the real program invocation.
The program namd2 is run with MySystem.conf as input, and the output messages are saved to MySystem.log. All the other NAMD output files are written according to the specifications you put in the NAMD configuration file MySystem.conf.
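As an illustration, the output-related part of a NAMD configuration file could look like the sketch below; the keywords are standard NAMD options, while the file name prefix and the frequencies are placeholders you should adapt to your own setup:

# output control (inside MySystem.conf)
outputName       MySystem_out   ;# prefix for the NAMD output files
dcdfreq          1000           ;# write a trajectory frame every 1000 steps
restartfreq      1000           ;# write restart files every 1000 steps
outputEnergies   100            ;# print energies to the log every 100 steps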

mpirun gets the information about the number of processes directly from the scheduler, as specified in the line

#BSUB -n 8

This will request 8 cores on a single node. You can also make more elaborate requests, like

#BSUB -n 8
#BSUB -R "span[ptile=4]"

This asks for 8 cores, grouped four by four, so that the cores will be split across two nodes. Splitting a job across nodes is only necessary if you need more than 20 cores in total, or if you need more memory than is available on a single node (see the example after this paragraph).
Please DO NOT explicitly specify the number of processes as an mpirun argument.
If you need to pass some extra mpirun arguments, ask the HPC support.
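For example, assuming nodes with 20 cores each, a job spanning two full nodes could be requested like this (the numbers are purely illustrative and should match the actual node size of your system):

#BSUB -n 40
#BSUB -R "span[ptile=20]"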

Compatibility with older syntax

On machines where network connectivity is based on ordinary Ethernet, NAMD uses the Charm++ communication layer and the program charmrun to launch namd2.
For backward compatibility, the charmrun program is also available in the MPI version, but in this case it is just a wrapper script that converts the charmrun invocation into the corresponding mpirun call.
However, it requires the number of processes to be specified explicitly, and this value overrides the one set by the scheduler. If launched without a process count, it defaults to 1.

So, the only correct way to use it is to invoke the program as

charmrun namd2 +p$LSB_DJOB_NUMPROC MySystem.conf > MySystem.log
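$LSB_DJOB_NUMPROC is the environment variable that LSF sets to the total number of slots allocated to the job, so the process count passed to charmrun always matches what was requested with #BSUB -n.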