OpenFOAM


 

General Info

OpenFOAM is an open-source Computational Fluid Dynamics (CFD) software package used for simulations in many different fields of engineering. It has a modular structure and consists of many different solvers and utilities for pre- and post-processing. For more information and documentation, please refer to the OpenFOAM website.

Being free and open-source software, OpenFOAM can be modified and re-compiled by the user. However, if you are going to run it on the DTU HPC cluster, we recommend using one of the versions already installed on our system. The package is installed as a module, so you have to load it before using it. The latest version currently available is 2.3.0.

 

Test/Interactive run

If you want to test it with a short interactive run, log in to the system and open an interactive session with

qrsh

or

linuxsh

Note: if you open an ssh session from the command line, remember that you land on the front-end node. You first have to switch to an interactive node before you can load any module.

Load the necessary modules. This example uses version 2.3.0:

module load OpenFoam/2.3.0/gcc-4.8.3-openmpi

If you just want to test that everything works, you can run one of the OpenFOAM examples. Create a directory for the test and copy the relevant files by typing the following commands in a terminal:

mkdir -p OpenFOAM/${USER}-2.3.0/run/test
cd OpenFOAM/${USER}-2.3.0/run/test
cp -Rv /appl/OpenFOAM/2.3.0/gcc-4.8.3-openmpi/OpenFOAM-2.3.0/tutorials/multiphase/interFoam/laminar .

Then run the example by typing:

cd laminar
./Allclean
./Allrun

Note: when the module is loaded, the usual OpenFOAM environment variables are set, together with the aliases usually defined at install time. For example, you can move to the tutorials directory by typing tut. However, always copy the relevant files to a folder in your own directory before running them.

 

OpenFOAM job scripts

If you came to the HPC system to use OpenFOAM, you are probably interested in running it in batch mode, to take advantage of the cluster's computational resources. OpenFOAM is compiled against OpenMPI and can therefore run in parallel both on a single node and across multiple nodes. Remember that you have to prepare your model (a “case” in OpenFOAM terminology) so that it is correctly set up to run in parallel, otherwise it will not behave as expected. We show here two simple job script examples, one for a serial job and one for a parallel job.
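Setting a case up for parallel execution means decomposing it into one subdomain per MPI process before the run. As a sketch (the keywords are standard OpenFOAM, but the values here are illustrative; adapt numberOfSubdomains to the number of cores you request), system/decomposeParDict could look like:

```
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains 8;      // must match the number of MPI processes
method             scotch; // automatic decomposition, no extra coefficients needed
```

Running decomposePar in the case directory then creates the processor# directories that the parallel run operates on.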

Serial job

We use the same files as in the previous interactive example. Following the general instructions for writing a job script, the first part of the script contains the information for the resource manager, and the second part the user's commands:

#!/bin/sh
# embedded options to qsub - start with #PBS
# -- job name --
#PBS -N openfoam_Serial
# -- email me at the beginning (b) and end (e) of the execution --
#PBS -m be
# -- put stderr and stdout in a single output file
#PBS -j oe
# -- My email address --
# please uncomment the following line and put in your e-mail address,
# if you want to receive e-mail notifications on a non-default address
##PBS -M your_email_address
# -- estimated wall clock time (execution time): hh:mm:ss --
#PBS -l walltime=01:00:00
# -- parallel environment requests --
#PBS -l nodes=1:ppn=1
# -- end of PBS options --

# -- change to working directory --
cd $PBS_O_WORKDIR
# -- load OpenFOAM --
module load OpenFoam/2.3.0/gcc-4.8.3-openmpi

# -- program invocation here --
cd ~/OpenFOAM/${USER}-2.3.0/run/test/laminar 
./Allclean
./Allrun

Save the script with the name you like, for example OpenFOAM_serial.sh and submit it:

qsub OpenFOAM_serial.sh

In this script we ask for only one core on a single node. An explanation of the #PBS options can be found here.

Parallel job

Here is a simple script for parallel OpenFOAM execution.

#!/bin/sh
# embedded options to qsub - start with #PBS
# -- job name --
#PBS -N openfoam_Parallel
# -- email me at the beginning (b) and end (e) of the execution --
#PBS -m be
# -- put stderr and stdout in a single output file
#PBS -j oe
# -- My email address --
# please uncomment the following line and put in your e-mail address,
# if you want to receive e-mail notifications on a non-default address
##PBS -M your_email_address
# -- estimated wall clock time (execution time): hh:mm:ss --
#PBS -l walltime=01:00:00
# -- parallel environment requests --
#PBS -l nodes=1:ppn=8
# -- end of PBS options --

# -- change to working directory --
cd $PBS_O_WORKDIR
# -- load OpenFOAM --
module load OpenFoam/2.3.0/gcc-4.8.3-openmpi

# -- program invocation here -- 
#cd /SCRATCH/username/mycase007 
#mpirun interFoam -parallel > Output.log

In this script, 8 cores are reserved on a single node:

#PBS -l nodes=1:ppn=8

Notice that there is no need to specify the number of cores as an argument of mpirun because the system OpenMPI installation gets this information directly from the resource manager.
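Should you ever need the core count explicitly (for example with an MPI installation that is not PBS-aware), it can be read from the node file that the resource manager exports. This is a sketch assuming a PBS environment; the fallback to /dev/null (i.e. 0 cores) only exists so the snippet does not fail outside a job:

```shell
# Sketch, assuming a PBS environment: the core count assigned to the job
# can be read from the node file exported by the resource manager.
# (Falls back to /dev/null, i.e. 0 cores, when run outside a PBS job.)
nodefile="${PBS_NODEFILE:-/dev/null}"
NPROCS=$(wc -l < "$nodefile")
echo "cores assigned to this job: $NPROCS"
# mpirun -np "$NPROCS" interFoam -parallel > Output.log
```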

To reserve 16 cores on 2 nodes, 8 on each, use:

#PBS -l nodes=2:ppn=8

Note that in this case the scheduler could still decide to allocate all 16 processes on a single node.

Note: at present there are 3 different kinds of nodes, with 8, 12 and 20 cores per node. It is recommended to completely fill up a node before distributing across multiple nodes.

 

Running “BIG” jobs

OpenFOAM produces a lot of small files during the computation, organized in a peculiar hierarchy of directories. During a single simulation, many thousands of small files are created, read and written. This puts a lot of pressure on the filesystem, potentially causing substantial performance degradation and affecting the whole cluster. Following the advice below helps mitigate these side effects and, at the same time, gives better performance.

 

1. Run on SCRATCH filesystem

We recommend that OpenFOAM users run their BIG jobs directly on the SCRATCH filesystem. The advantages are that SCRATCH is connected to the nodes via a faster interconnect than $HOME, and that there is no quota limit on SCRATCH, so even large simulations can be run. However, SCRATCH is not backed up, and its performance degrades rapidly when there is not enough free space left.

For these reasons, you are asked to:

  • write an email to support@cc.dtu.dk and ask for a personal directory under SCRATCH (it will be named after your username, i.e. /SCRATCH/username/);
  • read carefully the instructions in /SCRATCH/readme.txt;
  • run the simulation in your /SCRATCH/username/;
    • there is no back-up, so keep a copy of your jobscript- and input-files in your home directory;
    • this is a shared filesystem without quota limit: please clean up the files and directories you don’t need any longer as soon as possible;
  • after the simulation:
    • reconstruct the model, and then remove all the processor## directories;
    • copy the important results back to $HOME (if you do it manually, then please use the dedicated transfer.gbar.dtu.dk machine);
    • remove the original files from SCRATCH.

 

2. Change I/O Settings in your project

The amount of reading, writing and inspection a simulation performs is controlled by the settings in the controlDict dictionary (file system/controlDict in the main directory of your case; see the OpenFOAM guide).

1. Reduce the checkpoint frequency:

keywords writeControl and writeInterval
These keywords set the frequency of writing results to file; choose them so that checkpoints are written as rarely as your analysis allows.

2. Reduce the number of checkpoint files saved during the job execution:

keyword purgeWrite
You can control the number of checkpoints kept during the computation, i.e. you can checkpoint frequently but keep only the most recent ones, with the older ones being deleted. This is done with the keyword purgeWrite: purgeWrite 0 means that all checkpoints are kept, while purgeWrite n means that only the n most recent checkpoints are kept, cyclically overwriting the older ones during the simulation. For steady-state solutions, results from previous iterations can be continuously overwritten by specifying purgeWrite 1. Notice that the default value is 0.

3. Avoid very small time step:

keyword deltaT
Using unnecessarily small time steps hurts the performance of your simulations, because at each time step the data need to be synchronized and a log file entry is written. This book-keeping requires additional time and puts additional load on the filesystem. It is therefore beneficial to aim for the largest possible time step, which can be achieved for example by increasing the number of cells per core.

4. Do not allow on-the-fly modifications:

keyword runTimeModifiable no
By default, OpenFOAM allows the user to modify the problem parameters on the fly. This is a bad idea for many reasons, and should only be used in testing/debugging mode. It also has a very bad effect on performance, since it forces OpenFOAM to inquire about the properties (stats) of the input files (all the dictionaries) at every time step, putting additional unnecessary load on the filesystem and slowing down the computation. Therefore, please explicitly set the keyword runTimeModifiable to no.
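Taken together, the settings above could look like this in system/controlDict (an illustrative sketch, not a complete controlDict; the numeric values are examples only):

```
writeControl      timeStep;   // write based on time-step count
writeInterval     200;        // checkpoint every 200 steps (example value)
purgeWrite        2;          // keep only the 2 most recent checkpoints
runTimeModifiable no;         // do not re-read the dictionaries at every step
```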

 

3. Estimating the disk space required

It can be a good idea to estimate the amount of data your job will produce:

  • Create a test job with a couple of time steps;
  • then run
    du -h
    on your subdirectory to find out the size of a single time step for a single processor, and estimate the amount of data that the full run will produce. If it is more than 500 GB for the complete run, then think about your setup again, perhaps reducing the frequency of the saved data.
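The estimate itself is simple arithmetic; here is a back-of-the-envelope sketch with hypothetical numbers (replace STEP_SIZE_MB and N_STEPS with the values measured for your own case):

```shell
# Back-of-the-envelope sketch (hypothetical numbers; replace with your own):
STEP_SIZE_MB=250   # size of one saved time step, e.g. from: du -sm <time dir>
N_STEPS=2000       # number of time steps that will be written to disk
TOTAL_GB=$(( STEP_SIZE_MB * N_STEPS / 1024 ))
echo "Estimated output: ${TOTAL_GB} GB"
```

With these example numbers the run would produce roughly 488 GB, i.e. close to the 500 GB threshold mentioned above, so reducing the checkpoint frequency would be advisable.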

 

4. Reconstructing the model: hint

After running OpenFOAM in parallel you have to reconstruct your model. You can do it via a small batch script, like:

#!/bin/sh
# embedded options to qsub - start with #PBS
# -- job name --
#PBS -N openfoam_Reconstruct
# -- email me at the beginning (b) and end (e) of the execution --
#PBS -m be
# -- put stderr and stdout in a single output file
#PBS -j oe
# -- My email address --
# please uncomment the following line and put in your e-mail address,
# if you want to receive e-mail notifications on a non-default address
##PBS -M your_email_address
# -- estimated wall clock time (execution time): hh:mm:ss --
#PBS -l walltime=08:00:00
# -- parallel environment requests --
#PBS -l nodes=1:ppn=1
# -- end of PBS options --

# -- change to working directory --
cd $PBS_O_WORKDIR
# -- load OpenFOAM --
module load OpenFoam/2.3.0/gcc-4.8.3-openmpi

# -- program invocation here --
# -- if needed,  cd to the directory that contains your processor# directory --
#cd ~/Path_of_your_Case

# -- run model reconstruction --
reconstructPar
ret=$?

# -- remove processor-directories if reconstruction was successful
if [ $ret -eq 0 ] ; then
      for i in processor* ; do rm -rf $i & done
      wait 
fi
exit $ret

Then run this job file as a normal job.
Notice the lines

if [ $ret -eq 0 ] ; then
      for i in processor* ; do rm -rf $i & done
      wait 
fi 

which check the return code of the reconstructPar command to make sure that the reconstruction was successful before deleting the temporary processor## directories.

Older Versions

You may be interested in using older versions of OpenFOAM for benchmarking, or because of compatibility issues due to the limited backward compatibility of the new releases. For this reason, the older versions of OpenFOAM are kept available to the cluster users as independent modules. After logging in, and after moving to one of the app nodes (linuxsh) or the hpc interactive nodes (qrsh), type

module avail

to check which OpenFOAM modules are available, and load the one that you need. Be aware that you may have to modify the way your model is specified in order to make it run with a different OpenFOAM version.