OpenFOAM

OpenFOAM (Open Field Operation And Manipulation) is an open-source C++ toolbox maintained by the OpenFOAM Foundation and ESI Group. Although primarily used for CFD (Computational Fluid Dynamics), OpenFOAM can be applied in a wide range of fields, from solid mechanics to chemistry.

The lack of licence restrictions and its built-in parallelisation make OpenFOAM well suited to an HPC environment.

OpenFOAM can be loaded using:

module load OpenFOAM
source $FOAM_BASH
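
Several OpenFOAM versions are installed. To list them and load a specific build rather than the default (the version string below matches the example script further down and may not be the newest available):

module avail OpenFOAM
module load OpenFOAM/v1712-gimkl-2017a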

Example Script

#!/bin/bash -e
#=================================================#
#title          OpenFoamTest.sl
#description    Run simpleFoam in parallel.
#author         NeSI
#=================================================#
#SBATCH --time			    04:00:00
#SBATCH --job-name		    OF_16CORES
#SBATCH --output		    %x.output   #set output to job name
#SBATCH --ntasks		    16
#SBATCH --mem-per-cpu	            1500        #Memory per CPU (in MB).
#=================================================#

#Working directory always needs to contain 'system', 'constant', and '0'
DIR_WORKING="/nesi/nobackup/nesi99999/OpenFOAM/testRun"
#Add this script to start of output
cat ${0}
cd ${DIR_WORKING}

module load OpenFOAM/v1712-gimkl-2017a
source ${FOAM_BASH}

decomposePar                     #Break domain into pieces for parallel execution.
srun simpleFoam -parallel
reconstructPar -latestTime       #Collect the results.
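
Note that decomposePar reads system/decomposeParDict, and the numberOfSubdomains set there must match the --ntasks requested above. One way to keep the two in sync is sketched below, added to the Slurm script after changing into the case directory and using the foamDictionary utility shipped with recent OpenFOAM versions (the scotch method is purely illustrative; keep whatever method your case already defines).

#Assumes the case already contains system/decomposeParDict.
foamDictionary -entry numberOfSubdomains -set ${SLURM_NTASKS} system/decomposeParDict
foamDictionary -entry method -set scotch system/decomposeParDict

The job itself is submitted with sbatch OpenFoamTest.sl.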

Filesystem Limitations

OpenFOAM generates a large number of files at run-time. In addition to the I/O load, there is also the danger of using up your available inodes.

If a filesystem exceeds its allocation (disk space or inodes), any job trying to write to it will crash.

There are a few ways to mitigate this:

  • Use /nesi/nobackup
    The nobackup directory has a significantly higher inode count and no disk space limits.
  • controlDict Settings (an example of applying these is sketched after this list)
    • writeInterval
      Using a high write interval reduces the number of output files and the I/O load.
    • deltaT
      Choose an appropriate time-step carefully, and use adjustTimeStep if suitable.
    • purgeWrite
      Not applicable for many jobs, this keeps only the last n time-steps, e.g. purgeWrite 5 will keep only the last 5 time-step directories, with older ones being overwritten cyclically.
    • runTimeModifiable
      When true, dictionaries will be re-read at the start of every time step. Setting this to false will decrease I/O load.
    • writeFormat
      Setting this to binary as opposed to ascii will decrease disk use and I/O load.
  • Monitor Filesystem 
    The command nn_check_quota can be used to track filesystem usage. Note that there is a delay between making changes on a filesystem and seeing them reflected in nn_check_quota.
    Filesystem           Available   Used      Use%     Inodes      IUsed     IUse%
    home_cwal219         20G         1.957G    9.79%    92160       21052     22.84%
    project_nesi99999    2T          798G      38.96%   100000      66951     66.95%
    nobackup_nesi99999               6.833T             10000000    2691383   26.91%
  • Contact Support
    If you are following the recommendations here but are still concerned about inodes, open a support ticket and we can raise the limit for you.
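
As a sketch of the controlDict settings mentioned above, the entries can be changed from the command line (after loading the OpenFOAM module) using foamDictionary, which is available in recent OpenFOAM versions; the values here are purely illustrative, not recommendations for any particular case.

#Illustrative values only; choose what suits your simulation.
foamDictionary -entry writeInterval -set 100 system/controlDict
foamDictionary -entry purgeWrite -set 5 system/controlDict          #Keep only the last 5 time directories.
foamDictionary -entry runTimeModifiable -set false system/controlDict
foamDictionary -entry writeFormat -set binary system/controlDict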

Environment Variables

You may find it useful to reference environment variables in your dictionaries, e.g.

numberOfSubdomains ${SLURM_NTASKS};

Or create your own variables and set them in your Slurm script:

startFrom ${START_TIME};

This is essential when running parameter sweeps.
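
As a minimal sketch of the Slurm side, the variable referenced in the dictionary can be exported before the solver runs (START_TIME is simply a name chosen for illustration, not something Slurm provides).

#In the Slurm script, before launching the solver:
export START_TIME=latestTime        #Illustrative value; OpenFOAM expands ${START_TIME} when reading the dictionary.
srun simpleFoam -parallel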

 
