ANSYS

Description

The ANSYS package is a commercial bundle of engineering software which includes the ANSYS finite element analysis software together with other components such as Fluent and CFX for computational fluid dynamics.

The ANSYS home page is at http://www.ansys.com.

Availability

ANSYS is presently installed on the Pan and Mahuika clusters. 

Licensing requirements

ANSYS is made available to research groups, departments and institutions under the terms of closed-source, commercial licence agreements. If you have any questions regarding your eligibility to access this software package or any particular version of it, please contact our support desk.  See also the section below about license counts.

Example scripts

ANSYS CFX

#!/bin/bash -e
#SBATCH --job-name      CFX
#SBATCH --account       nesi99999         # Project Account
#SBATCH --time          01:00:00          # Walltime
#SBATCH --ntasks        64                # Number of CPUs to use
#SBATCH --mem-per-cpu   3G                # Memory per CPU
#SBATCH --licenses      ansys_hpc:48      # One licence token per CPU, less 16
#SBATCH --output        CFX_%j.out        # Include the job ID in the names
#SBATCH --error         CFX_%j.err        # of the output and error files

module load ANSYS/18.1
input=TransPressureBC_VariableSurfTemp_IdealGases_Mask.def
cfx5solve -batch -single -def "$input" -parallel -part $SLURM_NTASKS

ANSYS Fluent SMP (one node) using GPUs

#!/bin/bash -e
#SBATCH --job-name      Fluent_GPU
#SBATCH --account       nesi99999         # Project Account
#SBATCH --time          01:00:00          # Walltime
#SBATCH --cpus-per-task 16                # Number of CPUs to use
#SBATCH --mem-per-cpu   3G                # memory per CPU
#SBATCH --gres          gpu:2             # Number of GPUs to use
#SBATCH --partition     gpu
#SBATCH --licenses      ansys_hpc:2       # One licence token per CPU/GPU, less 16
#SBATCH --output        Fluent_GPU.%j.out # Include the job ID in the names
#SBATCH --error         Fluent_GPU.%j.err # of the output and error files

module load ANSYS/18.1
JOURNAL_FILE=fluent.$SLURM_JOB_ID.in
cat <<EndOfJournalFile > $JOURNAL_FILE
rcd testCase.cas
/solve/dual-time-iterate 10
/file/write-case-data cdfiner.out.cas
exit
EndOfJournalFile
# Use one of the -v options 2d, 2ddp, 3d, or 3ddp
fluent -v3ddp -g -t$SLURM_CPUS_PER_TASK -gpgpu=2 -i $JOURNAL_FILE

ANSYS Fluent MPI (multiple nodes) without GPUs

#!/bin/bash -e

#SBATCH --job-name      Fluent_MPI_job
#SBATCH --account       nesi99999         # Project account
#SBATCH --time          01:03:00          # Wall time
#SBATCH --ntasks        24                # Number of CPUs to use
#SBATCH --mem-per-cpu   3G                # Memory per CPU
#SBATCH --licenses      ansys_hpc:8       # One licence token per CPU, less 16
#SBATCH --output        Fluent_MPI.%j.out # Include the job ID in the names 
#SBATCH --error         Fluent_MPI.%j.err # of the output and error files

module load ANSYS/18.1

JOURNAL_FILE=fluent.$SLURM_JOB_ID.in
cat <<EndOfJournalFile > $JOURNAL_FILE
rcd testCase.cas
/solve/dual-time-iterate 10
/file/write-case-data cdfiner.out.cas
exit
EndOfJournalFile
# Use one of the -v options 2d, 2ddp, 3d, or 3ddp
fluent -v3ddp -g -mpi=openmpi -t$SLURM_NTASKS -pib -i $JOURNAL_FILE

ANSYS LS-DYNA

#!/bin/bash -e
#SBATCH --job-name      LS-DYNA
#SBATCH --account       nesi99999         # Project Account
#SBATCH --time          01:00:00          # Walltime
#SBATCH --ntasks        16                # Number of CPUs to use
#SBATCH --mem-per-cpu   3G                # Memory per cpu
#SBATCH --output        LS-DYNA.%j.out    # Include the job ID in the names
#SBATCH --error         LS-DYNA.%j.err    # of the output and error files

module load ANSYS/18.1
input=3cars_shell2_150ms.k
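# LS-DYNA takes its memory request in words rather than bytes; dividing the
# per-CPU memory (in MB) by 8 assumes the double-precision solver (8 bytes/word).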
lsdyna i="$input" memory=$(($SLURM_MEM_PER_CPU/8))M -dis -np $SLURM_NTASKS

ANSYS Mechanical APDL

#!/bin/bash -e

#SBATCH --job-name      ANSYS
#SBATCH --account       nesi99999         # Project Account
#SBATCH --time          00:05:00          # Walltime
#SBATCH --ntasks        4                 # Number of CPUs to use
#SBATCH --mem-per-cpu   3G                # Memory per cpu
#SBATCH --output        ANSYS_job.%j.out  # Include the job ID in the names
#SBATCH --error         ANSYS_job.%j.err  # of the output and error files

module load ANSYS/18.1
input=/share/test/ansys/mechanical/structural.dat
mapdl -b -dis -np $SLURM_NTASKS -i "$input"

ANSYS Fluid-Structure Interaction

#!/bin/bash -e
#SBATCH --job-name      ANSYS_FSI
#SBATCH --account       nesi99999         # Project Account
#SBATCH --time          01:00:00          # Walltime
#SBATCH --ntasks        16                # Number of CPUs to use
#SBATCH --mem-per-cpu   3G                # Memory per CPU
#SBATCH --output        FSI_job.%j.out    # Include the job ID in the names
#SBATCH --error         FSI_job.%j.err    # of the output and error files

module load ANSYS/18.1

COMP_CPUS=$((SLURM_NTASKS-1))
MECHANICAL_CPUS=1
FLUID_CPUS=$((COMP_CPUS-MECHANICAL_CPUS))
export SLURM_EXCLUSIVE="" # don't share CPUs
echo "CPUs: Coupler:1 Struct:$MECHANICAL_CPUS Fluid:$FLUID_CPUS"

echo "STARTING SYSTEM COUPLER"

cd Coupling

# Run the system coupler in the background.
srun -N1 -n1 $WORKBENCH_CMD \
    ansys.services.systemcoupling.exe \
    -inputFile coupling.sci || scancel $SLURM_JOBID &
cd ..
serverfile="$PWD/Coupling/scServer.scs"

while [[ ! -f "$serverfile" ]] ; do
    sleep 1 # waiting for SC to start
done
sleep 1

echo "PARSING SYSTEM COUPLER CONFIG"

{
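    # scServer.scs holds the coupler's port@host on its first line, then the
    # number of participating solvers, then a name and type for each solver.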
    read hostport
    port=${hostport%@*}
    node=${hostport#*@}
    read count
    for solver in $(seq $count)
    do
        read solname
        read soltype
        case $soltype in 
            Fluid) fluentsolname=$solname;;
            Structural) mechsolname=$solname;;
        esac
    done
} < "$serverfile"

echo " Port number: $port"
echo " Node name: $node"
echo " Fluent name: $fluentsolname"
echo " Mechanical name: $mechsolname"

echo "STARTING ANSYS"

cd Structural

# Run ANSYS in the background, alongside the system coupler and Fluent.
mapdl -b -dis -mpi intel -np $MECHANICAL_CPUS \
    -scport $port -schost $node -scname "$mechsolname" \
    -i "structural.dat" > struct.out || scancel $SLURM_JOBID &
cd ..

sleep 2
echo "STARTING FLUENT"

cd FluidFlow

# Run Fluent in the background, alongside the system coupler and ANSYS.
fluent 3ddp -g -mpi=openmpi -pib -t$FLUID_CPUS \
    -scport=$port -schost=$node -scname="$fluentsolname" \
    -i "fluidFlow.jou" > fluent.out || scancel $SLURM_JOBID &
cd ..

# Before exiting, wait for all background tasks (the system coupler, ANSYS and
# Fluent) to complete.
wait

Best Practices

Licence counts

ANSYS licensing has many dimensions, but the one most likely to reach its limit on the cluster is the HPC licence. There is a limited number of HPC licence tokens available, and both Fluent and CFX jobs require one token per processor (CPU core or GPU) after the first 16 processors. Use the sbatch option --licenses (as in several of the example scripts above) to indicate how many HPC licence tokens your ANSYS CFD job requires; Slurm will then not start the job until it believes enough tokens are free. This is not foolproof, since non-Slurm jobs drawing on the same pool of licences can start without notice, but it greatly reduces the risk of jobs failing for lack of licence tokens.
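
For example, under that rule a 40-CPU Fluent or CFX job needs 40 - 16 = 24 tokens; the relevant header lines (a sketch following the pattern of the scripts above) would be:

#SBATCH --ntasks        40                # Number of CPUs to use
#SBATCH --licenses      ansys_hpc:24      # One licence token per CPU, less 16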

GPU acceleration support

GPUs can be slow for smaller jobs because it takes time to transfer data from the main memory to the GPU memory. We therefore suggest that you only use them for larger jobs, unless benchmarking reveals otherwise.

Interactive use

It is best to automate ANSYS with journal files and similar mechanisms so that you can submit batch jobs. However, when interactivity is genuinely needed along with more CPU power and/or memory than it is reasonable to take from a login node (for example, postprocessing a large output file), an alternative that may work is to run the GUI frontend on a login node while the MPI tasks it launches run on a compute node. This requires using salloc instead of sbatch, for example:

salloc -A nesi99999 -t 30 -n 16 -C avx --mem-per-cpu=2G bash -c 'module load ANSYS; fluent -v3ddp -mpi=openmpi -t$SLURM_NTASKS -pib' 

As with any job, you may have to wait a while before the resources are granted and you can begin, so you might want to use the --mail-type=BEGIN and --mail-user options.
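
For example, appending notification options to the salloc command above (the email address here is only a placeholder):

salloc -A nesi99999 -t 30 -n 16 -C avx --mem-per-cpu=2G --mail-type=BEGIN --mail-user=your.name@example.org bash -c 'module load ANSYS; fluent -v3ddp -mpi=openmpi -t$SLURM_NTASKS -pib'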

Alternative MPI choices for large distributed jobs

If the -mpi=openmpi way of running Fluent fails on your model(s), please try -slurm -mpi=ibmmpi (for ANSYS 18) or -slurm -mpi=pcmpi (for ANSYS 17 or earlier) and let us know how it goes. We are interested to know which of these options is the most reliable.  Setting MALLOC_CHECK_=1 may also help ibmmpi to work.
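
As a sketch, the Fluent command from the MPI example above might become the following for ANSYS 18 (for ANSYS 17 or earlier, use -mpi=pcmpi instead of -mpi=ibmmpi):

export MALLOC_CHECK_=1   # may help ibmmpi to work
fluent -v3ddp -g -slurm -mpi=ibmmpi -t$SLURM_NTASKS -pib -i $JOURNAL_FILE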

The other ANSYS applications default to Intel MPI (as of ANSYS 18), and this works, but only because we intercept the use of mpirun and call srun instead. If this causes any problems, please let us know. For mapdl it can be sidestepped by selecting IBM MPI with the option -mpi ibmmpi, as shown below.
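
For instance, the Mechanical APDL command from the script above with IBM MPI selected explicitly:

mapdl -b -dis -mpi ibmmpi -np $SLURM_NTASKS -i "$input"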


Labels: pan mahuika tier1 engineering