ANSYS

Description

ANSYS is a finite element analysis software package that enables organizations to confidently predict how their products will operate in the real world.

The ANSYS home page is at http://www.ansys.com.

Available modules

Packages with modules

Module              NeSI Cluster
ANSYS/15.0          pan
ANSYS/16.0          pan
ANSYS/17.0          pan
ANSYS/UoA-ABI-15.0  pan

Licensing requirements

ANSYS is made available to research groups, departments and institutions under the terms of closed-source, commercial licence agreements. If you have any questions regarding your eligibility to access this software package or any particular version of it, please contact our support desk.

Example scripts

Example scripts for the Pan cluster

ANSYS CFX

#!/bin/bash -e

#SBATCH --job-name      CFX
#SBATCH --account       nesi99999             # Project Account
#SBATCH --time          01:00:00              # Walltime
#SBATCH --ntasks        64                    # Number of CPUs to use
#SBATCH --mem-per-cpu   7G                    # Memory per CPU
#SBATCH --licenses      ansys_hpc:48          # One licence token per CPU, less 16
#SBATCH --output        CFX_%j.out            # Include the job ID in the names
#SBATCH --error         CFX_%j.err            # of the output and error files

module load ANSYS/17.0

input=/share/test/ansys/cfx/case4/TransPressureBC_VariableSurfTemp_IdealGases_Mask.def

cfx5solve -batch -single -def "$input" -parallel -part $SLURM_NTASKS

ANSYS Fluent SMP (one node) using GPUs

#!/bin/bash -e

#SBATCH --job-name      Fluent_GPU
#SBATCH --account       nesi99999             # Project Account
#SBATCH --time          01:00:00              # Walltime
#SBATCH --cpus-per-task 16                    # Number of CPUs to use
#SBATCH --mem-per-cpu   4G                    # Memory per CPU
#SBATCH --gres          gpu:2                 # Number of GPUs to use
#SBATCH --licenses      ansys_hpc:2           # One licence token per CPU/GPU, less 16
#SBATCH --output        Fluent_GPU_job.%j.out # Include the job ID in the names
#SBATCH --error         Fluent_GPU_job.%j.err # of the output and error files

module load ANSYS/17.0

ln -s /share/test/ansys/fluent/case1/input/{fluent.in,testCase.cas,testCase.dat} ./

# Use one of the -v options 2d, 2ddp, 3d, or 3ddp
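# The -gpgpu value should match the number of GPUs requested with --gres above.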
fluent -v3ddp -g -t$SLURM_CPUS_PER_TASK -gpgpu=2 -i fluent.in

ANSYS Fluent MPI (multiple nodes) without GPUs

#!/bin/bash -e

#SBATCH --job-name      Fluent_MPI_job
#SBATCH --account       nesi99999             # Project account
#SBATCH --time          01:03:00              # Walltime
#SBATCH --ntasks        24                    # Number of CPUs to use
#SBATCH --mem-per-cpu   4G                    # Memory per CPU
#SBATCH --licenses      ansys_hpc:8           # One licence token per CPU, less 16
#SBATCH --output        Fluent_MPI_job.%j.out # Include the job ID in the names 
#SBATCH --error         Fluent_MPI_job.%j.err # of the output and error files

module load ANSYS/17.0
export MALLOC_CHECK_=1   # Avoid some Platform MPI memory errors.

ln -s /share/test/ansys/fluent/case1/input/{fluent.in,testCase.cas,testCase.dat} ./

# Use one of the -v options 2d, 2ddp, 3d, or 3ddp
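# -slurm and -pib tell Fluent to integrate with the Slurm scheduler and to use
# the InfiniBand interconnect, respectively.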
fluent -v3ddp -g -slurm -mpi=pcmpi -t$SLURM_NTASKS -pib -i fluent.in
# Try the following command instead if your job fails with the
# "-slurm -mpi=pcmpi" arguments
#fluent -v3ddp -g -mpi=openmpi -t$SLURM_NTASKS -pib -i fluent.in

If the "-slurm -mpi=pcmpi" way of running Fluent fails on your model(s), please try the -mpi=openmpi alternative and let us know how it goes. We are interested to know which of these is the most reliable.

ANSYS LS-DYNA

#!/bin/bash -e

#SBATCH --job-name      LS-DYNA
#SBATCH --account       nesi99999             # Project Account
#SBATCH --time          01:00:00              # Walltime
#SBATCH --ntasks        16                    # Number of CPUs to use
#SBATCH --mem-per-cpu   4G                    # Memory per CPU
#SBATCH --output        LS-DYNA_job.%j.out    # Include the job ID in the names
#SBATCH --error         LS-DYNA_job.%j.err    # of the output and error files

module load ANSYS/15.0

input=/share/test/lsdyna/3cars/3cars_shell2_150ms.k

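# LS-DYNA's memory= option is specified in words; dividing the per-CPU memory
# (in MB) by 8 assumes 8-byte (double precision) words.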
lsdyna150 i="$input" memory=$(($SLURM_MEM_PER_CPU/8))M -dis -np $SLURM_NTASKS

ANSYS Mechanical

#!/bin/bash -e

#SBATCH --job-name      ANSYS
#SBATCH --account       nesi99999             # Project Account
#SBATCH --time          00:05:00              # Walltime
#SBATCH --ntasks        4                     # Number of CPUs to use
#SBATCH --mem-per-cpu   4G                    # Memory per cpu
#SBATCH --output        ANSYS_job.%j.out      # Include the job ID in the names
#SBATCH --error         ANSYS_job.%j.err      # of the output and error files

module load ANSYS/17.0

input=/share/test/ansys/mechanical/structural.dat

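# -b runs in batch mode, -dis selects distributed-memory parallel, and
# -np sets the number of processes.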
ansys170 -b -dis -mpi intel -np $SLURM_NTASKS -i "$input"

ANSYS Fluid-Structure Interaction

#!/bin/bash -e

#SBATCH --job-name      ANSYS_FSI
#SBATCH --account       nesi99999             # Project Account
#SBATCH --time          01:00:00              # Walltime
#SBATCH --ntasks        16                    # Number of CPUs to use
#SBATCH --mem-per-cpu   4G                    # Memory per CPU
#SBATCH --output        FSI_job.%j.out        # Include the job ID in the names
#SBATCH --error         FSI_job.%j.err        # of the output and error files

module load ANSYS/17.0

COMP_CPUS=$((SLURM_NTASKS-1))
MECHANICAL_CPUS=1
FLUID_CPUS=$((COMP_CPUS-MECHANICAL_CPUS))
export SLURM_EXCLUSIVE="" # don't share CPUs
echo "CPUs: Coupler:1 Struct:$MECHANICAL_CPUS Fluid:$FLUID_CPUS"

echo "STARTING SYSTEM COUPLER"

cd Coupling

# Run the system coupler in the background.
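# If it fails, cancel the whole job so the other components do not run on
# without it.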
srun -N1 -n1 $WORKBENCH_CMD \
    ansys.services.systemcoupling.exe \
    -inputFile coupling.sci || scancel $SLURM_JOBID &
cd ..
serverfile="$PWD/Coupling/scServer.scs"

while [[ ! -f "$serverfile" ]] ; do
    sleep 1 # waiting for SC to start
done
sleep 1

echo "PARSING SYSTEM COUPLER CONFIG"

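# The scServer.scs file written by the system coupler contains a line of the
# form <port>@<host>, then the number of participant solvers, then a name line
# and a type line for each solver.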
{
    read hostport
    port=${hostport%@*}
    node=${hostport#*@}
    read count
    for solver in $(seq $count)
    do
        read solname
        read soltype
        case $soltype in 
            Fluid) fluentsolname=$solname;;
            Structural) mechsolname=$solname;;
        esac
    done
} < "$serverfile"

echo " Port number: $port"
echo " Node name: $node"
echo " Fluent name: $fluentsolname"
echo " Mechanical name: $mechsolname"

echo "STARTING ANSYS"

cd Structural

# Run ANSYS in the background, alongside the system coupler and Fluent.
ansys170 -b -dis -mpi intel -np $MECHANICAL_CPUS \
    -scport $port -schost $node -scname "$mechsolname" \
    -i "structural.dat" > struct.out || scancel $SLURM_JOBID &
cd ..

sleep 2
echo "STARTING FLUENT"

cd FluidFlow

export MALLOC_CHECK_=1     # Avoid some Platform MPI memory errors in Fluent.

# Run Fluent in the background, alongside the system coupler and ANSYS.
fluent 3ddp -g -mpi=pcmpi -pib -slurm -t$FLUID_CPUS \
    -scport=$port -schost=$node -scname="$fluentsolname" \
    -i "fluidFlow.jou" > fluent.out || scancel $SLURM_JOBID &
cd ..

# Before exiting, wait for all background tasks (the system coupler, ANSYS and
# Fluent) to complete.
wait

Best Practices

Licence counts

ANSYS licensing has many dimensions, but the aspect most likely to reach its limit on the cluster is the HPC licence. Specifically, a limited number of HPC licence tokens is available, and both Fluent and CFX jobs require one token per processor (CPU or GPU) after the first 16 processors. By providing the sbatch option --licenses with the right value (as in several of the example scripts above), you reserve HPC licence tokens for your job, so that when the job starts the necessary tokens are (almost) guaranteed to be available.
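For example (the numbers here are hypothetical), a Fluent or CFX job using 40 CPUs would need 40 - 16 = 24 HPC licence tokens:

#SBATCH --ntasks        40                    # Number of CPUs to use
#SBATCH --licenses      ansys_hpc:24          # One licence token per CPU, less 16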

GPU acceleration support

GPUs can be slow for smaller jobs because it takes time to transfer data from the main memory to the GPU memory. We therefore suggest that you only use them for larger jobs, unless benchmarking reveals otherwise.

IO bottleneck

Some ANSYS applications generate intensive I/O. The current cluster file system performs very slowly under heavy I/O load, and ANSYS jobs are often among those affected. Such slowdowns can be avoided by using a local temporary file system. The Slurm workload manager creates a temporary folder on the local disk for each job, and its path is stored in the $TMP_DIR environment variable. If your job runs on one node, we suggest that you transfer your input files to that folder, run the analysis there, and transfer the results back to your starting directory at the end of the job. This workflow is demonstrated in the following example script:

#!/bin/bash -e

# The -e option (above) tells the job to exit immediately with a useful exit
# status if the main command fails. Otherwise, the command to copy the output
# might succeed and hide the fact that the main command has failed.

# Get the absolute path to the input file ($input must be set before this
# point), since we will be changing to a different working directory:
input=$(readlink -f "$input")

# If this job fits on one node then we can use the local disk $TMP_DIR
# as the working directory, otherwise we can use the shared $SCRATCH_DIR.
if [[ $SLURM_JOB_NUM_NODES -gt 1 ]]
then
    cd $SCRATCH_DIR
else
    cd $TMP_DIR
fi

# Get the input file.  For some programs a symlink won't do, or 
# the input must be in the working directory, so the safest way is:
cp $input ./
input=$(basename $input)

# Your ANSYS command line:
<main command using $input>

# Copy the result file(s) back out
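# ($OLDPWD is the directory we started in. Removing the local copy of the
# input first ensures it is not copied back with the results.)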
rm -rfv $input
cp -arv --no-preserve=mode * $OLDPWD
