ABAQUS

Abaqus Unified FEA (widely known as ABAQUS) is Finite Element Analysis software for modelling, visualisation and best-in-class implicit and explicit dynamics FEA. ABAQUS can make use of Python for custom scripting.
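As a sketch of that scripting hook, a Python file can be passed to the Abaqus kernel from the command line; the script name below is a placeholder, not part of this article:

```shell
# Run a Python scripting file through Abaqus without opening the GUI.
# 'my_model.py' is a hypothetical file name used for illustration.
abaqus cae noGUI=my_model.py
```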

ABAQUS can be loaded using the command:

module load ABAQUS

A list of commands can be found with:

abaqus help

Example scripts

Serial


For when only one CPU is required, generally as part of a job array.

#!/bin/bash -e

#SBATCH --job-name      ABAQUS-Serial
#SBATCH --time          00:05:00        # Walltime
#SBATCH --cpus-per-task 1
#SBATCH --mem           1500            # Total memory (MB)
#SBATCH --hint          nomultithread   # Hyperthreading disabled

module load ABAQUS/2018

abaqus job="propeller_s4rs_c3d8r" verbose=2 interactive
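The serial script above can be adapted into a job array. This is a sketch assuming hypothetical input files named propeller_1.inp, propeller_2.inp, propeller_3.inp (one per array index); the names are examples, not from this article:

```shell
#!/bin/bash -e
#SBATCH --job-name      ABAQUS-Array
#SBATCH --time          00:05:00        # Walltime per array task
#SBATCH --cpus-per-task 1
#SBATCH --mem           1500            # Total memory (MB)
#SBATCH --hint          nomultithread   # Hyperthreading disabled
#SBATCH --array         1-3             # One task per input file

# Default the index so the selection logic can be checked outside SLURM.
TASK_ID=${SLURM_ARRAY_TASK_ID:-1}

# Each array task picks its own input, e.g. propeller_1.inp for index 1.
JOB_NAME="propeller_${TASK_ID}"
echo "${JOB_NAME}"

# module load ABAQUS/2018
# abaqus job="${JOB_NAME}" verbose=2 interactive
```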

Shared Memory


Uses a node's shared memory for communication.

May give a small speedup over MPI at low CPU counts, but scales poorly. Requires significantly less memory than MPI.

Hyperthreading may be enabled when using shared memory, but it is not recommended.

#!/bin/bash -e

#SBATCH --job-name      ABAQUS-Shared
#SBATCH --time          00:05:00        # Walltime
#SBATCH --cpus-per-task 4
#SBATCH --mem           20G             # Total memory
#SBATCH --hint          nomultithread   # Hyperthreading disabled

module load ABAQUS/2018

abaqus job="propeller_s4rs_c3d8r" verbose=2 interactive cpus=${SLURM_CPUS_PER_TASK} mp_mode=threads

Distributed Memory 


Multiple processes each with a single thread.

Not limited to one node.
The model will be segmented into -np pieces, which should be equal to --ntasks.

Each task could be running on a different node, leading to increased communication overhead. Jobs can be limited to a single node by adding --nodes=1, however this will increase your time in the queue as contiguous CPUs are harder to schedule.

This is the default method if mp_mode is left unspecified.

#!/bin/bash -e

#SBATCH --job-name      ABAQUS-Distributed
#SBATCH --time          00:05:00        # Walltime
#SBATCH --ntasks        8
#SBATCH --mem-per-cpu   1500            # Each task needs its own memory (MB)
#SBATCH --hint          nomultithread   # Hyperthreading disabled
#SBATCH --nodes         1               # Optional: keep all tasks on one node

module load ABAQUS/2018

abaqus job="propeller_s4rs_c3d8r" verbose=2 interactive cpus=${SLURM_NTASKS} mp_mode=mpi

Shared Memory + GPUs


Must run on a GPU node.

The GPU nodes are limited to 16 CPUs.

In order for the GPUs to be worthwhile, you should see a speedup equivalent to 56 CPUs per GPU used.

#!/bin/bash -e

#SBATCH --job-name      ABAQUS-GPU
#SBATCH --time          00:05:00        # Walltime
#SBATCH --cpus-per-task 4
#SBATCH --mem           20G             # Total memory
#SBATCH --hint          nomultithread   # Hyperthreading disabled
#SBATCH --partition     gpu
#SBATCH --gres          gpu:2

module load ABAQUS/2018

abaqus job="propeller_s4rs_c3d8r" verbose=2 interactive cpus=${SLURM_CPUS_PER_TASK} gpus=2 mp_mode=threads

Useful Links

Figure: ABAQUS speedup, shared memory vs MPI (ABAQUS_speedup_SharedVMPI.png)

Note: Hyperthreading off; testing done on a small mechanical FEA model. Results are highly model dependent. Do your own tests.

Labels: mahuika application tier2 engineering