A list of commands can be found with:

abaqus help

Hyperthreading can provide a significant speedup to your computations; however, hyperthreaded CPUs will use twice the number of licence tokens. It may be worth adding #SBATCH --hint nomultithread to your Slurm script if licence tokens are your main limiting factor.
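As a sketch, a job header that disables hyperthreading might look like this (the job name and resource values are illustrative, not prescriptive):

```shell
#!/bin/bash -e
#SBATCH --job-name      ABAQUS-NoHT
#SBATCH --cpus-per-task 4
#SBATCH --hint          nomultithread   # one thread per physical core, halving licence token use
```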


Required ABAQUS licence tokens can be determined by this simple and intuitive formula: ⌊5 × N^0.422⌋, where N is the number of CPUs.
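The formula above is easy to evaluate from the shell when planning a job; this small sketch uses awk to print the token count for a few CPU counts (the CPU counts chosen are arbitrary examples):

```shell
# Estimate ABAQUS licence tokens: floor(5 * N^0.422), N = number of CPUs.
for n in 1 2 4 8 16 32; do
  tokens=$(awk -v n="$n" 'BEGIN { printf "%d", 5 * n^0.422 }')
  echo "CPUs=$n tokens=$tokens"
done
```

For example, a serial job (N=1) needs 5 tokens, while 16 CPUs need 16 tokens.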

You can force ABAQUS to use a specific licence type by setting the parameter academic=TEACHING or academic=RESEARCH in a relevant environment file.
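If you keep a per-job environment file (see the Environment file section), the licence type can be set from your submission script before launching ABAQUS; a minimal sketch, assuming the RESEARCH pool is the one you want:

```shell
# Select the research licence pool for jobs run from this working directory.
echo 'academic=RESEARCH' >> abaqus_v6.env
```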

Solver Compatibility

Not all solvers are compatible with all types of parallelisation.

(Table: compatibility of element operations, the iterative solver, the direct solver, and the Lanczos solver with each parallelisation mode; the table body did not survive extraction.)


If your input files were created using an older version of ABAQUS you will need to update them using one of the following commands:

abaqus -upgrade -job new_job_name -odb old.odb

or

abaqus -upgrade -job new_job_name -inp old.inp


Serial

For when only one CPU is required, generally as part of a job array.


#!/bin/bash -e

#SBATCH --job-name      ABAQUS-Serial
#SBATCH --time          00:05:00      # Walltime
#SBATCH --cpus-per-task 1
#SBATCH --mem           1500          # total mem

module load ABAQUS/2019
abaqus job="propeller_s4rs_c3d8r" verbose=2 interactive

Shared Memory


Uses a node's shared memory for communication.

May give a small speedup compared to MPI when using a low number of CPUs, but scales poorly. Needs significantly less memory than MPI.

Hyperthreading may be enabled when using shared memory, but it is not recommended.

#!/bin/bash -e

#SBATCH --job-name      ABAQUS-Shared
#SBATCH --time          00:05:00      # Walltime
#SBATCH --cpus-per-task 4
#SBATCH --mem           2G            # total mem

module load ABAQUS/2019
abaqus job="propeller_s4rs_c3d8r" verbose=2 interactive \
cpus=${SLURM_CPUS_PER_TASK} mp_mode=threads


Shared memory run with a user-defined function (Fortran or C).

The function will be compiled at the start of the run.

You may need to change the function's file suffix if you usually compile on Windows.
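As a sketch of that rename: Fortran UDFs written on Windows conventionally carry a .for suffix, while the Linux examples on this page use .f90. The loop below renames any .for sources in the current directory (the suffixes are assumptions; check what your ABAQUS version expects):

```shell
# Rename Windows-style .for Fortran sources to .f90 before submitting.
for f in *.for; do
  [ -e "$f" ] && mv -- "$f" "${f%.for}.f90"
done
```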

#!/bin/bash -e

#SBATCH --job-name      ABAQUS-SharedUDF
#SBATCH --time          00:05:00      # Walltime
#SBATCH --cpus-per-task 4
#SBATCH --mem           2G            # total mem

module load imkl
module load ABAQUS/2019
abaqus job="propeller_s4rs_c3d8r" user=my_udf.f90 verbose=2 interactive \
cpus=${SLURM_CPUS_PER_TASK} mp_mode=threads

Distributed Memory


Multiple processes, each with a single thread.

Not limited to one node.

The model will be segmented into -np pieces, which should equal --ntasks.

Each task could be running on a different node, leading to increased communication overhead. Jobs can be limited to a single node by adding --nodes=1, however this will increase your time in the queue as contiguous CPUs are harder to schedule.

This is the default method if mp_mode is left unspecified.

#!/bin/bash -e

#SBATCH --job-name      ABAQUS-Distributed
#SBATCH --time          00:05:00      # Walltime
#SBATCH --ntasks        8
#SBATCH --mem-per-cpu   1500          # Each CPU needs its own memory
#SBATCH --nodes         1

module load ABAQUS/2019
abaqus job="propeller_s4rs_c3d8r" verbose=2 interactive \
cpus=${SLURM_NTASKS} mp_mode=mpi


GPUs

The GPU nodes are limited to 16 CPUs.

In order for the GPUs to be worthwhile, you should see a speedup equivalent to 56 CPUs per GPU used. GPU nodes will generally have less memory and fewer CPUs available.

#!/bin/bash -e

#SBATCH --job-name      ABAQUS-gpu
#SBATCH --time          00:05:00      # Walltime
#SBATCH --cpus-per-task 4
#SBATCH --mem           4G            # total mem
#SBATCH --gpus-per-node 1

module load ABAQUS/2019
module load CUDA
abaqus job="propeller_s4rs_c3d8r" verbose=2 interactive \
cpus=${SLURM_CPUS_PER_TASK} gpus=${SLURM_GPUS_PER_NODE} mp_mode=threads

User Defined Functions 

User-defined functions (UDFs) can be included on the command line with the argument user=<filename>, where <filename> is the C or Fortran source code.

Extra compiler options can be set in your local abaqus_v6.env file.

The default compile commands are for imkl; other compilers can be loaded with module load, but you may have to change the compile commands in your local .env file accordingly.
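As a hypothetical sketch of such an override written from the shell: compile_fortran is the conventional parameter name for the Fortran compile command, but the exact name, default value, and accepted flags depend on your ABAQUS version, so check the site file before overriding anything.

```shell
# Hypothetical example: append extra Fortran compile flags in a job-local
# environment file. Verify the parameter name and default in the site
# abaqus_v6.env before relying on this.
echo "compile_fortran += ['-free', '-O2']" >> abaqus_v6.env
```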

Environment file

The ABAQUS environment file contains a number of parameters that define how your job will run, some of which you may wish to change.

These parameters are read, in order, from:

../ABAQUS/SMA/site/abaqus_v6.env — set by NeSI and cannot be changed.

~/abaqus_v6.env (your home directory) — if it exists, it will be used in all jobs submitted by you.

<working directory>/abaqus_v6.env — if it exists, it will be used in this job only.

You may want to include this short snippet when making changes specific to a job.

# Before starting abaqus
echo "parameter=value
parameter=value" > "abaqus_v6.env"

# After job is finished.
rm "abaqus_v6.env"


Useful Links




Note: Hyperthreading off; testing done on a small mechanical FEA model. Results are highly model dependent. Do your own tests.
