VASP

Description

VASP, the Vienna Ab initio Simulation Package, performs ab initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane-wave basis set.

The VASP home page is at http://www.vasp.at.

Available modules

Packages with modules

Module                             NeSI Cluster
VASP/5.4.1.05Feb16-foss-2015a      pan
VASP/5.3.2-intel-2015a-VTST-BEEF   pan
VASP/5.3.5-ictce-5.4.0             pan
VASP/5.3.5-intel-2015a-VTST-BEEF   pan
VASP/5.3.5-intel-2015a             pan
VASP/5.3.5-iomkl-4.6.13-BEEF       pan
VASP/5.3.5-iomkl-4.6.13            pan

Licensing requirements

VASP is made available to researchers under commercial licence agreements with individuals, research groups or institutions. Whether you have access to VASP, which versions you have access to, and under what conditions, will vary depending on where you work or study. You will only be permitted to access and use any given version of VASP on any NeSI cluster if you have a valid licence to use that version of VASP at your place of work or study, and if the terms of your licence permit cluster use.

If your institution (or any department or research group within your institution) has a VASP licence, please get in touch with your VASP licensing contact. A list of VASP licensing contacts is given in the following table.

Institution                  Licensing contact
The University of Auckland   Dr Tilo Söhnel
The University of Otago      Dr Anna Garden

If your institution is not listed in the table, or if you have spoken to your licensing contact and still have questions regarding your eligibility to access VASP or any particular version of it on a NeSI cluster, please contact our support desk.

Example scripts

Example script for the Pan cluster without NPAR setting

#!/bin/bash -e

#SBATCH --job-name        MyVASPJob
#SBATCH --account         nesi99999
#SBATCH --time            01:00:00
#SBATCH --nodes           4
#SBATCH --ntasks-per-node 12               # One CPU per task is assumed
#SBATCH --mem-per-cpu     8G
#SBATCH --output          MyVASPJob.%j.out # Include the job ID in the names of
#SBATCH --error           MyVASPJob.%j.err # the output and error files

module load VASP/5.3.5-iomkl-4.6.13

# Use the -K switch so that, if any of the VASP processes exits with an
# error, the entire job will stop instead of continuing to waste core
# hours on a defunct run.
srun -K vasp

Example script for the Pan cluster with NPAR setting

#!/bin/bash -e

#SBATCH --job-name        MyVASPJob
#SBATCH --account         nesi99999
#SBATCH --time            01:00:00
#SBATCH --nodes           4
#SBATCH --ntasks-per-node 12               # One CPU per task is assumed
#SBATCH --mem-per-cpu     8G
#SBATCH --output          MyVASPJob.%j.out # Include the job ID in the names of
#SBATCH --error           MyVASPJob.%j.err # the output and error files

module load VASP/5.3.5-iomkl-4.6.13

# Make sure an NPAR line is present. Note: because this script runs under
# "bash -e", grep's exit status must be captured inline; otherwise a
# missing NPAR line (grep exit status 1) would abort the job right here.
grep_result=0
grep -E "^\s*NPAR\s*=" INCAR > /dev/null || grep_result=$?
if [ ${grep_result} -gt 1 ]
then
    echo "An error occurred while searching for NPAR in the INCAR file!" >&2
    exit 2
elif [ ${grep_result} -eq 1 ]
then
    echo "NPAR = $SLURM_NTASKS_PER_NODE" >> INCAR
elif [ ${grep_result} -eq 0 ]
then
    # Set the value of NPAR to the number of VASP tasks per node
    sed -r -i "s/(^\s*NPAR\s*=\s?).*$/\1$SLURM_NTASKS_PER_NODE/" INCAR
fi

# Use the -K switch so that, if any of the VASP processes exits with an
# error, the entire job will stop instead of continuing to waste core
# hours on a defunct run.
srun -K vasp

Further notes

Which VASP module should I use?

Versions

In general, we recommend the most recent supported version, unless you need an older version for consistency with earlier work or because you rely on a feature that has since been removed.

Compiler stack

The VASP developers recommend the use of a VASP executable built with the Intel compiler suite and OpenMPI. We indicate these modules by the string "iomkl" in the module name. If VASP built against OpenMPI causes problems for you, you may wish to try VASP built with Intel MPI instead; such modules have "ictce" or "intel" in the module name.

VTST

Any VASP module with "VTST" in its name has been modified to include the VASP Transition State Tools, a third-party package for finding transition states and computing rate constants. As it is necessary to modify the VASP code to make it compatible with VTST, we recommend not using VTST-enabled VASP unless your research requires it.

BEEF

Any VASP module with "BEEF" in its name offers VASP executables linked against BEEF, the Bayesian Error Estimation Functionals. As with VTST, we recommend using standard instead of BEEF-enabled VASP unless your research calls for the use of BEEF.

Which VASP executable should I use?

VASP is unusual among scientific software packages in that some of its execution options are controlled neither by the nature of the input data, nor by command line flags, but by the executable itself. We offer a range of VASP executables, each built with a different set of compile-time options so that the resulting binary is optimised for a particular sort of problem.

The different VASP executables we offer in each module are as follows:

Name Description
vasp The most demanding VASP executable, suitable for non-collinear calculations (i.e., with spin-orbit coupling)
vasp_cd A VASP executable with intermediate memory demands, suitable for collinear calculations without spin-orbit coupling
vasp_gamma A VASP executable with low memory demands, suitable for gamma-point calculations
vasp_md_cd Like `vasp_cd`, but with molecular dynamics support included
vasp_md_gamma Like `vasp_gamma`, but with molecular dynamics support included

How should I configure my VASP input to run in parallel?

As implied by the example job submission script above, we recommend requesting the same number of cores on all involved nodes, preferably by using the --nodes and --ntasks-per-node options.

In your INCAR file, the most important parallelisation parameters are NPAR and NCORE. They are two ways of expressing the same split: NCORE is the total number of CPU cores divided by NPAR, and vice versa. If both are set in your INCAR file, NPAR takes precedence over NCORE.
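For example, with the job shape used in the scripts above (4 nodes at 12 tasks per node, i.e. 48 cores in total), setting NPAR to the tasks-per-node count would give the following INCAR line; this is a sketch, and the inline comment just spells out the arithmetic:

```
NPAR = 12    ! 48 cores / NPAR of 12 implies NCORE = 4
```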

We therefore recommend using the $SLURM_NTASKS_PER_NODE environment variable to set the value of NPAR. An example script that does this automatically at run time is provided above.
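The NPAR rewrite performed by that script can also be tried by hand outside Slurm. The sketch below uses an invented sample INCAR and sets SLURM_NTASKS_PER_NODE manually, since Slurm would normally provide that variable:

```shell
#!/bin/bash -e
# Work in a scratch directory so no real INCAR file is touched.
cd "$(mktemp -d)"

# Stand-in for the value Slurm sets inside a real job.
SLURM_NTASKS_PER_NODE=12

# A minimal sample INCAR containing an existing NPAR line.
printf 'SYSTEM = test\nNPAR = 2\n' > INCAR

# The same substitution as in the job script: replace whatever follows
# "NPAR =" with the number of tasks per node.
sed -r -i "s/(^\s*NPAR\s*=\s?).*$/\1$SLURM_NTASKS_PER_NODE/" INCAR

cat INCAR    # prints "SYSTEM = test" then "NPAR = 12"
```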

For more information regarding NCORE, NPAR, and other VASP settings that influence its efficiency when run in parallel, please see the VASP notes on the subject.
