GROMACS

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

It is primarily designed for biochemical molecules such as proteins, lipids and nucleic acids, which have many complicated bonded interactions. However, because GROMACS is extremely fast at calculating the non-bonded interactions that usually dominate simulations, many groups also use it for research on non-biological systems, e.g. polymers.

Licensing Terms and Conditions

GROMACS is a joint effort, with contributions from developers around the world. Users agree to acknowledge the use of GROMACS in any reports or publications of results obtained with the software (see the GROMACS Homepage for details).

Setup

module load GROMACS
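
Several GROMACS builds may be installed; a minimal sketch of checking what is available before loading a version (assuming the usual environment-modules/Lmod commands on the cluster):

# List the GROMACS modules installed on the system
module avail GROMACS        # or "module spider GROMACS" on Lmod-based systems

# Load a specific version from that list, for example:
module load GROMACS/5.1.4-intel-2017a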

 

Job submission

GROMACS performance depends on several factors, such as the use (or lack) of GPUs, the number of MPI tasks and OpenMP threads, the load balancing algorithm, the ratio between the number of Particle-Particle (PP) ranks and Particle-Mesh Ewald (PME) ranks, the type of simulation being performed, the force field used and, of course, the simulated system. For a complete list of GROMACS options, please refer to the GROMACS documentation.

The following job script is just an example; it asks for 5 MPI tasks and 3 OpenMP threads per MPI task. Please try other mdrun flags to see whether they make your simulation run faster. Examples of such flags are -npme, -dlb and -ntomp (a tuning sketch follows the example script below). If you use more MPI tasks per node, you will have less memory per MPI task. If you use multiple MPI tasks per node, you need to set CRAY_CUDA_MPS=1 so that the tasks can access the GPU device on each node at the same time.

#!/bin/bash -e
#SBATCH --job-name=GROMACS_test    # Name to appear in squeue
#SBATCH --time=00:10:00            # Max walltime
#SBATCH --mem-per-cpu=1500         # Max memory per logical core (MB)
#SBATCH --ntasks=5                 # 5 MPI tasks
#SBATCH --cpus-per-task=3          # 3 OpenMP threads per task
#SBATCH --output=%x_out.log        # Location of output log
#SBATCH --error=%x_error.err       # Location of error log

module load GROMACS
# or load a specific version with:
#module load GROMACS/5.1.4-intel-2017a

#Job run
srun gmx_mpi mdrun -v -deffnm protein-EM-vacuum -c input/protein.gr
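
As a starting point for the tuning mentioned above, here is a sketch of how the mdrun flags might be combined; the rank and thread counts are purely illustrative and should match your --ntasks and --cpus-per-task settings.

# Let multiple MPI tasks on a node share its GPU (see the note above)
export CRAY_CUDA_MPS=1

# -npme: number of dedicated PME ranks, -ntomp: OpenMP threads per rank,
# -dlb: dynamic load balancing (yes/no/auto)
srun gmx_mpi mdrun -v -deffnm protein-EM-vacuum -npme 1 -ntomp 3 -dlb yes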

 

Note: To prevent performance issues, the serial "gmx" binary has been renamed to "gmx_serial". The "gmx" command that remains prints a note and then calls "gmx_mpi mdrun" if invoked as "gmx mdrun", and "gmx_serial" in all other cases.
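
As an illustration of that split, a typical workflow might prepare the run with the serial tools and launch mdrun with the MPI build; the file names below are placeholders only.

# Pre-processing with the serial tools (equivalent to the old "gmx")
gmx_serial grompp -f em.mdp -c protein.gro -p topol.top -o protein-EM-vacuum.tpr

# The run itself uses the MPI-enabled binary
srun gmx_mpi mdrun -v -deffnm protein-EM-vacuum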

Note: The hybrid version with CUDA can also run on pure CPU architectures. Thus you can use gmx_mpi from the GROMACS/???-cuda-???-hybrid module on Mahuika compute nodes as well as on Mahuika GPU nodes.
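
To actually use a GPU, the job script must also request one from Slurm. A minimal sketch is shown below; the partition name is an assumption (check your site documentation) and the module name should be taken from the module list, replacing the ??? placeholders.

#SBATCH --gres=gpu:1               # request one GPU per node
#SBATCH --partition=gpu            # partition name is an assumption; check your site documentation

module load GROMACS/???-cuda-???-hybrid   # use the exact name shown by "module avail"
srun gmx_mpi mdrun -v -deffnm protein-EM-vacuum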

Further Documentation

GROMACS Homepage

GROMACS Manual

 
