Updated 2023-04-28

Gromacs

License

Gromacs on PACE uses the Georgia Tech license, which requires an annual access fee per user. See the CoE software documentation for more information about access.

Overview

GROMACS (GROningen MAchine for Chemical Simulations) is a molecular dynamics package primarily designed for simulations of proteins, lipids, and nucleic acids. It was originally developed in the Biophysical Chemistry department of the University of Groningen and is now maintained by contributors in universities and research centers across the world. GROMACS is one of the fastest and most popular molecular dynamics packages available and can run on CPUs as well as GPUs. It is free, open-source software originally released under the GNU General Public License; starting with version 4.6, GROMACS is released under the GNU Lesser General Public License.

Running Gromacs Interactively

Allocating Resources

  • In order to run Gromacs interactively, use the salloc command to specify the account, node and task counts, walltime, and queue

  • Here is an example of an salloc command you can use: salloc -A [Account] -N 1 -n 8 -t 15 -q embers

  • This will allocate one node with 8 tasks for 15 minutes on the embers queue; each flag is broken down in the sketch below
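
For reference, here is a minimal sketch of the same allocation request with each Slurm flag annotated. The account name is a placeholder; replace it with your own charge account.

# Interactive allocation for Gromacs (values from the example above):
#   -A [Account]   charge account (replace with your own)
#   -N 1           one node
#   -n 8           eight tasks (MPI ranks)
#   -t 15          fifteen minutes of walltime
#   -q embers      queue/QOS to submit to
salloc -A [Account] -N 1 -n 8 -t 15 -q embers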

Using the Application

  • The following example shows how to run Gromacs interactively using a water benchmark; the full command sequence is collected in a sketch after this list

  • Download a benchmark example to test Gromacs with wget https://ftp.gromacs.org/pub/benchmarks/water_GMX50_bare.tar.gz and extract it with tar -xzf water_GMX50_bare.tar.gz

  • Navigate to the 3072 directory: cd water-cut1.0_GMX50_bare/3072. There you will see four input files: conf.gro, pme.mdp, rf.mdp, and topol.top

  • Generate the topol.tpr binary file with module load apps/gromacs/2020/cpu; gmx_mpi grompp -f pme.mdp -c conf.gro -p topol.top. This writes topol.tpr to the current directory

  • If the module is not already loaded, load it with: module load gromacs

  • Run the simulation with the following srun command: srun gmx_mpi mdrun -s topol.tpr
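
Putting the steps above together, here is a minimal sketch of the full interactive session, run inside the salloc allocation. The module name and benchmark paths are taken from the steps above.

# Download and unpack the water benchmark
wget https://ftp.gromacs.org/pub/benchmarks/water_GMX50_bare.tar.gz
tar -xzf water_GMX50_bare.tar.gz
cd water-cut1.0_GMX50_bare/3072

# Load Gromacs and preprocess the inputs into topol.tpr
module load apps/gromacs/2020/cpu
gmx_mpi grompp -f pme.mdp -c conf.gro -p topol.top

# Run the simulation on the allocated tasks
srun gmx_mpi mdrun -s topol.tpr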

Running the mdrun command should produce output similar to the following:


                  :-) GROMACS - gmx mdrun, 2020-UNCHECKED (-:

                             GROMACS is written by:
     Emile Apol      Rossen Apostolov      Paul Bauer     Herman J.C. Berendsen
    Par Bjelkmar      Christian Blau   Viacheslav Bolnykh     Kevin Boyd
 Aldert van Buuren   Rudi van Drunen     Anton Feenstra       Alan Gray
  Gerrit Groenhof     Anca Hamuraru    Vincent Hindriksen  M. Eric Irrgang
  Aleksei Iupinov   Christoph Junghans     Joe Jordan     Dimitrios Karkoulis
    Peter Kasson        Jiri Kraus      Carsten Kutzner      Per Larsson
  Justin A. Lemkul    Viveca Lindahl    Magnus Lundborg     Erik Marklund
    Pascal Merz     Pieter Meulenhoff    Teemu Murtola       Szilard Pall
    Sander Pronk      Roland Schulz      Michael Shirts    Alexey Shvetsov
   Alfons Sijbers     Peter Tieleman      Jon Vincent      Teemu Virolainen
 Christian Wennberg    Maarten Wolf      Artem Zhmurov
                            and the project leaders:
        Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2019, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

...

Using 4 MPI processes
Non-default thread affinity set, disabling internal thread affinity
Using 4 OpenMP threads per MPI process

starting mdrun 'Water'
5000 steps,     10.0 ps.
Writing final coordinates.

Dynamic load balancing report:
 DLB was off during the run due to low measured imbalance.
 Average load imbalance: 0.5%.
 The balanceable part of the MD step is 60%, load imbalance is computed from this.
 Part of the total run time spent waiting due to load imbalance: 0.3%.

               Core t (s)   Wall t (s)        (%)
       Time:    19057.053     1191.067     1600.0
                  (ns/day)    (hour/ns)
Performance:         0.726       33.079

GROMACS reminds you: "The Microsecond is Within Reach" (P.J. Van Maaren)

Running Gromacs in Batch Mode

  • We can also run Gromacs as a regular batch job. Here is an example batch script:

#!/bin/bash
#SBATCH -J gromacs_test
#SBATCH -q embers
#SBATCH -A [Account]
#SBATCH -N 4 -n 4
#SBATCH --cpus-per-task=4        # sets SLURM_CPUS_PER_TASK, used for OMP_NUM_THREADS below
#SBATCH -t 60

module load gromacs/2020

# name of the executable
exe="gmx_mpi"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# run the application
mpirun -bootstrap slurm -n $SLURM_NTASKS $exe mdrun -s topol.tpr
  • The preceding SBATCH script produces the same output shown above; submit it with sbatch as sketched below
  • Congratulations! You have successfully run Gromacs on the cluster.
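
For completeness, here is a minimal sketch of submitting and monitoring the batch job. The filename gromacs_test.sbatch is an assumed name; use whatever you saved the script as.

# Submit the batch script (assumed to be saved as gromacs_test.sbatch)
sbatch gromacs_test.sbatch

# Check the job's status in the queue
squeue -u $USER

# Once the job finishes, the Gromacs output appears in the Slurm
# output file (slurm-<jobid>.out by default)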