Updated 2022-11-28

Run LAMMPS on the Cluster

Summary

  • Use module avail lammps to see all available versions of LAMMPS on the cluster.
  • To load LAMMPS in your SBATCH script:
    • module load lammps/22Aug18. Replace the date with the version you want to load (see the example after this list).
  • To run LAMMPS:
    • In your SBATCH script, put all the lines to execute LAMMPS after the module load lines that load LAMMPS
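
For example, you could check what is installed and then load a version like this (a minimal sketch; the version string is only an illustration, use whichever versions module avail lammps reports, and note that some versions may require compiler/MPI modules to be loaded first, as in the example script below):

module avail lammps          # list every LAMMPS module installed on the cluster
module load lammps/22Aug18   # load one of the listed versions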

Warning

If using mpirun to execute LAMMPS, you must set the number of processors to the total number of tasks you requested in your SBATCH script. Example: if you requested 8 tasks (2 nodes and 4 tasks per node), you would set the -np option as -np 8
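
One way to keep the launch line in sync with the request is to let Slurm supply the count. A minimal sketch, assuming the lmp executable provided by the LAMMPS module and a placeholder input file filename.in; $SLURM_NTASKS is set by Slurm to the total number of tasks requested:

# -np always matches the request because Slurm fills in $SLURM_NTASKS
mpirun -np $SLURM_NTASKS lmp < filename.in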

Example SBATCH Script

#!/bin/bash
#SBATCH -J SBATCHlammpsTest
#SBATCH -A phx-pace-staff
#SBATCH -N 2 --ntasks-per-node=4
#SBATCH -t 10
#SBATCH -q inferno
#SBATCH -o Report-%j.out
#SBATCH --mail-type=BEGIN,END,FAIL
#SBATCH --mail-user=gburdell@gatech.edu

cd $SLURM_SUBMIT_DIR
module load intel/20.0.4 mvapich2/2.3.6 lammps/20220107-mva2
srun -n 8 lmp < filename.in
  • The #SBATCH directives are standard, requesting 10 minutes of walltime and 2 nodes with 4 tasks per node (8 tasks total). More on #SBATCH directives can be found in the SBATCH guide
  • $SLURM_SUBMIT_DIR is a variable that holds the directory you submitted the SBATCH script from. The cd line moves the job into that directory, which is where it will look for the files it needs. Make sure the .in file and any other files the job requires are in the same directory as the SBATCH script; otherwise the cluster won't be able to find them.
  • Output files, such as the LAMMPS log file (log.lammps by default), will also show up in the same folder as the SBATCH script
  • The module load line loads LAMMPS along with the Intel compiler and MVAPICH2 MPI modules it depends on
  • srun -n 8 lmp < filename.in executes LAMMPS with a .in file; mpirun -np 8 lmp < filename.in is the equivalent mpirun form. Note, this is just a general example line, and there are many more ways to run LAMMPS (one common alternative is shown after this list). For more options, check out the LAMMPS documentation.
  • The point of the example line is to show how the task count is set: 8 tasks are specified after -n (or -np with mpirun) because 8 tasks were requested (2 nodes x 4 tasks per node)
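
As one common alternative to shell redirection, LAMMPS can also take its input file through the -in flag. A minimal sketch with the same placeholder input file:

# Equivalent launch using LAMMPS's -in option instead of < redirection
srun -n 8 lmp -in filename.in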

Submit Job and Check Status

  • Make sure you're in the directory that contains the SBATCH script, the sequence files, and any other files you need.
  • Submit as normal with sbatch <SBATCH script name>, in this case sbatch lammps.SBATCH or whatever you called the SBATCH script. You can name the SBATCH script whatever you want; just keep the .SBATCH at the end so it is easy to recognize
  • Check job status with squeue -u username3, replacing "username3" with your GT username
  • You can delete the job with scancel 22182721, replacing the number with the jobid returned after running sbatch
  • Depending on the resources requested and the queue the job runs in, it may take varying amounts of time for the job to start. To estimate the time until a pending job starts, run squeue -j 22182721 --start, replacing the number with the jobid returned after running sbatch. More helpful commands can be found in this guide, and a consolidated example is shown after this list
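
Put together, a typical submit-and-monitor sequence looks like the sketch below (lammps.SBATCH, username3, and the jobid 22182721 are placeholders):

sbatch lammps.SBATCH           # submit the job; Slurm prints the jobid
squeue -u username3            # check the status of all your jobs
squeue -j 22182721 --start     # estimate when a pending job will start
scancel 22182721               # cancel the job if no longer needed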

Collecting Results

  • All files created will be in the same folder where your SBATCH script is (same directory you ran sbatch from)
  • The .out file will be found here as well. It contains the results of the job, as well as diagnostics and a report of resources used during the job. If the job fails or doesn't produce the result you were hoping for, the .out file is a great debugging tool.
  • You can transfer the resulting files off the cluster using scp or a file transfer service (see the example below)
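
For example, from your own machine you could pull results back with scp (a sketch; the login hostname and path are placeholders for your cluster's login node and the job directory):

scp username3@login.cluster.edu:/path/to/job/dir/Report-22182721.out .
scp username3@login.cluster.edu:/path/to/job/dir/log.lammps .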