Updated 2021-05-17

Run LAMMPS on the Cluster


  • Use module avail lammps to see all available versions of LAMMPS on the cluster.
  • To load LAMMPS in your PBS script:
    • module load lammps/22Aug18. Replace the date with the version you want to load.
  • To run LAMMPS:
    • In your PBS script, put all the lines to execute LAMMPS after the module load lines that load LAMMPS


If you use mpirun to execute LAMMPS, you must set the number of processors to the number you requested in your PBS script. For example, if you requested 8 processors (2 nodes with 4 processors per node), you would set the -np option as -np 8
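One way to keep the -np value consistent with your request is to compute it instead of hard-coding it. A minimal sketch, assuming the request from the example above (2 nodes, 4 processors per node); the $PBS_NODEFILE variant only works inside a running PBS job:

```shell
# Assumed values matching a "#PBS -l nodes=2:ppn=4" request.
NODES=2
PPN=4
NP=$((NODES * PPN))   # total processors to pass to mpirun
echo "$NP"            # prints 8

# Inside a running job, PBS also sets $PBS_NODEFILE, a file listing
# one line per allocated core, so this gives the same number:
#   NP=$(wc -l < "$PBS_NODEFILE")
```

You could then write `mpirun -np "$NP" ...` so the count stays in sync if you change the #PBS -l line.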

Example PBS Script

#PBS -N lammpsTest
#PBS -A [Account]
#PBS -l nodes=2:ppn=4
#PBS -l walltime=10:00
#PBS -q inferno
#PBS -j oe
#PBS -o lammpsResult.out

cd $PBS_O_WORKDIR
module load lammps/22Aug18
mpirun -np 8 lmp < filename.in
  • The #PBS directives are standard, requesting 10 min of walltime and 2 nodes with 4 cores per node. More on #PBS directives can be found in the PBS guide
  • $PBS_O_WORKDIR is a variable that holds the directory you submitted the PBS script from. The cd $PBS_O_WORKDIR line tells the job to run in that directory and look for its files there, so keep the .in file and any other files the job needs in the same folder as the PBS script; otherwise the cluster won't be able to find them.
  • Output files, such as the lammps.log file, will also appear in the same folder as the PBS script
  • The module load line loads LAMMPS
  • mpirun -np 8 lmp < filename.in executes LAMMPS on a .in input file. Note that this is just a general example; there are many more ways to run LAMMPS. For more options, check out the LAMMPS documentation.
  • The point of the example line is to show how the -np flag is used. Here, 8 processors are specified after -np, because 8 processors were requested (2 nodes x 4 processors per node)

Submit Job and Check Status

  • Make sure you're in the directory that contains the PBS script, the .in file, and any other files you need.
  • Submit as normal with qsub <pbs script name>, in this case qsub lammps.pbs. You can name the PBS script whatever you want; just keep the .pbs extension
  • Check job status with qstat -u username3 -n, replacing "username3" with your GT username
  • You can delete the job with qdel 22182721, replacing the number with the jobid returned after running qsub
  • Depending on the resources requested and queue the job is run on, it may take varying amounts of time for the job to start. To estimate the time until the job executes, run showstart 22182721, replacing the number with the jobid returned after running qsub. More helpful commands can be found in this guide
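Both qdel and showstart take the numeric part of the job identifier. A small sketch of extracting it from the full id that qsub prints (the hostname suffix below is a made-up placeholder, not your scheduler's real name):

```shell
# Stand-in for the string qsub prints; the part after the first dot
# is a hypothetical scheduler hostname and will differ on your cluster.
FULL_ID="22182721.example-scheduler-host"

# Shell parameter expansion: strip everything from the first "." onward.
JOBID=${FULL_ID%%.*}
echo "$JOBID"   # prints 22182721
```

In a script, you could capture the id at submit time with JOBID=$(qsub lammps.pbs) and later run showstart "${JOBID%%.*}".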

Collecting Results

  • All files created will be in the same folder as your PBS script (the same directory you ran qsub from)
  • The .out file will be found here as well. It contains the results of the job, along with diagnostics and a report of the resources used during the job. If the job fails or doesn't produce the result you were hoping for, the .out file is a great debugging tool.
  • You can transfer the resulting files off the cluster using scp or a file transfer service
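For example, scp can pull results down when run from your local machine (not the cluster). This is a template only: the login hostname and remote path below are placeholders, not real values, so substitute your cluster's login host and your own directory before running it:

```shell
# Template - replace <cluster-login-host> and the remote path first.
scp username3@<cluster-login-host>:~/lammpsTest/lammpsResult.out .
```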