Updated 2023-03-31

Run Gaussian on the Cluster

Overview

  • Gaussian provides state-of-the-art capabilities for electronic structure modeling.
  • Gaussian 16 is licensed for a wide variety of computer systems.
  • All versions of Gaussian 16 contain every scientific/modeling feature and impose no artificial limitations on calculations other than your resources and patience.
  • This module sets up Gaussian 16 (G16).
  • This guide will cover how to run Gaussian on the Cluster.

Walkthrough: Run Gaussian on the Cluster

  • This walkthrough covers an example of running Gaussian 16 on a water molecule input.
  • test_g16.sh can be found here; a sketch of what such a script might contain appears after this list
  • water.chk can be found here
  • SBATCH Script can be found here
  • You can transfer the files to your account on the cluster to follow along. The file transfer guide may be helpful.
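
For orientation, below is a minimal, hypothetical sketch of what a script like test_g16.sh might contain: it writes a small water input and invokes Gaussian 16 on it. The geometry, route section, and file names are illustrative assumptions, not the contents of the linked files.

#!/bin/bash
# Hypothetical sketch of a test_g16.sh-style wrapper (assumed, not the linked file).
# Writes a minimal Gaussian 16 input for water, then runs g16 on it.

cat > water.com <<'EOF'
%Chk=water.chk
#P HF/6-31G(d)

Water single-point energy

0 1
O  0.000000  0.000000  0.117300
H  0.000000  0.757200 -0.469200
H  0.000000 -0.757200 -0.469200

EOF

# g16 reads the input on stdin and writes the log file to stdout
g16 < water.com > water.log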

Part 1: The SBATCH Script

#!/bin/bash
#SBATCH -JgaussianTest
#SBATCH -A [Account]
#SBATCH -N1 --ntasks-per-node=1
#SBATCH -t2
#SBATCH -qinferno
#SBATCH -oReport-%j.out

cd $SLURM_SUBMIT_DIR
module load gaussian/16

bash test_g16.sh

  • The #SBATCH directives are standard, requesting just 2 minutes of walltime and 1 node with 1 core. More on #SBATCH directives can be found in the Using Slurm on Phoenix Guide.
  • $SLURM_SUBMIT_DIR is a variable that holds the directory you submit the SBATCH script from. Make sure the files you want to use are in the same directory as the SBATCH script.
  • Output files will show up in this directory as well.
  • module load gaussian/16 loads the default Gaussian 16 module. To see which Gaussian versions are available, run module avail gaussian and load the one you want; a short example follows this list.
  • bash test_g16.sh runs the script test_g16.sh.
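
If you are unsure which module name to use, this short check (run before submitting) lists the available Gaussian modules and loads one; the version string gaussian/16 is the one used in the script above.

module avail gaussian    # list every Gaussian module installed on the cluster
module load gaussian/16  # load the version used in this walkthrough
module list              # confirm the module is now loaded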

Part 2: Submit Job and Check Status

  • Make sure you're in the directory that contains the SBATCH script, test_g16.sh, and any input files
  • Submit as normal, with sbatch <scriptname.sbatch>. In this case: sbatch gaussian.sbatch
  • Check job status with squeue --job <jobID>, replacing <jobID> with the job ID returned after running sbatch
  • You can delete the job with scancel <jobID>, replacing <jobID> with the job ID returned after running sbatch
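
Putting those commands together, a typical submit-and-monitor session looks like the sketch below. The --parsable flag makes sbatch print only the job ID, so it can be captured in a shell variable instead of copied by hand.

jobid=$(sbatch --parsable gaussian.sbatch)  # submit and capture the job ID
squeue --job "$jobid"                       # check the job's status
# scancel "$jobid"                          # uncomment to cancel the job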

Part 3: Collecting Results

  • In the directory where you submitted the SBATCH script, you should see a Report-<jobID>.out file, which contains the results of the job, and a water.log file.
  • The water.log can be found here; a quick way to check it is shown after the sample output below
  • Report-<jobID>.out should look like this:
---------------------------------------
Begin Slurm Prolog: Mar-02-2023 15:03:16
Job ID:    859452
User ID:   svangala3
Account:   phx-pace-staff
Job name:  gaussianTest
Partition: cpu-small
QOS:       inferno
---------------------------------------
Job done.
---------------------------------------
Begin Slurm Epilog: Mar-02-2023 15:03:16
Job ID:        859452
Array Job ID:  _4294967294
User ID:       svangala3
Account:       phx-pace-staff
Job name:      gaussianTest
Resources:     cpu=2,mem=16G,node=1
Rsrc Used:     cput=00:00:04,vmem=4556K,walltime=00:00:02,mem=0,energy_used=0
Partition:     cpu-small
QOS:           inferno
Nodes:         atl1-1-02-004-24-2
---------------------------------------
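
Once the job has finished, a quick way to confirm that Gaussian completed cleanly and to pull the converged energy out of water.log is to grep for Gaussian's standard log markers:

grep "Normal termination" water.log  # printed once for each successful Gaussian run
grep "SCF Done" water.log            # the converged SCF energy line(s)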
  • To transfer output files off the cluster, you can use a number of file transfer techniques. Globus is recommended; a lightweight scp alternative is sketched at the end of this guide.
  • Congratulations! You have successfully run Gaussian on the cluster.
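
For small files such as water.log, scp from your local machine is a lightweight alternative to Globus. The hostname, username, and path below are assumptions for illustration; substitute your cluster's actual login node and your own account details.

# Run this on your local machine, not on the cluster.
# login-phoenix.pace.gatech.edu is an assumed hostname; replace it with your login node.
scp youruser@login-phoenix.pace.gatech.edu:~/gaussian_test/water.log .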