Updated 2023-03-31

Running ASE on the Cluster

Overview

  • ASE (Atomic Simulation Environment) is a set of tools and Python modules for setting up, steering, and analyzing atomistic simulations.
  • This guide covers how to run a Python script using ASE on the cluster.

Tips

  • The latest version of ASE offered is ase/3.22.1, so check that the ASE modules your script imports are part of version 3.22.1.

Walkthrough: Run ASE on the Cluster

  • This walkthrough runs a Python script that uses ASE to create and print the positions of 4 Ni atoms. The Python script can be found here (a sketch of it is shown after this list)
  • The SBATCH script can be found here
  • You can transfer the files to your account on the cluster to follow along. The file transfer guide may be helpful.
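
The Python script itself is not reproduced in this guide, but judging from the output shown in Part 3, a minimal sketch of it might look like the following (the positions, the moved atom, and the unit cell are inferred from that output, so the actual linked script may differ):

from ase import Atoms

# Build four Ni atoms on a 0.5-spaced grid in the xy-plane
atoms = Atoms('Ni4', positions=[(0, 0, 0),
                                (0.5, 0, 0),
                                (0, 0.5, 0),
                                (0.5, 0.5, 0)])

# Print each Atom object, then the full position array
for atom in atoms:
    print(atom)
print(atoms.get_positions())

# Move the second atom along x and print the updated positions
atoms[1].x = 0.6
print(atoms.get_positions())

# Attach a 1x1x1 unit cell and print it as a 3x3 array
atoms.set_cell([1.0, 1.0, 1.0])
print(atoms.cell[:])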

Part 1: The SBATCH Script

#!/bin/bash
#SBATCH -JaseTest
#SBATCH -A [Account] 
#SBATCH -N1 --ntasks-per-node=2
#SBATCH --mem-per-cpu=2G
#SBATCH -t3
#SBATCH -qinferno
#SBATCH -oReport-%j.out

cd $SLURM_SUBMIT_DIR
module load ase/3.22.1
python aseTest.py
  • The #SBATCH directives are standard, requesting just 3 minutes of walltime and 1 node with 2 cores. More on #SBATCH directives can be found in the Using Slurm on Phoenix Guide.
  • $SLURM_SUBMIT_DIR is simply a variable that represents the directory you submit the SBATCH script from.
  • Output files will also show up in this directory.
  • module load ase/3.22.1 loads version 3.22.1 of ASE. To see which ASE versions are available, run module spider ase and load the one you want; any other modules it lists are dependencies that must be loaded before ASE.
  • python aseTest.py runs the Python script using ASE.
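
If you want to confirm which ASE version the job actually picked up, one optional check is to print the version from Python after the module is loaded:

import ase
print(ase.__version__)  # should print 3.22.1 after "module load ase/3.22.1"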

Part 2: Submit Job and Check Status

  • Be sure to change to the directory that contains the SBATCH script.
  • Submit as normal with sbatch <scriptname.sbatch>. In this case: sbatch ase.sbatch (a full example session is shown after this list)
  • Check job status with squeue --job <jobID>, replacing <jobID> with the job ID returned after running sbatch
  • You can delete the job with scancel <jobID>, replacing <jobID> with the job ID returned after running sbatch
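
Putting Part 2 together, a typical session might look like this (the directory name is hypothetical; <jobID> is whatever sbatch prints back):

cd ~/aseTest          # hypothetical directory containing ase.sbatch and aseTest.py
sbatch ase.sbatch     # submits the job and prints the job ID
squeue --job <jobID>  # check the job's status
scancel <jobID>       # only if you need to cancel the job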

Part 3: Collecting Results

  • In the directory where you submitted the SBATCH script, you should see a Report-<jobID>.out file, which contains the results of the job. Use cat Report-<jobID>.out or open the file in a text editor to take a look.
  • Report-<jobID>.out should look like this:
---------------------------------------
Begin Slurm Prolog: Nov-30-2022 23:47:24
Job ID:    130779
User ID:   svangala3
Account:   phx-pace-staff
Job name:  aseTest
Partition: cpu-small
QOS:       inferno
---------------------------------------
Atom('Ni', [0.0, 0.0, 0.0], index=0)
Atom('Ni', [0.5, 0.0, 0.0], index=1)
Atom('Ni', [0.0, 0.5, 0.0], index=2)
Atom('Ni', [0.5, 0.5, 0.0], index=3)
[[0.  0.  0. ]
 [0.5 0.  0. ]
 [0.  0.5 0. ]
 [0.5 0.5 0. ]]
[[0.  0.  0. ]
 [0.6 0.  0. ]
 [0.  0.5 0. ]
 [0.5 0.5 0. ]]
[[ 1.  0.  0.]
 [ 0.  1.  0.]
 [ 0.  0.  1.]]
---------------------------------------
Begin Slurm Epilog: Nov-30-2022 23:47:29
Job ID:        130779
Array Job ID:  _4294967294
User ID:       svangala3
Account:       phx-pace-staff
Job name:      aseTest
Resources:     cpu=2,mem=4G,node=1
Rsrc Used:     cput=00:00:08,vmem=824K,walltime=00:00:04,mem=0,energy_used=0
Partition:     cpu-small
QOS:           inferno
Nodes:         atl1-1-02-003-17-1
---------------------------------------
  • After the result files are produced, you can move the files off the cluster; refer to the file transfer guide for help (an example transfer is shown after this list).
  • Congratulations! You successfully ran a Python script using ASE on the cluster.
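
For moving the Report-<jobID>.out file back to your local machine, a transfer with scp might look like this (run from your local machine; the username, login hostname, and remote path below are placeholders, so check the file transfer guide for the correct host):

scp <username>@<cluster-login-host>:~/aseTest/Report-<jobID>.out .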