Updated 2023-03-31

Run Perl on the Cluster

Overview

  • Perl is relatively straightforward to use: load the Perl module, then run your script with the perl command in your SBATCH script.

Tips

  • In the SBATCH script, load Perl with module load perl. You can use module avail perl to see which versions of Perl are available (see the example after this list).
  • In the computation part of your SBATCH script, change into the directory containing your Perl script. If it is the same directory you submit the SBATCH script from, you can use cd $SLURM_SUBMIT_DIR.
  • In the SBATCH script, use perl <perlfilename.pl> to run the script.
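As a quick sketch of those first steps (module names and available versions vary by cluster), the relevant commands look like this:

module avail perl   # list the Perl modules installed on the cluster
module load perl    # load the default Perl module
perl --version      # confirm which perl is now on your PATH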

Walkthrough: Run Perl on the Cluster

  • This walkthrough uses a simple Perl script that prints "Hello World".
  • The Perl script (hello_world.pl) can be found here; a minimal stand-in is sketched after this list.
  • The SBATCH script (perl.sbatch) can be found here.
  • You can transfer the files to your account on the cluster to follow along. The file transfer guide may be helpful.
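If you are not downloading the linked file, the following minimal hello_world.pl matches the output shown in Part 3 (the linked version may differ slightly):

#!/usr/bin/env perl
use strict;
use warnings;

# Print the greeting; this is the job's only output.
print "Hello world\n";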

Part 1: The SBATCH Script

#!/bin/bash
#SBATCH -Jperl
#SBATCH -A [Account] 
#SBATCH -N1 --ntasks-per-node=2
#SBATCH --mem-per-cpu=2G
#SBATCH -t1
#SBATCH -qinferno
#SBATCH -oReport-%j.out

cd $SLURM_SUBMIT_DIR     # move to the directory the job was submitted from
module load perl/5.34.1  # load a specific Perl version
perl hello_world.pl      # run the Perl script
  • The #SBATCH directives are standard, requesting just 1 minute of walltime and 1 node with 2 cores. More on #SBATCH directives can be found in the Using Slurm on Phoenix Guide. The same directives are shown with Slurm's long-form option names below.
  • $SLURM_SUBMIT_DIR is an environment variable set by Slurm that holds the directory you submitted the SBATCH script from.
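For reference, here is the same script with each short flag expanded to its long-form Slurm option name; the behavior is identical (replace [Account] with your charge account as before):

#!/bin/bash
#SBATCH --job-name=perl
#SBATCH --account=[Account]
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --mem-per-cpu=2G
#SBATCH --time=1
#SBATCH --qos=inferno
#SBATCH --output=Report-%j.out

cd $SLURM_SUBMIT_DIR
module load perl/5.34.1
perl hello_world.pl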

Warning

Make sure the Perl script you want to run (in this case, hello_world.pl) is in the same directory as the SBATCH script; a quick check is shown below.

  • perl hello_world.pl runs the script
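As a sanity check before submitting, you can list the directory contents to confirm both files are present (the output here assumes the directory holds only these two files):

$ ls
hello_world.pl  perl.sbatch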

Part 2: Submit Job and Check Status

  • Make sure you're in the directory that contains the SBATCH script as well as the Perl program.
  • Submit as normal with sbatch <sbatch scriptname.sbatch>; in this case, sbatch perl.sbatch.
  • Check job status with squeue --job <jobID>, replacing <jobID> with the job ID returned by sbatch.
  • You can cancel the job with scancel <jobID>, again replacing <jobID> with the job ID returned by sbatch. An example session follows this list.
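Putting those commands together, a session might look like the sketch below; the job ID, node, and timings are illustrative values borrowed from the sample output in Part 3, and squeue's exact column layout may differ on your system:

$ sbatch perl.sbatch
Submitted batch job 190111
$ squeue --job 190111
  JOBID PARTITION  NAME      USER ST  TIME NODES NODELIST(REASON)
 190111 cpu-small  perl svangala3  R  0:01     1 atl1-1-02-017-33-2
$ scancel 190111   # only if you need to cancel the job early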

Part 3: Collecting Results

  • In the directory where you submitted the SBATCH script, you should see a Report-<jobID>.out file. Use cat Report-<jobID>.out or open the file in a text editor to take a look.
  • Report-<jobID>.out should look like this:
---------------------------------------
Begin Slurm Prolog: Dec-15-2022 18:33:26
Job ID:    190111
User ID:   svangala3
Account:   phx-pace-staff
Job name:  perl
Partition: cpu-small
QOS:       inferno
---------------------------------------
Hello world
---------------------------------------
Begin Slurm Epilog: Dec-15-2022 18:33:27
Job ID:        190111
Array Job ID:  _4294967294
User ID:       svangala3
Account:       phx-pace-staff
Job name:      perl
Resources:     cpu=2,mem=4G,node=1
Rsrc Used:     cput=00:00:02,vmem=1160K,walltime=00:00:01,mem=0,energy_used=0
Partition:     cpu-small
QOS:           inferno
Nodes:         atl1-1-02-017-33-2
---------------------------------------
  • After the result files are produced, you can move them off the cluster; refer to the file transfer guide for help. An example is sketched at the end of this section.
  • Congratulations! You successfully ran a Perl script on the cluster.
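For instance, from your local machine you could copy the report back with scp; the hostname and path below are placeholders, not the cluster's actual address; substitute the details given in the file transfer guide:

# Run this on your local machine, not on the cluster.
# Replace <username>, <cluster-login-hostname>, <path-to-job-directory>,
# and <jobID> with values for your account and job.
scp <username>@<cluster-login-hostname>:<path-to-job-directory>/Report-<jobID>.out .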