Updated 2023-03-31

Run Ruby on the Cluster


  • In the SBATCH script, load Ruby with module load ruby/3.1.0. You can use module avail ruby to see which versions of Ruby are available.

  • In the computation part of your SBATCH script, enter the directory where you have stored the Ruby script (using cd). If it is in the same directory you are submitting the SBATCH script from, you can use cd $SLURM_SUBMIT_DIR
  • Run the script with ruby <rubyScriptName.rb>

Walkthrough: Run Ruby on the Cluster

  • This walkthrough uses a simple Ruby script that prints "Hello World!"
  • Ruby script: helloWorld.rb
  • SBATCH script: ruby.sbatch
  • You can transfer the files to your account on the cluster to follow along. The file transfer guide may be helpful.
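The walkthrough refers to helloWorld.rb but does not list its contents. A minimal version, consistent with the "Hello World!" line that appears in the job output in Part 3, would be:

```ruby
# helloWorld.rb: print a greeting to standard output
message = "Hello World!"
puts message
```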

Part 1: The SBATCH Script

#!/bin/bash
#SBATCH -Jruby
#SBATCH -A [Account]
#SBATCH -N1 --ntasks-per-node=2
#SBATCH -t1
#SBATCH -qinferno
#SBATCH -oReport-%j.out

cd $SLURM_SUBMIT_DIR
module load ruby/3.1.0
ruby helloWorld.rb
  • The #SBATCH directives are standard, requesting just 1 minute of walltime and 1 node with 2 cores. More on #SBATCH directives can be found in the Using Slurm on Phoenix Guide
  • $SLURM_SUBMIT_DIR is simply a variable that holds the directory you submit the SBATCH script from.


Make sure the Ruby script you want to run (in this case, helloWorld.rb) is in the same directory as the SBATCH script.

  • ruby helloWorld.rb runs the script

Part 2: Submit Job and Check Status

  • Make sure you're in the directory that contains the SBATCH Script
  • Submit as normal, with sbatch <sbatch scriptname.sbatch>. In this case, sbatch ruby.sbatch
  • Check job status with squeue --job <jobID>, replacing <jobID> with the job ID returned after running sbatch
  • You can delete the job with scancel <jobID>, using the same job ID
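Putting the commands above together, a typical session looks like this (replace <jobID> with the job ID that sbatch prints for your job):

```
sbatch ruby.sbatch        # prints: Submitted batch job <jobID>
squeue --job <jobID>      # check the status of the job
scancel <jobID>           # cancel the job if needed
```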

Part 3: Collecting Results

  • In the directory where you submitted the SBATCH script, you should see a Report-<jobID>.out file, which contains the results of the job. Use cat Report-<jobID>.out or open the file in a text editor to take a look.
  • Report-<jobID>.out should look like this:
Begin Slurm Prolog: Nov-28-2022 01:01:16
Job ID:    116565
User ID:   svangala3
Account:   phx-pace-staff
Job name:  ruby
Partition: cpu-small
QOS:       inferno
Hello World!
Begin Slurm Epilog: Nov-28-2022 01:01:21
Job ID:        116565
Array Job ID:  _4294967294
User ID:       svangala3
Account:       phx-pace-staff
Job name:      ruby
Resources:     cpu=2,mem=2G,node=1
Rsrc Used:     cput=00:00:10,vmem=2232K,walltime=00:00:05,mem=0,energy_used=0
Partition:     cpu-small
QOS:           inferno
Nodes:         atl1-1-02-004-5-1
  • After the result files are produced, you can move them off the cluster; refer to the file transfer guide for help.
  • Congratulations! You successfully ran a Ruby program on the cluster.