Updated 2023-02-07

Use GSL on the Cluster

Overview

  • This guide will cover how to load and run the GNU Scientific Library
  • To use GSL in a C program, you must first load the GSL module with module load gsl in your SBATCH script.
  • After loading the module, add the commands needed to compile and run your C program to the same SBATCH script.
  • Submit the job with sbatch <SBATCH filename>

Walkthrough: Use GSL on the Cluster

  • This walkthrough will use GSL to calculate the value of a Bessel function at an input of 5.
  • The example C program is from the GSL documentation. Create a file gslExample.c containing the example program.
  • To follow along, copy the files into the same directory in your account on the cluster, or transfer them to your account (again, keeping them in the same directory).

Part 1: The SBATCH Script

#!/bin/bash
#SBATCH -J gslTest
#SBATCH -A [Account] 
#SBATCH -N 1 --ntasks-per-node=1
#SBATCH --mem-per-cpu=2G
#SBATCH -t 1
#SBATCH -p inferno
#SBATCH -o Report-gsl-%j.out

cd $SLURM_SUBMIT_DIR
module load gcc
module load gsl
gcc -o gslExample gslExample.c `gsl-config --cflags --libs`
./gslExample
  • The #SBATCH directives are standard, requesting just 1 minute of walltime and 1 node with 1 task.
  • $SLURM_SUBMIT_DIR is simply a variable that holds the directory the SBATCH script was submitted from. Make sure the .c file you want to run is in the same directory as the SBATCH script; otherwise, the C program won't be found when the job runs.
  • Output files, such as the resulting executable, will also appear in the same directory as the SBATCH script.
  • The module load lines load gcc (compiler) and gsl.
  • The gcc line compiles the C program and links it against the GSL library, producing an executable called gslExample. The backquoted gsl-config --cflags --libs supplies the compiler and linker flags GSL needs.
  • ./gslExample runs the program.
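To see the flags that the backquoted gsl-config command splices into the gcc line, you can run it by itself after loading the gsl module. The exact paths depend on where the module installs GSL (the paths below are placeholders), but the output is along these lines:

```
$ gsl-config --cflags --libs
-I/path/to/gsl/include
-L/path/to/gsl/lib -lgsl -lgslcblas -lm
```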

Part 2: Submit Job and Check Status

  • Make sure you're in the directory that contains the SBATCH script and the .c file
  • Submit as normal, with sbatch <SBATCH script name>. In this case: sbatch gsl.SBATCH
  • Check job status with squeue -u username3, replacing "username3" with your GT username
  • You can cancel the job with scancel 22182721, replacing the number with the job ID returned after running sbatch
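Put together, the submit / check / cancel cycle looks like this in a terminal session (the job ID and username are illustrative):

```
$ sbatch gsl.SBATCH
Submitted batch job 645516
$ squeue -u gburdell3
$ scancel 645516
```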

Part 3: Collecting Results

  • In the directory where you submitted the SBATCH script, you should see a couple of newly generated files, including gslExample and Report-gsl-<job id>.out
  • gslExample is the executable file that was created
  • Report-gsl-<job id>.out contains the results of the job
  • Open it with vim Report-gsl-<job id>.out (you can use any text editor).
  • The output should contain the result of the computation:
---------------------------------------
Begin Slurm Prolog: Feb-06-2023 11:16:47
Job ID:    645516
User ID:   gburdell3
Account:   gts-gburdell3
Job name:  gslTest
Partition: cpu-small
QOS:       inferno
---------------------------------------
J0(5) = -1.775967713143382642e-01
---------------------------------------
Begin Slurm Epilog: Feb-06-2023 11:16:53
Job ID:        645516
Array Job ID:  _4294967294
User ID:       gburdell3
Account:       gts-gburdell3
Job name:      gslTest
Resources:     cpu=2,mem=4G,node=1
Rsrc Used:     cput=00:00:12,vmem=9268K,walltime=00:00:06,mem=0,energy_used=0
Partition:     cpu-small
QOS:           inferno
Nodes:         atl1-1-03-004-1-1
---------------------------------------
  • After the result files are produced, you can move the output file, as well as any other files, off the cluster. Refer to the file transfer guide for help.
  • Congratulations! You successfully used GSL on the cluster.