Updated 2023-03-31
Run OpenBLAS on the Cluster¶
Overview¶
- OpenBLAS is an optimized BLAS (Basic Linear Algebra Subprograms) library based on the GotoBLAS2 1.13 BSD version.
- Please read the documentation on the OpenBLAS wiki pages: https://github.com/xianyi/OpenBLAS/wiki.
Summary¶
- This guide covers how to run OpenBLAS on the cluster.
Walkthrough: Run OpenBLAS on the Cluster¶
- This walkthrough covers a simple example of using the BLAS matrix multiplication routine `sgemm` (a rough sketch of such a program is shown below).
- `blas_test.c` can be found here.
- The SBATCH script can be found here.
- You can transfer the files to your account on the cluster to follow along. The file transfer guide may be helpful.
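For orientation, here is a minimal sketch of what an `sgemm` test program could look like, written against OpenBLAS's CBLAS interface. This is an illustration only, not the contents of the linked `blas_test.c`; the matrix sizes and values are made up for the example.

#include <stdio.h>
#include <cblas.h>

int main(void)
{
    /* 2x2 single-precision matrices, stored in row-major order */
    float A[4] = {1.0f, 2.0f,
                  3.0f, 4.0f};
    float B[4] = {5.0f, 6.0f,
                  7.0f, 8.0f};
    float C[4] = {0.0f};

    /* C = alpha * A * B + beta * C, with alpha = 1 and beta = 0 */
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,      /* M, N, K */
                1.0f, A, 2,   /* alpha, A, lda */
                B, 2,         /* B, ldb */
                0.0f, C, 2);  /* beta, C, ldc */

    /* Expected product: [19 22; 43 50] */
    for (int i = 0; i < 2; i++)
        printf("%8.2f %8.2f\n", C[2 * i], C[2 * i + 1]);
    return 0;
}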
Part 1: The SBATCH Script¶
#!/bin/bash
#SBATCH -JopenblasTest
#SBATCH -A [Account]
#SBATCH -N1 --ntasks-per-node=1
#SBATCH -t15
#SBATCH -qinferno
#SBATCH -oReport-%j.out

cd $SLURM_SUBMIT_DIR          # move to the directory the job was submitted from
module load openblas          # load the OpenBLAS module
gcc blas_test.c -lopenblas    # compile the test program against OpenBLAS
- The `#SBATCH` directives are standard, requesting 15 minutes of walltime and 1 node with 1 task per node. More on `#SBATCH` directives can be found in the Using Slurm on Phoenix Guide.
- `$SLURM_SUBMIT_DIR` is a variable that holds the directory you submit the SBATCH script from. Make sure the files you want to use are in the same directory you put the SBATCH script in.
- `gcc` is used to compile `blas_test.c`. Note that the script only compiles the program; a hedged sketch of also running the binary follows this list.
- Output files will also show up in this directory.
- To see which OpenBLAS versions are available, run `module spider openblas`, and load the one you want.
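As written, the job compiles `blas_test.c` but never executes it, so the report file contains only the Slurm prolog and epilog. If you also want the job to run the compiled binary and capture its output, one possible extension (an assumption on our part, not part of the original example) is to append an execution step to the end of the script:

gcc blas_test.c -lopenblas    # compile against OpenBLAS, producing a.out
./a.out                       # run the sgemm test; its output lands in Report-<jobID>.out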
Part 2: Submit Job and Check Status¶
- Make sure you're in the directory that contains the SBATCH script and the `blas_test.c` file.
- Submit as normal with `sbatch <script name>`; in this case, `sbatch openblas.sbatch`.
- Check the job status with `squeue --job <jobID>`, replacing `<jobID>` with the job ID returned by `sbatch`.
- You can delete the job with `scancel <jobID>`, replacing `<jobID>` with the job ID returned by `sbatch`. The full sequence of commands is sketched after this list.
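Putting the commands together, a typical session looks like the following sketch, where `<jobID>` stands for the numeric ID that `sbatch` prints on submission:

sbatch openblas.sbatch    # submit; prints "Submitted batch job <jobID>"
squeue --job <jobID>      # check the job's state in the queue
scancel <jobID>           # only if you need to cancel the job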
Part 3: Collecting Results¶
- In the directory where you submitted the SBATCH script, you should see a `Report-<jobID>.out` file, which contains the results of the job, along with the compiled `a.out` binary.
- The `a.out` file can be found here.
- `Report-<jobID>.out` should look like this:
---------------------------------------
Begin Slurm Prolog: Feb-16-2023 21:56:22
Job ID: 727254
User ID: svangala3
Account: phx-pace-staff
Job name: openblasTest
Partition: cpu-small
QOS: inferno
---------------------------------------
---------------------------------------
Begin Slurm Epilog: Feb-16-2023 21:56:23
Job ID: 727254
Array Job ID: _4294967294
User ID: svangala3
Account: phx-pace-staff
Job name: openblasTest
Resources: cpu=1,mem=1G,node=1
Rsrc Used: cput=00:00:01,vmem=1752K,walltime=00:00:01,mem=0,energy_used=0
Partition: cpu-small
QOS: inferno
Nodes: atl1-1-03-004-1-2
---------------------------------------
- After the result files are produced, you can move them off the cluster; refer to the file transfer guide for help. A hedged example is sketched below.
- Congratulations! You successfully ran OpenBLAS on the cluster.
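For instance, you could pull the report file down to your local machine with `scp`. The hostname and remote path here are placeholders, not real values; use the login address and path given in the file transfer guide:

scp <username>@<cluster-login-host>:<path-to-submit-dir>/Report-<jobID>.out .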