Updated 2023-03-31
Run Cap’n Proto C++ on the Cluster¶
Summary¶
- Cap'n Proto C++ is a fast data interchange format and RPC system.
- Use `module avail capnproto` to see all of the available versions on the cluster. Currently, on RHEL6, only `capnproto-c++/0.8.0` is available (a sketch of checking and loading the module interactively follows this summary).
- To load Cap'n Proto in your SBATCH script, use `module load capnproto/0.8.0`.
- To run Cap'n Proto, put all lines executing Cap'n Proto in your SBATCH script after the `module load` line. For example, `capnp compile -oc++ myschema.capnp` would go in the SBATCH script after the lines that load the correct modules for Cap'n Proto.
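If you want to confirm what is installed before writing your job script, you can check and load the module from a login or interactive session. A minimal sketch (the exact module listing and version output may differ):

```bash
# List the Cap'n Proto modules installed on the cluster
module avail capnproto

# Load the documented version and confirm the capnp tool is on your PATH
module load capnproto/0.8.0
capnp --version
```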
Walkthrough: Run Capnproto on the Cluster¶
- The SBATCH script can be found here.
- The capnproto_tutorial folder contains the input file as well as all of the output files referenced in this documentation.
- You can transfer the files to your account on the cluster to follow along. The file transfer guide may be helpful; one possible approach is sketched below.
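As an example, the tutorial files could be copied to the cluster with `scp`. This is only a sketch: the local path, the username, and the `login-phoenix.pace.gatech.edu` hostname are assumptions, so substitute your own values (or use any other method from the file transfer guide).

```bash
# Run from your local machine: copy the tutorial folder to your cluster home directory
scp -r capnproto_tutorial gtusername@login-phoenix.pace.gatech.edu:~/
```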
Part 1: The SBATCH Script¶
#!/bin/bash
#SBATCH -JcapnprotoTest
#SBATCH -A [Account]
#SBATCH -N2 --ntasks-per-node=4
#SBATCH -t20
#SBATCH -qinferno
#SBATCH -oReport-%j.out
cd $SLURM_SUBMIT_DIR
module load capnproto/0.8.0
capnp compile -oc++ myschema.capnp
- The `#SBATCH` directives are standard, requesting 20 minutes of walltime and 2 nodes with 4 cores per node. More on `#SBATCH` directives can be found in the Using Slurm on Phoenix Guide.
Note

If using `$SLURM_SUBMIT_DIR`, the `.capnp` file, as well as any other files required for the job, must be stored in the same folder as the SBATCH script.

`$SLURM_SUBMIT_DIR` is simply a variable that represents the directory you submit the SBATCH script from. Make sure the `.capnp` files, and any other files you need, are in the same directory you put the SBATCH script in. This line tells the cluster to enter the directory where you have stored the SBATCH script and look for all the files for the job. If you use `$SLURM_SUBMIT_DIR`, you need to have all your files in the same folder as your SBATCH script, otherwise the cluster won't be able to find the files it needs.

- The `module load` line loads Cap'n Proto and its dependent module (gcc).
- The `capnp compile` line is just a general example showing how Cap'n Proto might be used, and is taken from the Cap'n Proto Documentation. Check out the documentation for much more info on Cap'n Proto's functionality. A sketch of a schema file it could compile is shown after this list.
- The point of the `capnp compile` line is to show that commands executing the program must be included after:
    - Entering the correct folder with all the files and the SBATCH script, in this case achieved with `cd $SLURM_SUBMIT_DIR`
    - Cap'n Proto is loaded with the `module load` line
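For context, here is a minimal sketch of what the schema file and compile step might look like. The contents of `myschema.capnp` and its file ID are illustrative assumptions (generate a real ID with `capnp id`); only the `capnp compile -oc++` command itself comes from the Cap'n Proto documentation.

```bash
# Create a hypothetical schema file to compile (illustrative contents only)
cat > myschema.capnp <<'EOF'
@0xbf5147cbbecf40c1;   # unique file ID; generate your own with `capnp id`

struct Person {
  name  @0 :Text;
  email @1 :Text;
}
EOF

module load capnproto/0.8.0
# Generates myschema.capnp.h and myschema.capnp.c++ in the current directory,
# which you can then include and build into your own C++ program
capnp compile -oc++ myschema.capnp
```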
Part 2: Submit the Job¶
- Make sure you're in the directory that contains the SBATCH script as well as the `.capnp` file.
- Submit as normal, with `sbatch <script name>`. In this case: `sbatch capnproto.sbatch` (the full sequence is sketched after this list).
- Check job status with `squeue --job <jobID>`, replacing `<jobID>` with the job ID returned after running `sbatch`.
- You can delete the job with `scancel <jobID>`, replacing `<jobID>` with the job ID returned after running `sbatch`.
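Putting those steps together, an end-to-end session might look like the following sketch. The `~/capnproto_tutorial` path is an assumption; use whatever directory holds your files, and substitute the job ID that `sbatch` actually prints.

```bash
cd ~/capnproto_tutorial     # assumed location of capnproto.sbatch and myschema.capnp
sbatch capnproto.sbatch     # prints: Submitted batch job <jobID>

squeue --job 529864         # replace 529864 with the job ID printed by sbatch
scancel 529864              # only needed if you want to cancel the job early
```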
Part 3: Collecting Results¶
- All files created will be in the same folder where your SBATCH script is (the same directory you ran `sbatch` from).
- The `.out` file will be found here as well. It contains the results of the job, as well as diagnostics and a report of the resources used during the job. If the job fails or doesn't produce the result you were hoping for, the `.out` file is a great debugging tool.
- Here is what the `Report-<jobID>.out` file should look like:
---------------------------------------
Begin Slurm Prolog: Jan-19-2023 14:38:38
Job ID: 529864
User ID: svangala3
Account: phx-pace-staff
Job name: capnprotoTest
Partition: cpu-small
QOS: inferno
---------------------------------------
---------------------------------------
Begin Slurm Epilog: Jan-19-2023 14:38:39
Job ID: 529864
Array Job ID: _4294967294
User ID: svangala3
Account: phx-pace-staff
Job name: capnprotoTest
Resources: cpu=8,mem=8G,node=2
Rsrc Used: cput=00:00:08,vmem=4492K,walltime=00:00:01,mem=0,energy_used=0
Partition: cpu-small
QOS: inferno
Nodes: atl1-1-02-014-20-2,atl1-1-02-014-27-2
---------------------------------------
- You can transfer the resulting files off the cluster using the file transfer guide; one possible approach is sketched below.
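For example, the report and any generated files could be copied back with `scp`, run from your local machine. As before, the username, hostname, and paths are assumptions; adjust them to your setup.

```bash
# Run from your local machine: pull the job report out of the tutorial folder on the cluster
scp 'gtusername@login-phoenix.pace.gatech.edu:~/capnproto_tutorial/Report-*.out' .
```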