Updated 2023-03-31

Run Cap’n Proto C++ on the Cluster

Summary

  • Cap'n Proto C++ is a fast data interchange format and RPC system
  • Use module avail capnproto to see all the available versions on the cluster. Currently, on RHEL6, only capnproto-c++/0.8.0 is available
  • To load Cap'n Proto in your SBATCH script:
    • Load Cap'n Proto with module load capnproto/0.8.0
  • To run Cap'n Proto:
    • In your SBATCH script, put all lines executing Cap'n Proto after the module load line.
    • Example: capnp compile -oc++ myschema.capnp would go in the SBATCH script after the lines that load the correct modules for Cap'n Proto

Walkthrough: Run Cap'n Proto on the Cluster

Part 1: The SBATCH Script

#!/bin/bash
#SBATCH -JcapnprotoTest
#SBATCH -A [Account]
#SBATCH -N2 --ntasks-per-node=4
#SBATCH -t20
#SBATCH -qinferno
#SBATCH -oReport-%j.out

cd $SLURM_SUBMIT_DIR
module load capnproto/0.8.0
capnp compile -oc++ myschema.capnp
  • The #SBATCH directives are standard, requesting 20 min of walltime and 2 nodes with 4 tasks per node. More on #SBATCH directives can be found in the Using Slurm on Phoenix Guide

Note

If using $SLURM_SUBMIT_DIR, the .capnp file, as well as any other files required for the job, must be stored in the same folder as the SBATCH script

  • $SLURM_SUBMIT_DIR is simply a variable that holds the directory you submit the SBATCH script from. The cd line tells the job to enter that directory and look for all its files there, so the .capnp files and any other files you need must be in the same folder as the SBATCH script; otherwise the cluster won't be able to find them.
  • The module load line loads Cap'n Proto along with its dependency (gcc)
  • The capnp compile line is just a general example showing how Cap'n Proto might be used, and is taken from the Cap'n Proto Documentation. Check out the documentation for much more info on Cap'n Proto's functionality.
  • The point of the capnp compile line is to show that commands executing the program must come after:
    • entering the folder containing the SBATCH script and all the input files, in this case achieved with cd $SLURM_SUBMIT_DIR
    • loading Cap'n Proto with the module load line
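As a concrete illustration of what the capnp compile line consumes, here is a minimal sketch of a schema file. The Person struct and the file ID below are made up for this example (any unique 64-bit ID works; capnp id generates one); they are not part of the walkthrough.

```shell
# Create a minimal example schema for the capnp compile line above.
# The struct and file ID are illustrative, not from the walkthrough.
cat > myschema.capnp <<'EOF'
@0xbf5147cbbecf40c1;  # unique 64-bit file ID (generate one with: capnp id)
struct Person {
  name @0 :Text;
  age  @1 :UInt16;
}
EOF

# On the cluster, after the module load line, running:
#   capnp compile -oc++ myschema.capnp
# produces myschema.capnp.h and myschema.capnp.c++ next to the schema.
```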

Part 2: Submitting the SBATCH Script

  • Make sure you're in the directory that contains the SBATCH script as well as the .capnp file
  • Submit as normal, with sbatch <script name>. In this case: sbatch capnproto.sbatch
  • Check job status with squeue --job <jobID>, replacing <jobID> with the job ID returned after running sbatch
  • You can delete the job with scancel <jobID>, replacing <jobID> with the job ID returned after running sbatch
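The submit/check/cancel steps above can be chained without copying the job ID by hand: sbatch's --parsable flag prints only the job ID. This is a sketch to run on a login node, assuming the script is named capnproto.sbatch as in the step above.

```shell
# Submit and capture the job ID directly (--parsable prints only the ID).
jobid=$(sbatch --parsable capnproto.sbatch)
echo "Submitted job $jobid"

squeue --job "$jobid"   # check job status
# scancel "$jobid"      # uncomment to cancel the job
```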

Collecting Results

  • All files created will be in the same folder where your SBATCH script is (same directory you ran sbatch from)
  • The .out file will be found here as well. It contains the results of the job, as well as diagnostics and a report of resources used during the job. If the job fails or doesn't produce the result you were hoping for, the .out file is a great debugging tool.
  • Here is what the Report-<jobID>.out file looks like:
---------------------------------------
Begin Slurm Prolog: Jan-19-2023 14:38:38
Job ID:    529864
User ID:   svangala3
Account:   phx-pace-staff
Job name:  capnprotoTest
Partition: cpu-small
QOS:       inferno
---------------------------------------
---------------------------------------
Begin Slurm Epilog: Jan-19-2023 14:38:39
Job ID:        529864
Array Job ID:  _4294967294
User ID:       svangala3
Account:       phx-pace-staff
Job name:      capnprotoTest
Resources:     cpu=8,mem=8G,node=2
Rsrc Used:     cput=00:00:08,vmem=4492K,walltime=00:00:01,mem=0,energy_used=0
Partition:     cpu-small
QOS:           inferno
Nodes:         atl1-1-02-014-20-2,atl1-1-02-014-27-2
---------------------------------------
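For long reports, the requested and used resource lines can be pulled out with grep. So the command can be tried anywhere, the sketch below first writes a tiny sample file mirroring the epilog format above; Report-sample.out is a made-up name, and on the cluster you would grep the real Report-<jobID>.out instead.

```shell
# Sample file mirroring the epilog format shown above (made-up filename).
cat > Report-sample.out <<'EOF'
Resources:     cpu=8,mem=8G,node=2
Rsrc Used:     cput=00:00:08,vmem=4492K,walltime=00:00:01,mem=0,energy_used=0
EOF

# Pull just the requested vs. used resource lines out of the report.
grep -E "Resources|Rsrc Used" Report-sample.out
```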