Run NCO on the Cluster
- netCDF Operators (NCO) is a suite of programs designed to facilitate manipulation and analysis of self-describing data stored in the netCDF format.
- This guide will cover how to run NCO on the Cluster.
- You can find the NCO home page here.
- You can find a useful slideshow about using NCO here.
- NCO includes many different programs, so the exact commands you execute depend on what operation you are trying to perform on your netCDF file.
- For this guide, we will show you how to run an NCO operation on the Cluster which you can substitute for any other operation that fits your needs.
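To give a flavor of what the suite covers, here is a sketch of a few commonly used NCO operators (the filenames and the variable name `tas` are hypothetical placeholders, and the commands assume the NCO module is already loaded on the cluster):

```shell
# Sketch of a few NCO operators (hypothetical filenames and variables):
ncks -v tas in.nc out.nc          # "kitchen sink": extract variable tas into a new file
ncra jan.nc feb.nc mar.nc q1.nc   # average the record dimension across the input files
ncatted -a units,tas,m,c,K in.nc  # modify the units attribute of tas to "K"
```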
Walkthrough: Run NCO on the Cluster
- This walkthrough will cover how to output longitude and latitude data from a netCDF file.
Input file: sresa1b_ncar_ccsm3-example.nc
The SBATCH script can be found here
- You can transfer the files to your account on the cluster to follow along. The file transfer guide may be helpful.
Part 1: The SBATCH Script
```bash
#!/bin/bash
#SBATCH -J ncoTest
#SBATCH -A [Account]
#SBATCH -N 1 --ntasks-per-node=2
#SBATCH --mem-per-cpu=2G
#SBATCH -t 3
#SBATCH -q inferno
#SBATCH -o Report-%j.out

cd $SLURM_SUBMIT_DIR
module load gcc/10.3.0
module load mvapich2/2.3.6
module load nco/5.0.1
ncks -v lat,lon sresa1b_ncar_ccsm3-example.nc
```

- The #SBATCH directives are standard, requesting just 3 minutes of walltime and 1 node with 2 cores. More on #SBATCH directives can be found in the Using Slurm on Phoenix Guide.
- `$SLURM_SUBMIT_DIR` is a variable that holds the directory you submit the SBATCH script from. Make sure the files you want to use are in the same directory you put the SBATCH script in.
- Output files will also show up in this directory.
- `module load nco/5.0.1` loads the 5.0.1 version of NCO. To see what NCO versions are available, run `module avail nco`, and load the one you want. The other modules are dependencies that must be loaded before NCO is loaded.
- `ncks -v lat,lon sresa1b_ncar_ccsm3-example.nc` is used to output the lat and lon variables from the input file.
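As a sketch, a few `ncks` variants can be run the same way on the cluster after `module load nco/5.0.1` (the output filename `lat_lon_subset.nc` is a hypothetical choice):

```shell
# Print lat and lon to stdout (what the job script above does):
ncks -v lat,lon sresa1b_ncar_ccsm3-example.nc
# Write the subset to a new netCDF file instead of printing it:
ncks -v lat,lon sresa1b_ncar_ccsm3-example.nc lat_lon_subset.nc
# Print only the file's metadata (dimensions, variables, attributes):
ncks -m sresa1b_ncar_ccsm3-example.nc
```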
Part 2: Submit Job and Check Status
- Make sure you're in the directory that contains the SBATCH script as well as the input file.
- Submit as normal, with `sbatch <script name>`.
- Check job status with `squeue --job <jobID>`, replacing `<jobID>` with the job ID returned after running sbatch.
- You can delete the job with `scancel <jobID>`, replacing `<jobID>` with the job ID returned after running sbatch.
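If you want to script these steps, you can capture the job ID at submission time and reuse it with squeue and scancel. A minimal sketch, assuming the SBATCH script is saved under a hypothetical name `ncoTest.sbatch`:

```shell
# sbatch prints "Submitted batch job <jobID>"; keep only the last word.
# The line below uses a stand-in string; on the cluster you would run:
#   submit_output=$(sbatch ncoTest.sbatch)
submit_output="Submitted batch job 123456"   # example output with a hypothetical job ID
jobid=${submit_output##* }                   # strip everything up to the last space
echo "jobid=$jobid"
# squeue --job "$jobid"   # check status
# scancel "$jobid"        # cancel if needed
```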
Part 3: Collecting Results
- Report-<jobID>.out should contain the lat and lon values printed by ncks.
- After the result files are produced, you can move the files off the cluster; refer to the file transfer guide for help.
- Congratulations! You successfully ran NCO on the cluster.