Convert PBS Scripts to Slurm Scripts¶
SLURM is a resource manager with scheduling logic integrated into it. In comparison to Moab/Torque, SLURM eliminates the need for dedicated queues. In addition to allocation of resources at the job level, jobs spawn steps (`srun` instances), which are further allocated resources from within the job's allocation. The job steps can therefore execute sequentially or concurrently.
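The job/step model can be sketched as follows; the job name, program names, and resource counts are placeholders, not site-specific values:

```shell
#!/bin/bash
#SBATCH -J step-demo            # job name (hypothetical)
#SBATCH -N 2                    # two nodes for the whole job
#SBATCH --ntasks-per-node=4     # four tasks per node

# Each srun below launches a job step inside the job's allocation.
srun -n 8 ./preprocess          # step 1: uses all 8 tasks, runs alone
srun -n 4 ./solver_a &          # steps 2 and 3: each takes half the
srun -n 4 ./solver_b &          # allocation and they run concurrently
wait                            # block until both background steps finish
```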
- Be sure to recompile software you have written or installed, particularly if it uses MPI. The Slurm cluster contains updated libraries.
- Update `module load` commands to the current software offerings on Hive.
- Look up your tracking account with
How is Slurm usage different from Torque/Moab?¶
- What Moab called queues, Slurm calls partitions. There is no default partition; users must specify one.
- Resources are assigned per task/process. One core is given per task by default.
- Environment variables from the submitting process are passed to the job by default. Use `--export=NONE` to have a clean environment when running jobs. The default means that variables like `$HOSTNAME` will be cloned from the login node when jobs are submitted from it.
- Jobs can be submitted to multiple partitions to run on the first one with availability. User can provide a comma-separated list of partitions in the submission script.
- The first line of a Slurm job script must be `#!<shell>`; see the conversion examples section below.
- Slurm jobs start in the submission directory rather than in the user's home directory, as Torque jobs did.
- Slurm combines stdout and stderr into a single output log file by default. To write stderr to a separate file, provide the `-e` option. In Moab, stdout and stderr went to different files by default, and they were merged with `-j oe`.
- Slurm can send email when your job reaches a certain percentage of its walltime limit, e.g. `sbatch --mail-type=TIME_LIMIT_90 myjob.txt`.
- The default memory request on Slurm is 1 GB/core. To request all the memory on a node, include `--mem=0`.
- Requesting a number of nodes or cores is structured differently. To request an exact number of nodes, use `-N`. To request an exact number of cores per node, use `--ntasks-per-node`. To request a total number of cores, use `-n` (`--ntasks`).
- The commands that you use to submit and manage jobs on the cluster are different for SLURM than they were for Moab. To submit jobs, you will now use the `sbatch` or `srun` commands. To check job status, you will most commonly use `squeue`.
- Array jobs are given a `SLURM_ARRAY_JOB_ID` for the parent job, and each child job gets its own `SLURM_JOB_ID`. Moab would assign the same `PBS_JOBID` to each job with a different index. For more options and guidelines on how to use arrays in SLURM, please visit the Array Jobs section on the following page.
- To include environment variables for naming output files in SLURM, use file patterns as follows: job name `%x`, job ID `%j`, job array master ID `%A`, and array task ID `%a`.
- `srun` is the standard SLURM command to start an MPI program. It automatically uses the allocated job resources: node list, number of tasks, and logical cores per task. Do not use `mpirun` or `mpiexec` with Slurm; use `srun` instead.
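Several of the options discussed above can be combined in a single batch script. A minimal sketch follows; the partition name (`hive`), task counts, and program name are placeholders, and actual values for your site should come from your cluster's documentation:

```shell
#!/bin/bash                         # first line must be #!<shell>
#SBATCH -p hive                     # partition: no default, must be specified
#SBATCH -N 2                        # exact number of nodes
#SBATCH --ntasks-per-node=24        # tasks (cores) per node
#SBATCH --mem=0                     # request all memory on each node
#SBATCH --export=NONE               # start with a clean environment
#SBATCH -o %x-%j.out                # output file named from job name (%x) and job ID (%j)
#SBATCH --mail-type=TIME_LIMIT_90   # email at 90% of the walltime limit

srun ./my_mpi_program               # srun replaces mpirun/mpiexec
```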
This table lists the most common commands, environment variables, and job specification options used by the major workload management systems. Users can refer to this cheat sheet when converting their PBS scripts and user commands to SLURM. A full list of SLURM commands can be found here. Further guidelines on more advanced scripts are in the user documentation on this page.
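For quick reference, a few of the most common command equivalences are sketched below (a small subset of the full table, not a replacement for it):

```shell
# Torque/Moab                Slurm equivalent
qsub myjob.pbs             # sbatch myjob.sbatch    (submit a batch job)
qstat -u $USER             # squeue -u $USER        (check your jobs' status)
qdel <jobid>               # scancel <jobid>        (cancel a job)
qsub -I                    # salloc                 (interactive allocation)
```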
The following PBS script commands can be rewritten as a SLURM script below.
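As an illustrative conversion (not a site-specific template — the job name, partition, and program are hypothetical), each Slurm directive below is annotated with the PBS directive it replaces:

```shell
#!/bin/bash
#SBATCH -J hello                # was: #PBS -N hello
#SBATCH -p mypartition          # was: #PBS -q myqueue (placeholder partition name)
#SBATCH -N 2                    # was: #PBS -l nodes=2:ppn=4 (node count)
#SBATCH --ntasks-per-node=4     # was: ppn=4 (processes per node)
#SBATCH -t 01:00:00             # was: #PBS -l walltime=01:00:00

# No 'cd $PBS_O_WORKDIR' needed: Slurm starts in the submission directory.
# stdout and stderr are combined by default: no '#PBS -j oe' equivalent needed.
srun ./hello                    # was: mpirun -np 8 ./hello
```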
Job Submission Examples¶
(Multiple `srun` steps can execute in parallel using `&`.)
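A minimal sketch of this pattern, with placeholder program names and node counts:

```shell
#!/bin/bash
#SBATCH -N 2
#SBATCH --ntasks-per-node=4

srun -N 1 -n 4 ./task_a &    # step on one node, launched in the background
srun -N 1 -n 4 ./task_b &    # step on the other node, running concurrently
wait                         # wait for both steps to complete before exiting
```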
This material is based upon work supported by the National Science Foundation under grant number 1828187. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.