Convert PBS Scripts to Slurm Scripts¶
Use this guide to convert PBS scripts written prior to May 2023 into Slurm scripts.
Slurm is a resource manager with integrated scheduling logic. Compared to Moab/Torque, Slurm eliminates the need for dedicated queues. In addition to the allocation of resources at the job level, jobs spawn steps (`srun` instances), which are further allocated resources from within the job's allocation. Job steps can therefore execute sequentially or concurrently.
Visit our Slurm on ICE guide for more information about using Slurm.
- Be sure to recompile software you have written or installed, particularly if it uses MPI. The Slurm cluster contains updated libraries.
- Update your `module load` commands to match the current software offerings on ICE.
How is Slurm usage different from Torque/Moab?¶
- What Moab called queues, Slurm calls partitions. On ICE, the partitions are automatically assigned based on the resources requested, and the Quality of Service (QOS) option (coc-ice or pace-ice). You will not be able to specify the partition.
- Resources are assigned per task/process. One core is given per task by default.
- Environment variables from the submitting process are passed to the job by default. Use `--export=NONE` to have a clean environment when running jobs. The default means that variables like `$HOSTNAME` will be cloned from the login node when jobs are submitted from it.
- The first line of a Slurm job script must be `#!<shell>`; see the conversion examples section below.
- Slurm jobs start in the submission directory rather than the user's home directory.
- Slurm jobs combine stdout and stderr into a single output log file by default. To write stderr to a separate file, provide the `-e` option. In Moab, stdout and stderr went to different files by default and were merged with the `-j oe` option.
- Slurm can send email when your job reaches a certain percentage of its walltime limit, e.g. `sbatch --mail-type=TIME_LIMIT_90 myjob.txt`.
- The default memory request on Slurm is 1 GB/core. To request all the memory on a node, include `--mem=0`.
- Requesting a number of nodes or cores is structured differently. To request an exact number of nodes, use `-N`. To request an exact number of cores per node, use `--ntasks-per-node`. To request a total number of cores, use `-n` (`--ntasks`).
- The commands used to submit and manage jobs on the cluster are different for Slurm than they were for Moab. To submit jobs, you will now use the `sbatch` and `srun` commands. To check job status, you will most commonly use `squeue`.
- Arrays are given a `SLURM_ARRAY_JOB_ID` for the parent job, and each child job gets its own `SLURM_JOB_ID`. Moab would assign the same `PBS_JOBID` to each job with a different index.
- To include environment variables in output file names in Slurm, you need to use filename patterns as follows: job name `%x`, job ID `%j`, job array index `%a` (see the sample script after this list).
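
As a sketch only, the header below pulls several of these options together in one submission script. The job name, QOS, resource amounts, module names, and the executable `my_program` are placeholders, not ICE-specific requirements:

```bash
#!/bin/bash
#SBATCH -J myjob                    # job name (%x in filename patterns)
#SBATCH -q coc-ice                  # QOS (coc-ice or pace-ice); the partition is assigned automatically
#SBATCH -N 2                        # exact number of nodes
#SBATCH --ntasks-per-node=4         # cores (tasks) per node
#SBATCH --mem-per-cpu=2G            # per-core memory (default is 1 GB/core; --mem=0 requests a whole node's memory)
#SBATCH -t 01:00:00                 # walltime limit
#SBATCH -o %x-%j.out                # stdout and stderr combined here unless -e is also given
#SBATCH --mail-type=TIME_LIMIT_90   # email at 90% of the walltime limit
##SBATCH --export=NONE              # uncomment for a clean environment instead of login-node variables

module load gcc mvapich2            # placeholder modules; use the current ICE offerings

srun ./my_program                   # srun launches tasks across the allocated resources
```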
`srun` is the standard Slurm command to start an MPI program. It automatically uses the allocated job resources: node list, tasks, and logical cores per task. Do not use `mpirun` or `mpiexec` with Slurm; use `srun` instead.
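
A minimal sketch of the launcher-line change, assuming an MPI executable named `./mpi_app` that was previously started with `mpirun`:

```bash
# Moab/Torque (old):
#   mpirun -np 16 ./mpi_app
#
# Slurm (new): srun reads the node list and task count from the job
# allocation, so no process count needs to be repeated on the command line.
srun ./mpi_app
```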
This table lists the most common commands, environment variables, and job specification options used by the major workload management systems. Refer to this cheat sheet when converting PBS scripts and user commands to their Slurm equivalents. A full list of Slurm commands can be found here. Further guidelines for more advanced scripts are in the user documentation on this page.
The following example shows how a PBS script can be rewritten as a Slurm script.
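
For illustration only (directive values, module names, and the executable `./my_program` are placeholders), a simple Moab/Torque script along these lines:

```bash
#PBS -N myjob
#PBS -l nodes=2:ppn=4
#PBS -l walltime=01:00:00
#PBS -l pmem=2gb
#PBS -j oe
#PBS -o myjob.out

cd $PBS_O_WORKDIR
module load gcc mvapich2
mpirun -np 8 ./my_program
```

could be rewritten for Slurm as:

```bash
#!/bin/bash
#SBATCH -J myjob
#SBATCH -N 2
#SBATCH --ntasks-per-node=4
#SBATCH -t 01:00:00
#SBATCH --mem-per-cpu=2G
#SBATCH -o myjob-%j.out     # stdout and stderr are combined by default

# No cd needed: Slurm jobs already start in the submission directory.
module load gcc mvapich2
srun ./my_program
```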
Job Submission Examples¶
(Multiple `srun` job steps can execute in parallel using `&`, as in the sketch below.)
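
A minimal sketch of concurrent job steps, assuming two placeholder executables `./step_one` and `./step_two`; each `srun` takes a slice of the job's allocation, and `wait` keeps the batch script alive until both steps finish:

```bash
#!/bin/bash
#SBATCH -N 2
#SBATCH --ntasks-per-node=4
#SBATCH -t 01:00:00

# Each step gets one node and four tasks from the allocation; & runs them concurrently.
srun -N 1 -n 4 ./step_one &
srun -N 1 -n 4 ./step_two &
wait   # do not let the job end before both steps complete
```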