Phoenix Migration to Slurm¶
The Phoenix cluster migrated to the Slurm scheduler between October 2022 and January 2023. PACE worked closely with the PACE Advisory Committee (PAC) on the migration plan to ensure minimal interruption to research.
Join a PACE Slurm Orientation session to learn more about using Slurm on Phoenix.
The Phoenix-Slurm cluster features a new set of applications provided in the PACE Apps software stack. Please review this list of software we offer on Phoenix post-migration and let us know via email if any software you are currently using on Phoenix is missing from that list. We encourage you to let us know as soon as possible to avoid any potential delay to your research as the migration process concludes. We have reviewed batch job logs to determine packages in use and upgraded them to the latest version.
To load the default/latest version of a software package, use module load without specifying a version number, e.g., module load anaconda3.
Researchers installing or writing their own software will also need to recompile applications to reflect new MPI and other libraries.
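As a rough illustration of that recompilation step, the workflow might look like the following sketch. The specific module names (gcc, mvapich2) and build commands are assumptions for illustration; run module avail on Phoenix-Slurm to see the toolchains actually provided.

```shell
# Load the new compiler and MPI stack (module names are assumptions;
# run "module avail" to see what Phoenix-Slurm actually provides)
module load gcc mvapich2

# Recompile an MPI application against the new libraries
mpicc -O2 -o my_app my_app.c

# For larger projects, a clean rebuild ensures the new MPI and other
# libraries are picked up everywhere
make clean && make
```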
To access Phoenix-Slurm via ssh, use the address login-phoenix.pace.gatech.edu. This will provide access to the new environment and your existing Phoenix home, project, and scratch storage.
$ ssh gburdell3@login-phoenix.pace.gatech.edu
As with the existing Phoenix cluster, Phoenix-Slurm will require charge accounts in order to track usage and charges (when using inferno). To find your Phoenix-Slurm charge accounts, run pace-quota while logged into the Phoenix-Slurm cluster. Charge accounts will be of the form gts-<PI username>, e.g., gts-gburdell3 for researchers in Prof. Burdell's group. Note that the prefix "gts-" is different from the prefix "GT-" used on the existing Phoenix cluster.
Researchers working with more than one faculty member may have multiple charge accounts and should choose the one that best fits the project supervisor for each job run.
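In addition to pace-quota, Slurm's own accounting tools can list the accounts associated with your user. A minimal sketch using the standard sacctmgr command (field names may vary slightly with Slurm version):

```shell
# List the charge accounts (associations) for the current user;
# -n suppresses the header line
sacctmgr show associations user=$USER format=Account,QOS -n
```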
Add charge accounts to your Slurm requests with the -A flag (or a #SBATCH -A directive in batch scripts), e.g., -A gts-gburdell3.
QOS Replacing Queues¶
The existing Phoenix cluster used two queues, inferno (paid queue) or embers (free backfill queue), to submit jobs. On Phoenix-Slurm, queues are replaced by requesting a Quality of Service (QOS), either inferno or embers. Jobs are assigned to Slurm partitions automatically based on your charge account and resources requested. The Phoenix-Slurm accounting system will use the assigned partition to charge you based on your charge account, QOS, and resources requested for your job using current rates.
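Putting the charge account and QOS together, a Phoenix-Slurm batch script might look like the following sketch. The job name, account, resource values, and my_script.py are placeholders to adapt to your own work:

```shell
#!/bin/bash
#SBATCH -J my-job                  # job name (placeholder)
#SBATCH -A gts-gburdell3           # charge account, form gts-<PI username>
#SBATCH -q inferno                 # QOS: inferno (paid) or embers (free backfill)
#SBATCH -N 1 --ntasks-per-node=4   # resources requested (placeholders)
#SBATCH -t 1:00:00                 # walltime
#SBATCH -o Report-%j.out           # output file, %j expands to the job ID

cd $SLURM_SUBMIT_DIR               # run from the submission directory
module load anaconda3              # example module from this guide
srun python my_script.py           # my_script.py is a placeholder
```

Submit the script with sbatch, e.g., sbatch my_job.sbatch; jobs are then routed to a partition automatically based on the account and resources requested.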
Visit our Slurm usage on Phoenix guide to learn Slurm commands and find example scripts, and visit our conversion guide for detailed instructions on converting existing PBS scripts to Slurm scripts.
Open OnDemand, Jupyter, and VNC¶
The Phoenix OnDemand portal now supports the Slurm scheduler! You can access all OnDemand apps, including Jupyter and Interactive Desktop, from the “Phoenix Slurm Interactive Apps” menu. Learn more about Open OnDemand in our guide.
The pace-vnc-job commands will be retired with the migration to Slurm. Please use OnDemand to access Jupyter notebooks, VNC sessions, and more on Phoenix-Slurm via your browser.