Updated 2022-09-14

ICE User's Guide


Visit our ICE Overview to learn more about PACE-ICE and COC-ICE services and how instructors can request access. This page offers information about using ICE as a student.

Welcome to our guide for students on PACE-ICE and COC-ICE, PACE's Instructional Cluster Environments. These resources offer an educational environment that matches our research clusters, giving thousands of graduate and undergraduate students ample opportunity each year to gain first-hand scientific computing experience, including HPC and GPU programming. The entire PACE scientific software repository is also accessible to all ICE students, mirroring our production research clusters.

Most of the pages in our PACE documentation apply to the ICE clusters. Go ahead and browse them while you're here! See below for information specific to PACE-ICE and COC-ICE.

Support Structure

PACE does not provide direct support for students in classes using ICE. Students should contact their course instructors and TAs for assistance with ICE. Instructors, TAs, and distributed IT professionals supporting courses on ICE may contact PACE Support.

Accessing Clusters via Terminal


ICE is now accessible via Open OnDemand, permitting you to conduct all activity from a web browser, including easy access to graphical interactive jobs. See below for more information.

For traditional computing, you will need an SSH client (terminal) to connect to ICE. Recommended options:

  • Windows: PowerShell (built-in on Windows 10) or Windows Subsystem for Linux (WSL)

  • Mac: Terminal (built-in)

  • Linux: System-default terminal (GNOME, KDE, etc.)

You will also need to connect to Georgia Tech's VPN. Connect to the client-based GlobalProtect VPN before attempting to access PACE resources. The AnyConnect VPN will also work until its retirement.

The clusters are accessed using SSH to the proper login node. For PACE-ICE, use pace-ice.pace.gatech.edu. For COC-ICE, use coc-ice.pace.gatech.edu. Connect with your GT username and password.

For example, from the terminal, enter ssh gburdell3@pace-ice.pace.gatech.edu, then provide your GT password at the prompt. No characters will appear as you type your password; enter it and press <Enter>.

Structure of a Computational Cluster

  • Head Nodes (Login Nodes)

    • Where you log in
    • Submit jobs from here
    • Can edit and compile small-scale programs
    • Access storage
    • Login nodes are shared by all. Please do not use the login nodes for any resource-intensive activities, as it prevents other students from using the cluster. PACE will stop processes that continue for too long or use too many resources, in order to ensure functionality of the login node. Please use a compute node for all computational work. An interactive job (see below) provides an interactive environment on a compute node.
  • Compute Nodes

    • For running all computations
    • Assigned by the scheduler and accessed only when assigned
    • Access storage
    • May vary in their computational capability
    • For a list of compute nodes on each cluster, visit the ICE Overview.
  • Storage Servers

    • ICE home directories are located on OIT's centralized NetApp storage device.
    • Each user has a quota of 15 GB. Instructors may request additional space for individual accounts if needed.
    • Run the pace-quota command to check your current storage utilization.
    • Transfer files to/from ICE via SCP (Mac/Linux or Windows) or SFTP.
    • COC-ICE and PACE-ICE each have their own storage, which are in turn separate from storage on PACE's research clusters.
  • Scheduler

    • Manages compute job requests and assigns resources on compute nodes
    • Ensures a fair use of shared resources
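
As an example of the SCP transfer mentioned above, the following commands can be run from your own machine's terminal (gburdell3 and the filenames are placeholders; substitute your GT username and paths):

```shell
# Copy a local file to your ICE home directory; you will be
# prompted for your GT password (VPN connection required).
scp myscript.py gburdell3@pace-ice.pace.gatech.edu:~/

# Copy a results file from ICE back to your current local directory.
scp gburdell3@pace-ice.pace.gatech.edu:~/results.txt .
```

For COC-ICE, replace the hostname with coc-ice.pace.gatech.edu.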


Software

PACE-ICE and COC-ICE provide access to PACE's full software stack for scientific computing. Computational clusters often have many software packages, which may have conflicting names and versions, so a module system allows you to load only the software you need.

Use module spider to search for software and module load to load it (make it available for use). For more details, read our modules guide.
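For instance, finding and loading a package might look like the following (the module names shown are illustrative; run module spider on the cluster to see what is actually installed):

```shell
# Search the module tree for anything matching "python"
module spider python

# Load a module once you know its name
module load anaconda3

# List the modules currently loaded in your session
module list
```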

Software available on ICE includes licensed packages (such as MATLAB) with GT licenses, open source packages, compilers (including GNU and Intel compilers for C/C++ and Fortran), and scripting languages (including Python, Perl, R, and more). Check out the "Software" category in the menu to the left for detailed guides to using dozens of the most popular software packages on PACE. We have many other software packages installed as well, beyond those listed in the software guide.

If you use Python on ICE, we strongly recommend using Anaconda to manage your packages. Our Anaconda guide provides step-by-step instructions. On ICE clusters, skip Step 1 (storage setup).
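As a sketch of that workflow, creating and using a private environment might look like this (the module name, environment name, and packages are illustrative; follow the Anaconda guide for the exact steps):

```shell
# Load Anaconda (check the exact module name with "module spider anaconda")
module load anaconda3

# Create an environment for a course project and install packages into it
conda create -n hw1-env python=3.9 numpy scipy

# Activate it before running your code; deactivate when done
conda activate hw1-env
python my_assignment.py
conda deactivate
```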

ICE Queues

To access compute nodes, make a request to the scheduler. Specify the queue that matches your needs.

PACE-ICE        Max CPU per Job   Max Walltime   Note
pace-ice        80                8:00:00
pace-ice-gpu    48                8:00:00        Max 2 GPUs per job


The pace-ice-gpu and coc-ice-gpu queues include both TeslaV100 and Quadro RTX6000 GPUs from Nvidia. By default, jobs will be assigned to the first available GPU of either type. For most applications, PACE does not expect you to find a significant difference between the two architectures. If you need to use a specific architecture, you may specify an additional flag in the job submission, using RTX6000 for an RTX6000 and teslav100, TeslaV100, or TeslaV100-16GB for a V100. For example:

  • “nodes=1:ppn=6:gpus=1” will be assigned 1 GPU of either type
  • “nodes=1:ppn=6:gpus=1:RTX6000” will ensure assignment to an RTX6000
  • “nodes=1:ppn=6:gpus=1:teslav100” will ensure assignment to a V100.
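Put together with a queue and walltime, a complete interactive GPU request might look like the following (the CPU count and walltime are illustrative; adjust for your workload):

```shell
# Interactive job on pace-ice-gpu: 1 node, 6 cores, 1 GPU of either type, 2 hours
qsub -I -q pace-ice-gpu -l nodes=1:ppn=6:gpus=1,walltime=2:00:00

# The same request, but pinned to an RTX6000
qsub -I -q pace-ice-gpu -l nodes=1:ppn=6:gpus=1:RTX6000,walltime=2:00:00
```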

COC-ICE         Max CPU per Job   Max Walltime   Note
coc-ice         48                2:00:00        Higher priority
coc-ice-gpu     48                2:00:00        For GPU jobs with V100 or RTX6000, higher priority
coc-ice-multi   128               1:00:00        For MPI jobs, lower priority
coc-ice-long    48                8:00:00        Lower priority
coc-ice-devel   128               8:00:00        Limited access, lowest priority
coc-ice-grade   128               12:00:00       Instructors/TAs only, highest priority
ice-epyc-cpu    64                2:00:00        For jobs with AMD EPYC CPUs
ice-epyc-gpu    64                2:00:00        For jobs with AMD EPYC CPUs + Nvidia A100 GPUs


Six COC-ICE nodes contain AMD EPYC CPUs, rather than Intel CPUs. These are available in the ice-epyc-cpu and ice-epyc-gpu queues; the nodes in the GPU queue also host Nvidia A100 GPUs. To use the AMD architecture, please follow these guidelines:

  • The AMD nodes feature a limited software stack, focused on the core tools, compilers, and libraries most relevant for CoC students.
  • Since the COC-ICE login nodes feature the Intel software stack, please use an interactive job on an AMD node to compile all code to run on this hardware, rather than compiling on the login node. From the login node, the command qsub -I -q ice-epyc-cpu will start a minimal interactive job for 1 hour which can be used for compiling against the AMD software stack.
    • Additional options can be used with qsub. For example, start an interactive job for 2 hours on 1 CPU with 7 GB of memory with qsub -I -q ice-epyc-cpu -l nodes=1:ppn=1,pmem=7gb,walltime=2:00:00.
    • Within this interactive job, run module spider to find available packages and module load <package> to load them. Then, compile your code.
    • Exit the interactive session with exit.
  • For batch jobs, select ice-epyc-cpu or ice-epyc-gpu as the queue and reference modules available on AMD nodes and code compiled on them. These PBS scripts can be submitted via qsub on the regular login nodes.
  • To request an A100 GPU, use the ice-epyc-gpu queue and add a GPU request, such as qsub -I -q ice-epyc-gpu -l nodes=1:ppn=1:gpus=1. Adding :a100 at this time is optional.
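
Following the guidelines above, a minimal batch script for the AMD queues might look like this (the module name and program are placeholders; compile the program in an interactive AMD job first):

```shell
#!/bin/bash
#PBS -N amd-example
#PBS -q ice-epyc-cpu
#PBS -l nodes=1:ppn=4,pmem=7gb,walltime=1:00:00
#PBS -j oe

# Run from the directory where qsub was invoked
cd $PBS_O_WORKDIR

# Load a module available on the AMD nodes (name is illustrative)
module load gcc

# Run a binary compiled in an interactive AMD job
./my_amd_program
```

Submit it from a login node with qsub, e.g., qsub my_amd_job.pbs.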

Accessing Computational Resources via Jobs

Request resources from the scheduler to be assigned space on a compute node. For all types of job submissions, the scheduler will assign space to you when it becomes available. Batch and interactive jobs both wait in the same queues for available space. For all job submissions, make sure to specify a queue from the list above.

Batch Jobs

Batch jobs are for "submit and forget" workflows. Write a PBS script with all the commands you need to run, then submit your request to the scheduler with qsub. Batch jobs are ideal for larger (many CPU) and longer (many hour) computations.
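
A minimal PBS script might look like the following (the queue, resources, module, and program name are illustrative; adjust them for your course and workload):

```shell
#!/bin/bash
#PBS -N my-batch-job
#PBS -q pace-ice
#PBS -l nodes=1:ppn=8,walltime=4:00:00
#PBS -j oe

# Run from the directory where qsub was invoked
cd $PBS_O_WORKDIR

# Load whatever your job needs (module name is illustrative)
module load anaconda3

# The command(s) to run
python my_analysis.py
```

Save this as, e.g., job.pbs and submit it with qsub job.pbs; check on it afterward with qstat -u <username>.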

Interactive Jobs

Interactive jobs allow interactive use, so you can work "live" and provide additional input as your computations run. Please use interactive jobs instead of the login nodes for intensive computations. ICE offers both command-line interactive jobs and graphical interactive jobs with Open OnDemand (including Jupyter). Graphical interactive jobs are required if you need a graphical user interface (GUI).

Command-Line Interactive Jobs

You can use a command-line interactive job to work on the command line on a compute node. This is ideal for avoiding overuse of the login node while compiling and running test codes.
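
For example, the following requests a 1-hour interactive session with 4 cores on the pace-ice queue (adjust the queue and resources as needed):

```shell
# Request an interactive shell on a compute node;
# the prompt changes once the job starts
qsub -I -q pace-ice -l nodes=1:ppn=4,walltime=1:00:00

# ...compile and test on the compute node as usual, then leave with:
exit
```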

Open OnDemand

PACE is pleased to offer Open OnDemand on both PACE-ICE and COC-ICE, facilitating easier access to advanced computing resources for students.

OnDemand allows access to PACE via a web browser. It is especially well-suited to graphical interactive jobs, including Jupyter notebooks (for Python or other languages) and anything using a graphical desktop interface.

Visit our OOD guide for details.

Monitoring Tools

  • Check the status of your jobs with qstat. The command qstat -u <username> with your GT username, e.g., qstat -u gburdell3, will show all of your jobs in the queue. You can also use this to check the job number.
  • Cancel a job with qdel and the job number.
  • Check how busy a queue is with pace-check-queue and the queue name, e.g., pace-check-queue pace-ice-gpu.
  • Use pace-quota to check on your storage utilization.