Updated 2021-09-03

PACE-ICE (Instructional Cluster)

What is ICE?

Note

First things first: Yes, this resource is completely free and available to any Georgia Tech faculty member to teach classes or workshops. All you need to do is apply in advance with course-specific information. If you are interested, please read on.

In 2018, PACE initiated a project called “Instructional Cluster Environment (ICE)” to build instructional clusters in response to high campus-wide demand for educational computing resources. These clusters offer an educational environment identical to PACE's production research clusters, including access to the entire PACE scientific software repository, and are expected to give thousands of graduate and undergraduate students each year ample opportunity to gain first-hand scientific computing experience, including HPC and GPU programming.

In addition to credit-bearing courses, ICE enables PACE Research Scientists to develop or host hands-on tutorials and workshops to help GT researchers and students improve their computational skills.

Currently there are two ICE clusters with important differences:

  1. PACE-ICE: Campus-wide service, open to all academic courses outside the College of Computing at no charge; requires an application. Funded partly by a Technology Fee grant and partly by the PACE budget.
  2. COC-ICE: CoC-specific service for CoC classes and those cross-listed with a CoC school. Funded by a CoC Technology Fee grant. Access is managed by the College of Computing TSO.

How to Apply to Use ICE for Your Classes

If your class is offered by a school outside of CoC: please fill out this application.

If you are teaching a CoC class, or if your class is cross-listed with a CoC school: please contact David Mercer (david.mercer@cc.gatech.edu) from the TSO directly.

All GT faculty are potentially eligible to use ICE resources for their classes, but the application mechanism is necessary to ensure fair use of limited resources. Once the applications are collected, PACE identifies which classes can be supported. Decisions are communicated in advance so that orientation sessions can be scheduled and queue configurations completed.

PACE does not provide direct support for students in classes using ICE. Instructors and TAs are responsible for student support. Instructors, TAs, and distributed IT professionals supporting courses on ICE may contact PACE Support.

Important Dates for Fall 2021

Information Sessions:

Announcement of Approved Classes: August 3

Applications received after July 30 will be reviewed based on availability of resources. Please contact us if you have any questions.

Orientation Sessions: August-September

Course instructors and TAs will be offered orientation sessions, based on availability. You can view the ICE orientation slides.

ICE Resources

PACE-ICE
Quantity  CPU                                            Memory               GPU                      Local Scratch
13        Dual Xeon Gold 6226 (24 cores/node, 2.70 GHz)  192GB DDR4 2933 MHz  None                     1.6TB NVMe SSD
3         Dual Xeon Gold 6226 (24 cores/node, 2.70 GHz)  384GB DDR4 2933 MHz  2x Tesla V100 PCIe 16GB  1.9TB SATA SSD

Each node is connected via an InfiniBand HDR100 (100 Gbps) interface to our fabric.

Additionally, each user is provided with 15GB of storage, which is available from all nodes and is backed up daily.
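
For classes that use the GPU nodes, a short CUDA device query is a convenient way to confirm what an allocated job can actually see. The sketch below is illustrative and assumes a CUDA toolkit (nvcc) is available on the node; on one of the V100 nodes above it should report two devices with roughly 16GB of memory each.

    // device_query.cu: minimal CUDA device query (illustrative sketch)
    // Build with: nvcc device_query.cu -o device_query
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            // Usually means no GPU is visible, e.g. on a CPU-only node.
            printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        printf("Visible GPUs: %d\n", count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("  GPU %d: %s, %.1f GB, %d SMs\n", i, prop.name,
                   prop.totalGlobalMem / 1073741824.0, prop.multiProcessorCount);
        }
        return 0;
    }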

COC-ICE
Quantity  CPU                                            Memory               GPU                      Local Scratch
17        Dual Xeon Gold 6226 (24 cores/node, 2.70 GHz)  192GB DDR4 2933 MHz  None                     1.6TB NVMe SSD
1         Dual Xeon Gold 6226 (24 cores/node, 2.70 GHz)  384GB DDR4 2933 MHz  None                     8TB SAS HDD RAID
5         Dual Xeon Gold 6226 (24 cores/node, 2.70 GHz)  768GB DDR4 2933 MHz  None                     1.6TB NVMe SSD
4         Dual Xeon Gold 6248 (40 cores/node, 2.50 GHz)  192GB DDR4 2933 MHz  1x Tesla V100 PCIe 32GB  512GB SATA SSD
4         Dual Xeon Gold 6248 (40 cores/node, 2.50 GHz)  192GB DDR4 2933 MHz  4x Tesla V100 PCIe 32GB  512GB SATA SSD
6         Dual Xeon Gold 6226 (24 cores/node, 2.70 GHz)  192GB DDR4 2933 MHz  4x Quadro RTX 6000 24GB  1.6TB NVMe SSD
8         Dual Xeon Gold 6226 (24 cores/node, 2.70 GHz)  384GB DDR4 2933 MHz  2x Tesla V100 PCIe 16GB  1.9TB SATA SSD

Each node is connected via an InfiniBand HDR100 (100 Gbps) interface to our fabric.

Additionally, each user is provided with 15GB of storage, which is available from all nodes and is backed up daily.
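
Beyond enumerating devices, a minimal kernel launch verifies that an allocated GPU actually executes code. The vector-add sketch below is a generic smoke test, not course material; it assumes nothing about the cluster beyond a working nvcc, and managed memory is supported on all of the GPU models listed above.

    // vec_add.cu: minimal CUDA kernel smoke test (illustrative sketch)
    // Build with: nvcc vec_add.cu -o vec_add
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;             // 1M elements
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Managed (unified) memory keeps the sketch short.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        add<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        // Every element should be 3.0 if the kernel ran.
        printf("c[0] = %.1f, c[%d] = %.1f\n", c[0], n - 1, c[n - 1]);
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }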