Updated 2023-03-31

Gurobi

Run Gurobi on the Cluster - Batch Mode

Overview

  • There are a few options for running Gurobi:
    • The command-line tool, gurobi_cl (aka batch mode)
    • The Gurobi interactive shell (an extension of the Python shell)
    • As a Python library
  • This guide focuses on the command-line tool, gurobi_cl
  • Using gurobi_cl, you can submit jobs in batch mode (they run without supervision); a minimal invocation is sketched below
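For reference, here is roughly what the two command-line entry points look like (a minimal sketch; model.lp and model.sol are placeholder file names, and gurobi.sh is the usual launcher script for the interactive shell on Linux):

# Batch mode: solve a model file non-interactively and write the solution
gurobi_cl ResultFile=model.sol model.lp

# Interactive shell (not covered in this guide)
gurobi.sh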

Tips

  • In the SBATCH script, load Gurobi with module load gurobi/9.5.2. You can use module spider gurobi to see what versions of Gurobi are available (see the snippet after this list).
  • gurobi_cl reads model files such as .lp or .mps. If you build your model with one of Gurobi's language APIs (Python, C, C++, Java, MATLAB, R, etc.), write the model out to one of those file formats first. For more information on what gurobi_cl can run, refer to the Gurobi documentation.
  • Important: the line that runs Gurobi is gurobi_cl ResultFile=<filename>.sol Threads=4 <model_name>.lp, where <model_name>.lp is your model file and <filename>.sol is where the solution is written.
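A quick way to check what is available before writing the script (9.5.2 is simply the version used in this guide; use whatever module spider reports on your cluster):

# List the Gurobi versions installed as modules
module spider gurobi
# Load the version you want and confirm the command-line tool is on your PATH
module load gurobi/9.5.2
gurobi_cl --version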

Warning

You must set Threads to the number of processors you requested in the #SBATCH directives.
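For example, if the directives request one node with four tasks, the gurobi_cl line should also request four threads. These are the two lines that must stay in sync:

#SBATCH -N 1 --ntasks-per-node=4
gurobi_cl ResultFile=<filename>.sol Threads=4 <model_name>.lp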

Walkthrough: Run Gurobi on the Cluster

  • This walkthrough uses an example optimization problem taken directly from the Gurobi documentation
  • In short, the problem maximizes the total value of the coins minted, given a limited supply of each metal
  • Input model (from Gurobi Documentation): coins.lp
  • SBATCH script: gurobi.sbatch
  • You can transfer the files to your account on the cluster to follow along. The file transfer guide may be helpful.
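If you prefer the command line over a transfer client, a single scp from your local machine is usually enough (a sketch only: replace <username> with your account, and note that the login hostname and the home-directory destination are assumptions, so check the file transfer guide for the details that apply to you):

scp coins.lp gurobi.sbatch <username>@login-phoenix.pace.gatech.edu:~/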

Part 1: The SBATCH Script

#!/bin/bash
#SBATCH -JgurobiTest
#SBATCH -A [Account]
#SBATCH -N 1 --ntasks-per-node=4
#SBATCH --mem-per-cpu=2G
#SBATCH -t10
#SBATCH -qinferno
#SBATCH -oReport-%j.out

cd $SLURM_SUBMIT_DIR
module load gurobi/9.5.2
gurobi_cl ResultFile=coins.sol Threads=4 coins.lp
  • The #SBATCH directives are standard, requesting just 10 minutes of walltime and 1 node with 4 cores. Replace [Account] with your charge account. More on #SBATCH directives can be found in the Using Slurm on Phoenix Guide
  • $SLURM_SUBMIT_DIR is simply a variable that represents the directory you submit the SBATCH script from. Make sure the .lp model you want to run (in this case, coins.lp) is in the same directory as the SBATCH script.
  • module load gurobi/9.5.2 loads version 9.5.2 of Gurobi. To see what Gurobi versions are available, run module spider gurobi, and load the one you want
  • gurobi_cl runs the Gurobi command-line tool. ResultFile is an optional parameter that writes the solution to a .sol file. More info on parameters can be found in the documentation here
  • You must set Threads to the number of processors you requested in the #SBATCH directives at the top of the script (one way to keep the two in sync is sketched below).
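If you later change the number of cores you request, Threads has to change with it. One way to keep the two in sync is to read the core count from the environment Slurm sets up for the job instead of hard-coding it (a sketch, assuming --ntasks-per-node is set so that SLURM_NTASKS_PER_NODE is defined; it falls back to 4 otherwise):

# Match the thread count to the tasks-per-node requested in the #SBATCH directives
gurobi_cl ResultFile=coins.sol Threads=${SLURM_NTASKS_PER_NODE:-4} coins.lp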

Part 2: Submit Job and Check Status

  • Make sure you're in the directory that contains the SBATCH script
  • Submit as normal with sbatch; in this case, sbatch gurobi.sbatch
  • Check job status with squeue --job <jobID>
  • You can delete the job with scancel <jobID>, replacing <jobID> with the job ID returned after running sbatch
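Put together, a typical submit-and-check session looks like the following (the job ID printed by sbatch will differ; <jobID> is a placeholder):

# Submit the job; sbatch prints "Submitted batch job <jobID>"
sbatch gurobi.sbatch
# Check whether the job is pending, running, or finished
squeue --job <jobID>
# Only if you need to cancel the job
scancel <jobID>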

Part 3: Collecting Results

  • In the directory where you submitted the SBATCH script, you should see all the generated files, including coins.sol, gurobi.log, and Report-<jobID>.out
  • Use cat coins.sol or open the file in a text editor to take a look
  • coins.sol should look like this:
# Objective value = 113.45
Pennies 0
Nickels 0
Dimes 2
Quarters 53
Dollars 100
Cu 999.8
Ni 46.9
Zi 50
Mn 30
  • The optimization problem has been solved, and the optimal solution is to mint 100 dollar coins, 53 quarters, and 2 dimes.
  • After the result files are produced, you can move them off the cluster; refer to the file transfer guide (a sample scp command is sketched below)
  • Congratulations! You successfully ran the gurobi_cl tool on the cluster.
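As with the input files, the results can be copied back to your local machine from the command line. This is a sketch only: the login hostname is an assumption, and the remote path assumes the job was submitted from your home directory, so adjust both as needed.

# Run from your local machine: pull the solution file back down
scp <username>@login-phoenix.pace.gatech.edu:coins.sol .
# Quick check of the objective without opening the file
grep "Objective value" coins.sol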