Updated 2023-03-31
Run Java on the Cluster¶
Overview¶
- This guide will focus on running Java normally (serially). MPJ Express is available on the cluster if you wish to run Java in parallel.
Tips¶
- In the SBATCH script, load the Java Development Kit with module load jdk/1.8.0. You can use module spider jdk to see what versions of Java are available.
- Make sure to compile and run Java as you normally would. Include the compile and run steps after the #SBATCH directives in the SBATCH script (see the sketch after this list).
- As always, make sure your Java program and any files it needs are all in the same folder.
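For reference, here is a minimal sketch of how the load/compile/run steps might look after the #SBATCH directives; the file name helloWorld.java is borrowed from the walkthrough below, so substitute your own source file and class name.
# Load the Java Development Kit (run "module spider jdk" to list available versions)
module load jdk/1.8.0
# Compile the source file; this produces helloWorld.class in the current directory
javac helloWorld.java
# Run the compiled class
java helloWorld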
Walkthrough: Run Java on the Cluster¶
- This walkthrough will use a simple Java program that prints "Hello World".
- In order to create and run the program, we will need both a Java program file and an SBATCH script to submit our job - both can be found below.
- The SBATCH script can be found here. helloWorld.java can be found here.
- You can transfer the files to your account on the cluster if necessary to follow along. The file transfer guide may be helpful.
Part 1: The Java Program¶
helloWorld.java
public class helloWorld {
public static void main(String[] args) {
System.out.println("Hello World");
}
}
The SBATCH script below assumes your file is saved as helloWorld.java. If you choose a different file name, you will need to update the commands accordingly (see the example below).
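For instance, purely as a hypothetical illustration, if the program were instead saved as MyProgram.java (and declared public class MyProgram, since in Java the public class name must match the file name), the compile and run lines in the SBATCH script would become:
# Hypothetical file and class name, used only for illustration
javac MyProgram.java
java -cp $SLURM_SUBMIT_DIR MyProgram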
Part 2: The SBATCH Script¶
hellowWorldTest.sbatch
#!/bin/bash
#SBATCH -JhelloWorldJava
#SBATCH -A [Account]
#SBATCH -N1 --ntasks-per-node=2
#SBATCH --mem-per-cpu=2G
#SBATCH -t1
#SBATCH -qinferno
#SBATCH -oReport-%j.out
cd $SLURM_SUBMIT_DIR
module load jdk/1.8.0
javac helloWorld.java
java -cp $SLURM_SUBMIT_DIR helloWorld
- Note that the classpath MUST be specified to point to the absolute path of the directory containing the compiled class file (see the sketch after these notes).
- The #SBATCH directives are standard, requesting just 1 minute of walltime and 1 node with 2 cores. More on #SBATCH directives can be found in the Using Slurm on Phoenix Guide.
- $SLURM_SUBMIT_DIR is a variable that represents the directory you submit the SBATCH script from. Make sure the Java program you want to use (in this case, helloWorld.java) is in the same directory you put the SBATCH script in. Output files will also show up in this directory (in this case, the helloWorld.class file).
- module load jdk/1.8.0 loads the 1.8.0 version of the Java JDK. To see what JDK versions are available, run module spider jdk, and load the one you want.
- javac helloWorld.java is the line that compiles the Java file so it can be run.
- java -cp $SLURM_SUBMIT_DIR helloWorld runs the program.
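As a concrete illustration of the classpath note above, suppose (hypothetically) that the job was submitted from /storage/home/exampleuser/javaTest; that path is only a placeholder. The two commands below would then be equivalent:
# Both point the classpath at the directory containing helloWorld.class.
# /storage/home/exampleuser/javaTest is a placeholder path; $SLURM_SUBMIT_DIR
# resolves to whatever directory you ran sbatch from.
java -cp $SLURM_SUBMIT_DIR helloWorld
java -cp /storage/home/exampleuser/javaTest helloWorld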
Part 3: Submit Job and Check Status¶
- Make sure you're in the directory that contains the SBATCH script as well as the Java program.
- Submit as normal, with sbatch <sbatch scriptname.sbatch>. In this case: sbatch hellowWorldTest.sbatch (see the example session after this list).
- Check job status with squeue --job <jobID>, replacing <jobID> with the job ID returned after running sbatch.
- You can delete the job with scancel <jobID>, replacing <jobID> with the job ID returned after running sbatch.
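For reference, a typical submit-and-monitor session might look like the sketch below; the job ID 83855 is taken from the sample output in Part 4 and is only illustrative.
sbatch hellowWorldTest.sbatch   # prints a line like: Submitted batch job 83855
squeue --job 83855              # check the status of the job
scancel 83855                   # only needed if you want to cancel the job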
Part 4: Collecting Results¶
- In the directory where you submitted the SBATCH script, you should see a Report-<jobID>.out file, which contains the results of the job. Use cat Report-<jobID>.out or open the file in a text editor to take a look.
- Report-<jobID>.out should look like this:
---------------------------------------
Begin Slurm Prolog: Nov-17-2022 16:11:00
Job ID: 83855
User ID: svangala3
Account: phx-pace-staff
Job name: helloWorldJava
Partition: cpu-small
QOS: inferno
---------------------------------------
Hello World
---------------------------------------
Begin Slurm Epilog: Nov-17-2022 16:11:01
Job ID: 83855
Array Job ID: _4294967294
User ID: svangala3
Account: phx-pace-staff
Job name: helloWorldJava
Resources: cpu=2,mem=4G,node=1
Rsrc Used: cput=00:00:02,vmem=52260K,walltime=00:00:01,mem=0,energy_used=0
Partition: cpu-small
QOS: inferno
Nodes: atl1-1-02-012-32-1
---------------------------------------
- After the result files are produced, you can move the files off the cluster; refer to the file transfer guide for help (a minimal scp sketch follows this list).
- Congratulations! You successfully ran a Java program on the cluster.
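If you transfer the results with scp, a minimal sketch might look like the following; the username, login hostname, and remote path are placeholders, so substitute the actual values from the file transfer guide along with your own job ID.
# Run from your local machine; yourusername, <login-hostname>, the remote path, and <jobID> are placeholders.
scp yourusername@<login-hostname>:/path/to/submit/dir/Report-<jobID>.out .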