Updated 2019-02-14

Run Fluent on the Cluster - Batch Mode

Overview

  • Fluent can be run in either batch mode or interactively
  • In the PBS script, load ANSYS with module load ansys/17.0. You can use module avail ansys to see what versions of ANSYS are available.
  • In the computation part of your PBS script, enter the folder where you have stored the input files (using cd). If it is the same directory you are submitting the PBS script from, you can use cd $PBS_O_WORKDIR
  • To run Fluent on the input files, use fluent -t x -g < inputfile > outputfile, where < and > are shell redirections: the journal file is fed to Fluent on standard input and the screen output is captured in outputfile. There are additional options that will be covered later.
  • You MUST use the -t flag, and set it to the total number of processors you requested (nodes × ppn)
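Once the script below is prepared, submit it with qsub and check on it with qstat. A minimal sketch, assuming the script is saved as fluent.pbs (a placeholder name):

qsub fluent.pbs    # submit the batch job
qstat -u $USER     # check the status of your jobs in the queue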

Example PBS Script

#PBS -N fluentTest
#PBS -l nodes=1:ppn=8
#PBS -l pmem=8gb
#PBS -l walltime=2:00:00
#PBS -q force-6
#PBS -j oe
#PBS -o fluent.out

cd $PBS_O_WORKDIR                          # run from the directory the job was submitted from
module load ansys/17.0                     # make the fluent command available
fluent -t8 -g < inputfile > outputfile     # 8 processors, no GUI, journal file in, log out
  • The #PBS directives are standard, requesting 2 hours of walltime and 1 node with 8 cores. More on #PBS directives can be found in the PBS guide
  • $PBS_O_WORKDIR is simply a variable that holds the directory you submit the PBS script from. Make sure the journal (input) file you want to run is in the same directory you put the PBS script.
  • fluent -t 8 -g < inputfile > outputfile runs the input file:
    • -t 8: specifies the number of processors to use. Must be set to the number you requested when you submitted the PBS job (nodes × ppn).
    • -g: tells Fluent to run without the GUI
    • inputfile: a journal file that contains the same Fluent commands you would type interactively. It can be saved from a previous session or created in a text editor; see the sketch after this list
    • outputfile: where anything normally printed to the screen will be stored (such as reports), along with errors
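As a sketch, a journal file might look like the following. The file names mycase.cas and mycase.dat are placeholders, and the exact TUI commands depend on your case:

; read the case file
/file/read-case mycase.cas
; initialize the solution and run 100 iterations
/solve/initialize/initialize-flow
/solve/iterate 100
; save the results and exit
/file/write-data mycase.dat
exit yes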

Additional Options

  • 2d: run the two-dimensional, single-precision solver
  • 3d: run the three-dimensional, single-precision solver
  • 2ddp: run the two-dimensional, double-precision solver
  • 3ddp: run the three-dimensional, double-precision solver
  • For an MPI job, you MUST specify the number of processes using -t and the machinefile using -cnf= (under PBS, the machinefile for your job is available as $PBS_NODEFILE; see the sketch below)
    • -mpi=pcmpi for a smaller number of nodes
    • -mpi=intel for a larger number of nodes
    • -pib to run over the InfiniBand interconnect
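Putting these together, a multi-node MPI run might look like the following sketch, assuming a nodes=2:ppn=8 request and the 3D double-precision solver (note that the solver version comes first on the command line):

# 3ddp solver on 16 cores (2 nodes x 8), Intel MPI over InfiniBand
fluent 3ddp -t16 -g -mpi=intel -cnf=$PBS_NODEFILE -pib < inputfile > outputfile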