Run Fluent on the Cluster - Batch Mode
- Fluent can be run in either batch mode or interactively.
- In the PBS script, load ANSYS with `module load ansys/17.0`. You can use `module avail ansys` to see what versions of ANSYS are available.
- In the computation part of your PBS script, enter the folder where you have stored the input files (using `cd`). If it is in the same directory you are submitting the PBS script from, you can use `cd $PBS_O_WORKDIR`.
- To run Fluent on the input files, use `fluent -t x -g <inputfile> outputfile`. There are additional options that will be covered later.
- You MUST use the `-t` flag, and set it to the number of processors you requested (nodes x ppn).
Example PBS Script
```
#PBS -N fluentTest
#PBS -l nodes=1:ppn=8
#PBS -l pmem=8gb
#PBS -l walltime=2:00:00
#PBS -q force-6
#PBS -j oe
#PBS -o fluent.out

cd $PBS_O_WORKDIR
module load ansys/17.0
fluent -t8 -g <inputfile> outputfile
```
- The `#PBS` directives are standard, requesting 2 hours of walltime and 1 node with 8 cores. More on `#PBS` directives can be found in the PBS guide.
- `$PBS_O_WORKDIR` is simply a variable that represents the directory you submit the PBS script from. Make sure the input files you want Fluent to read are in the same directory you put the PBS script.
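As a sketch, submitting the job could look like the following; the directory and script names here are hypothetical:

```shell
# Submit from the directory containing the input files, so that
# $PBS_O_WORKDIR points there when the job runs.
cd ~/fluent-case    # hypothetical directory holding the input files
qsub fluent.pbs     # hypothetical filename for the PBS script above
qstat -u $USER      # check the job's status in the queue
```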
- `fluent -t 8 -g <inputfile> outputfile` runs the input file:
  - `-t 8`: specifies the number of processors to use. It must be set to the number you requested when you submitted the PBS job.
  - `-g`: tells Fluent to run without the GUI.
  - `<inputfile>`: a journal file that contains the same Fluent commands you would type interactively. It can be saved from a previous session or created in a text editor. More on input files here.
  - `outputfile`: where anything normally printed to the screen (such as reports) will be stored, along with any errors.
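For reference, a minimal journal file might look like this; the commands shown are typical of Fluent's text interface, and the filenames are hypothetical:

```
; input.jou -- a minimal Fluent journal (filenames are hypothetical)
/file/read-case-data mycase.cas   ; load the case and its data
/solve/iterate 500                ; run 500 iterations
/file/write-case-data result.cas  ; save the resulting case and data
/exit yes                         ; quit Fluent, confirming the prompt
```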
- The first argument to `fluent` selects the solver version:
  - `2d`: run the 2-dimensional, single-precision solver
  - `3d`: run the 3-dimensional, single-precision solver
  - `2ddp`: run the 2-dimensional, double-precision solver
  - `3ddp`: run the 3-dimensional, double-precision solver
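Putting the pieces together, a batch invocation of the 3-dimensional double-precision solver could look like this (the journal and log filenames are hypothetical):

```shell
# 3-D double-precision solver on 8 processors, no GUI;
# the journal is fed on stdin and all screen output is captured.
fluent 3ddp -t8 -g < input.jou > output.log
```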
- For an MPI job, you MUST specify the number of processes using `-t` and the machinefile:
  - `-mpi=pcmpi` for a smaller number of nodes
  - `-mpi=intel` for a larger number of nodes
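As a sketch, an MPI run across two 8-core nodes might look like the following; the `-cnf` flag (which passes the machinefile) and the filenames are assumptions to verify against your Fluent version:

```shell
# 16 processes across the nodes PBS assigned; $PBS_NODEFILE is the
# machinefile PBS generates. The -cnf flag and filenames are illustrative.
fluent 3ddp -t16 -g -mpi=intel -cnf=$PBS_NODEFILE < input.jou > output.log
```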