Example OpenFOAM SLURM scripts for a parametric study on HPC
git clone https://github.coventry.ac.uk/aa3025/openfoam.git
E.g. you have one OpenFOAM case which you need to run N times for different values of some parameter (the Reynolds number, Re, in this example). You set up the case once as a template and launch the sweep for a given range of Re values; this submits N jobs, one per Re value. The scripts can be adapted to sweep a different parameter instead of Re.
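The sweep idea can be sketched as a simple submission loop. This is a hypothetical sketch, not the repository's actual start.sh: the function name, case-folder naming and the job-script name "job.sh" are assumptions.

```shell
# Hypothetical sketch of a sweep driver (the real start.sh may differ).
# Print one sbatch command per Reynolds number in $1 (space-separated list);
# the real driver would also copy the "template" folder and actually submit.
gen_sweep_cmds() {
    for RE in $1; do
        echo "sbatch --job-name=case_Re_${RE} job.sh ${RE}"
    done
}

# Dry-run demo: show the jobs a small range of Re values would produce.
gen_sweep_cmds "100 500 1000"   # prints three sbatch commands, one per Re
```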
-
Run "sh make_execs.sh" to make all .sh files in the "openfoam" folder executable.
-
Edit start.sh to change the range of Re values.
-
Set up the OpenFOAM case (e.g. controlDict, blockMeshDict, decomposeParDict, etc.) in the "template" folder.
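One common way to parametrise a template case is to keep a placeholder token in a dictionary file and substitute the swept value into each copied case. This is an illustrative sketch only: the placeholder name "RE_PLACEHOLDER", the target file and the function are assumptions, not necessarily what the repo's scripts do.

```shell
# Sketch of injecting the swept parameter into a copied template case
# (assumed mechanism: a "RE_PLACEHOLDER" token in a dictionary file).
set_case_re() {
    # $1 = case folder, $2 = Re value: replace the placeholder in-place.
    sed -i "s/RE_PLACEHOLDER/$2/g" "$1/constant/transportProperties"
}

# Demo with a throwaway mini-case:
mkdir -p demo_case/constant
echo "Re  RE_PLACEHOLDER;" > demo_case/constant/transportProperties
set_case_re demo_case 1000
cat demo_case/constant/transportProperties   # Re  1000;
```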
-
Start the sweep with "./start.sh": the script uses sbatch to submit a separate SLURM job to the queue for each Re value, creating a folder for each of them from the "template" case folder.
-
While a job is running, its results are saved in /scratch on the compute node's local disk (this avoids the slow shared "/home" bottleneck), so don't expect to see much in the submission folder apart from the slurm .out files. If needed, ssh to the target node and check its /scratch folder.
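The staging step inside a job script might look like the sketch below. The function name and folder layout are assumptions; the repo's scripts stage to /scratch on the compute node, while the demo takes the scratch base as a parameter so it can run anywhere.

```shell
# Sketch of staging a case to the node's fast local disk (assumed layout).
stage_case() {
    # $1 = case folder in the submission dir, $2 = scratch base dir.
    RUN_DIR="$2/${USER:-user}_$1"
    mkdir -p "$RUN_DIR"
    cp -r "$1"/. "$RUN_DIR"/   # copy the whole case to local disk
    echo "$RUN_DIR"            # the job then runs the solver in here
}

# Demo using a temporary dir in place of a real /scratch:
mkdir -p mycase/system
echo ok > mycase/system/controlDict
DEST=$(stage_case mycase "$(mktemp -d)")
ls "$DEST/system"   # controlDict
```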
-
When a job finishes, the results are reconstructed in parallel on all booked CPUs, the processor* folders are deleted, the results are compressed into a .tar.bz2 archive, the archive is copied back to the submission folder, and the scratch folders on the node are deleted.
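The archiving part of that epilogue can be sketched as follows. The reconstruction itself needs OpenFOAM on the node and is only indicated in a comment; the function and paths are assumptions for illustration, not the repo's exact commands.

```shell
# Sketch of the job epilogue's archiving step (assumed names/paths).
archive_case() {
    # $1 = case folder on scratch, $2 = destination (submission) folder.
    # In the real job, reconstruction runs first and the processor*
    # folders are removed before archiving (see the repo's scripts).
    tar -cjf "$2/$(basename "$1").tar.bz2" \
        -C "$(dirname "$1")" "$(basename "$1")"
}

# Demo: archive a throwaway case into the current folder.
mkdir -p scratch_demo/case_Re_100/0
touch scratch_demo/case_Re_100/0/U
archive_case scratch_demo/case_Re_100 .
ls case_Re_100.tar.bz2
```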
-
In the current implementation you cannot use more than one compute node per job, because reconstructPar needs access to all the processor* folders on the node's local disk.
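In a job-script header, that constraint would typically be expressed with SLURM directives like the fragment below (the core count is an example value, to be matched to the case's decomposition):

```shell
#!/bin/sh
#SBATCH --nodes=1     # reconstructPar needs all processor* dirs on one node
#SBATCH --ntasks=16   # example core count; match the case's decomposition
# ... staging, solver run, reconstruction and archiving follow here ...
```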