aa3025/openfoam

Example OpenFOAM SLURM scripts for a parametric study on HPC

git clone https://github.coventry.ac.uk/aa3025/openfoam.git

Suppose you have one OpenFOAM case that needs to be run N times for different values of some parameter (the Reynolds number, Re, in this example). You set up the case once as a template and launch the study for a given range of Re values; this submits N jobs, one per Re value. The scripts can be adapted to sweep a different parameter instead of Re.
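For illustration, a submission loop of the kind start.sh implements might look like the sketch below. The sweep values, the job.slurm wrapper, and the way the Re value is injected into the case are assumptions for the example, not the repository's actual contents.

    #!/bin/bash
    # Minimal sketch of a parametric submission loop (illustrative only; the
    # real logic lives in start.sh). One SLURM job is submitted per Re value.

    RE_VALUES="100 500 1000 2000"          # hypothetical sweep range

    for RE in $RE_VALUES; do
        CASE_DIR="case_Re_${RE}"
        cp -r template "$CASE_DIR"                       # clone the template case
        echo "Re ${RE};" > "$CASE_DIR/constant/ReValue"  # illustrative way to pass Re to the case
        sbatch --job-name="of_Re_${RE}" job.slurm "$CASE_DIR"   # hypothetical per-job script
    done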

  1. Run "sh make_execs.sh" to make all .sh files in the "openfoam" folder executable.

  2. Edit start.sh to set the range of Re values.

  3. Set up the OpenFOAM case (e.g. controlDict, blockMeshDict, decomposeParDict, etc.) in the "template" folder.

  4. Launch the study with "./start.sh". The script submits a separate SLURM job to the queue (via sbatch) for each Re value, creating a folder for each of them from the "template" case folder.

  5. While a job is running, results are saved to /scratch on the local disk of the compute node (this avoids the slow "/home" filesystem bottleneck), so don't expect to see much in the submission folder apart from the slurm .out files. If needed, ssh to the target node and check its /scratch folder instead.

  6. When a job finishes, the results are reconstructed in parallel on all booked CPUs, the processor* folders are deleted, the results are compressed into a .tar.bz2 archive, the archive is copied back to the submission folder, and the scratch folders on the node are removed (a sketch of this per-job workflow follows this list).

  7. In the current implementation you cannot use more than one compute node per job, because reconstructPar needs access to all the processor* folders on the local disk of the node.
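For reference, a minimal sketch of the per-job workflow described in steps 5 and 6 is given below. The solver (simpleFoam), paths, and resource numbers are assumptions for the example and do not reproduce the repository's actual job script; in particular, the real scripts reconstruct in parallel across the booked CPUs, whereas this sketch uses a plain serial reconstructPar for brevity.

    #!/bin/bash
    #SBATCH --nodes=1                # single node only (see step 7)
    #SBATCH --ntasks=16              # illustrative core count
    #SBATCH --time=24:00:00

    # Sketch of one parametric job: stage the case on node-local /scratch,
    # run the solver in parallel, reconstruct, archive, copy back, clean up.
    CASE_DIR="$1"                               # case folder created from "template"
    SCRATCH="/scratch/$USER/$SLURM_JOB_ID"      # node-local working directory

    mkdir -p "$SCRATCH"
    cp -r "$SLURM_SUBMIT_DIR/$CASE_DIR" "$SCRATCH/"
    cd "$SCRATCH/$CASE_DIR"

    blockMesh                                   # mesh and decompose on the node
    decomposePar
    mpirun -np "$SLURM_NTASKS" simpleFoam -parallel

    reconstructPar                              # serial reconstruction for simplicity
    rm -rf processor*                           # drop per-processor folders

    cd "$SCRATCH"
    tar -cjf "${CASE_DIR}.tar.bz2" "$CASE_DIR"      # compress results
    cp "${CASE_DIR}.tar.bz2" "$SLURM_SUBMIT_DIR/"   # copy archive back to submission folder
    rm -rf "$SCRATCH"                               # remove node-local scratch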
