This repo includes a simple script to verify that the TensorFlow installation is correct and to test whether a GPU device is recognized and used by TensorFlow.
It also serves as a basic tutorial on training models on the HPC and submitting slurm jobs. The only thing that needs to change is the name of the Python file at the end of the 'gpu.slurm' script.
Once the conda virtual environment is set up with tensorflow-gpu:
$ conda install -c anaconda tensorflow-gpu
- submit the slurm job (script taken from here: http://hpc.coventry.domains/software/cuda-and-gpu-use-on-hpc/submitting-gpu-based-job/)
$ sbatch gpu.slurm
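The 'gpu.slurm' script itself is not reproduced in this README; a typical GPU batch script of the kind described in the linked tutorial looks roughly like this (the job name, time limit, and comments are assumptions that depend on your cluster):

```shell
#!/bin/bash
#SBATCH --job-name=gputest        # job name shown in the queue (assumed)
#SBATCH --gres=gpu:1              # request one GPU
#SBATCH --output=slurm-%j.out     # %j expands to the slurm job ID
#SBATCH --time=00:10:00           # wall-clock limit (assumed)

# Run the verification script; change this line to train your own model
python gputest.py
```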
This submits a slurm job that runs the Python file 'gputest.py', which checks the TensorFlow installation and whether a GPU is being used.
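The contents of 'gputest.py' are not shown in this README, but a minimal check along these lines (the function name is hypothetical) produces the version and device output described below:

```python
def describe_tf_gpu():
    """Report the TensorFlow version and the default GPU device, if any."""
    try:
        import tensorflow as tf
    except ImportError:
        return "tensorflow is not installed"
    device = tf.test.gpu_device_name()  # empty string when no GPU is visible
    gpu = device if device else "no GPU found"
    return f"Tensorflow version: {tf.__version__} Default GPU device: {gpu}"

print(describe_tf_gpu())
```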
- An output file named 'slurm-XXXXXXX.out' should appear in your directory
- To display the contents of the file in the console:
$ cat NAME_OF_SLURM_OUTPUT_FILE
- If TensorFlow and the GPU are set up correctly, you should see something like this at the end:
Tensorflow version: 2.4.1
Default GPU device: /device:GPU:0
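To check the log without eyeballing it, a small helper (hypothetical, not part of the repo) can scan the output file's text for that device line:

```python
import re

def gpu_detected(log_text):
    """Return True if the TensorFlow GPU check line appears in the slurm log."""
    return re.search(r"Default GPU device: /device:GPU:\d+", log_text) is not None

sample = "Tensorflow version: 2.4.1\nDefault GPU device: /device:GPU:0\n"
print(gpu_detected(sample))  # → True
```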
- To verify that more than one GPU is being used (e.g. 2):
First, change the GPU request line in the 'gpu.slurm' file so that it requests two GPUs.
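The README does not show the exact line; on typical slurm setups, including the one in the linked tutorial, the GPU request directive looks like this:

```shell
#SBATCH --gres=gpu:2    # request two GPUs instead of one
```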
Then, after submitting 'gpu.slurm' as explained in step 1, the new output file should contain lines like these:
Adding visible gpu devices: 0, 1
Tensorflow version: 2.4.1
Default GPU device: /device:GPU:0