# Tensorflow-gpu-test

This repo includes a simple script to verify that the TensorFlow installation is correct, and to test whether the GPU device is recognized and used by TensorFlow.

It also serves as a basic tutorial on training models on the HPC and submitting Slurm jobs. The only thing that needs to change is the name of the Python file at the end of the 'gpu.slurm' script.
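The 'gpu.slurm' script itself is not reproduced here, but a minimal GPU batch script for Slurm generally looks something like the sketch below. The partition and module names are assumptions and should be adapted to your cluster; the `--gres` line uses the K20 GPU type mentioned later in this README.

```bash
#!/bin/bash
#SBATCH --job-name=gputest
#SBATCH --partition=gpu          # partition name is an assumption; check your cluster
#SBATCH --gres=gpu:K20:1         # request one K20 GPU
#SBATCH --output=slurm-%j.out    # %j expands to the Slurm job ID

module load cuda                 # module name is an assumption

python gputest.py
```

The Python file named on the last line is the only part that needs to change to run a different script.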

## Instructions

Once the conda virtual environment is set up with tensorflow-gpu:

```
$ conda install -c anaconda tensorflow-gpu
```

1. Submit the Slurm job (script taken from here: http://hpc.coventry.domains/software/cuda-and-gpu-use-on-hpc/submitting-gpu-based-job/):

```
$ sbatch gpu.slurm
```

This submits a Slurm job that runs the Python file 'gputest.py', which checks the TensorFlow installation and whether the GPU is being used.
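The repo's 'gputest.py' is not shown here, but a minimal sketch of such a check, assuming TensorFlow 2.x, could look like this (it is not the repo's exact script):

```python
# Minimal TensorFlow installation / GPU check (a sketch, not the repo's exact script).
import tensorflow as tf

print("Tensorflow version:", tf.__version__)

# An empty list means no GPU is visible to TensorFlow.
gpus = tf.config.list_physical_devices("GPU")
print("Num GPUs available:", len(gpus))

if gpus:
    # e.g. "/device:GPU:0"
    print("Default GPU device:", tf.test.gpu_device_name())
else:
    print("No GPU detected; TensorFlow will fall back to the CPU.")
```

Run on a GPU node, this prints the version and default GPU device, matching the expected output shown below.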

2. An output file called 'slurm-XXXXXXX.out' should appear in your directory.
3. To display the contents of the file in the console:

```
$ cat NAME_OF_SLURM_OUTPUT_FILE
```
4. If the GPU and TensorFlow are installed correctly, you should see something like this at the end:

```
Tensorflow version: 2.4.1
Default GPU device: /device:GPU:0
```
5. To verify that more than one GPU is being used (e.g. 2), first change a line in the 'gpu.slurm' file to:

```
#SBATCH --gres=gpu:K20:2
```

Then, after submitting the 'gpu.slurm' job as explained in step 1, you should see lines like these in your new output file:

```
Adding visible gpu devices: 0, 1
Tensorflow version: 2.4.1
Default GPU device: /device:GPU:0
```
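Beyond reading the log, a quick programmatic check (a sketch assuming TensorFlow 2.x; not part of the repo's 'gputest.py') is to create a `tf.distribute.MirroredStrategy`, which replicates computation across all visible GPUs:

```python
# Confirm that TensorFlow can actually use every visible GPU:
# MirroredStrategy creates one replica per available GPU.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
# With --gres=gpu:K20:2 this should report 2 (it falls back to 1 on a CPU-only machine).
print("Number of replicas in sync:", strategy.num_replicas_in_sync)
```

If the printed replica count matches the number of GPUs requested in the `--gres` line, both devices are usable by TensorFlow.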
