GPU Nodes

Available GPU partitions:

| Name | GPU | Compute Capability | VRAM | CUDA Cores | GFLOPS (SP) | GFLOPS (DP) | CPU Arch | # Nodes | # GPUs / node | Max. CPUs / node | Max. Mem / node | Max. Walltime |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| gpuv100 | Tesla V100 | 7.0 | 16GB HBM2 | 5120 | 14899 | 7450 | Skylake (Gold 5118) | 1 | 4 | 24 | 192 GB | 2 days |
| vis-gpu | Titan XP | 6.1 | 12GB GDDR5X | 3840 | 10790 | 337 | Skylake (Gold 5118) | 1 | 8 | 24 | 192 GB | 2 days |
| gpu2080 | GeForce RTX 2080 Ti | 7.5 | 11GB GDDR6 | 4352 | 11750 | 367 | Zen3 (EPYC 7513) | 5 | 8 | 32 | 240 GB | 7 days |
| gpuexpress | GeForce RTX 2080 Ti | 7.5 | 11GB GDDR6 | 4352 | 11750 | 367 | Zen3 (EPYC 7513) | 1 | 8 | 32 | 240 GB | 2 hours |
| gputitanrtx | Titan RTX | 7.5 | 24GB GDDR6 | 4608 | 12441 | 388 | Zen3 (EPYC 7343) | 1 | 4 | 32 | 240 GB | 7 days |
| gpua100 | A100 | 8.0 | 40GB HBM2 | 6912 | 19500 | 9700 | Zen3 (EPYC 7513) | 5 | 4 | 32 | 240 GB | 7 days |
| gpuhgx | A100 SXM | 8.0 | 80GB HBM2 | 6912 | 19500 | 9700 | Zen3 (EPYC 7343) | 2 | 8 | 64 | 990 GB | 7 days |
| gpu3090 | GeForce RTX 3090 | 8.6 | 24GB GDDR6X | 10496 | 29284 | 458 | Zen3 (EPYC 7413) | 2 | 8 | 48 | 240 GB | 7 days |
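
To see the current state of these partitions before submitting, you can query Slurm directly. This is a minimal sketch; the partition names are taken from the table above, so adjust them to the ones you actually need.

# overview of node states in selected GPU partitions
sinfo -p gpuv100,gpua100,gpu2080

# per-node view including GPU resources (GRES), CPU usage (A/I/O/T) and memory
sinfo -p gpua100 -N -o "%N %G %C %m"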

If you want to use a GPU for your computations:

- Use one of the gpu... partitions (see table above).
- Choose a GPU according to your needs: single- or double-precision performance, VRAM, etc.
- You can use the batch system to reserve only some of the GPUs on a node. Use Slurm's generic resources (GRES) for this: for example, #SBATCH --gres=gpu:1 requests a single GPU. Reserve CPUs accordingly, as shown in the sketch below.
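
For example, to reserve two of the four GPUs on a gpuv100 node together with a proportional share of its 24 CPU cores, the relevant lines could look like this (a sketch only; adjust the numbers to your job):

#SBATCH --partition=gpuv100
#SBATCH --gres=gpu:2          # 2 of the 4 GPUs on the node
#SBATCH --ntasks-per-node=12  # 12 of the 24 CPU cores, i.e. 6 cores per GPU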

Below you will find an example job script which reserves 6 CPUs and 1 GPU on the gpuv100 partition.

Note: Please do not reserve a whole GPU node unless you actually use all of its GPUs and need all of its CPU cores. Leaving unused resources free allows more users to work with the available GPUs at the same time.

gpuexpress

You can request at most 1 GPU, 4 CPU cores and 29G of RAM per job on this partition (so that one node can accommodate 8 jobs at the same time).
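
A job header that stays within these limits might look like the following sketch (assuming your job needs the per-job maximum):

#!/bin/bash

#SBATCH --partition=gpuexpress
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4   # at most 4 CPU cores per job
#SBATCH --gres=gpu:1          # at most 1 GPU per job
#SBATCH --mem=29G             # at most 29G of RAM per job
#SBATCH --time=02:00:00       # maximum walltime on gpuexpress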

GPU Job Script:

#!/bin/bash

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6
#SBATCH --partition=gpuv100
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00

#SBATCH --job-name=MyGPUJob
#SBATCH --output=output.dat
#SBATCH --mail-type=ALL
#SBATCH --mail-user=your_account@uni-muenster.de

# load modules with available GPU support (this is an example, modify to your needs!)
module load fosscuda
module load TensorFlow

# run your application
...
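
Submit the script with sbatch and check its state with squeue. Inside the job you can verify that only the requested GPU is visible, e.g. with nvidia-smi (the script name below is just an example):

sbatch gpu_job.sh    # submit the job script
squeue -u $USER      # check the state of your jobs

# inside the job script, e.g. right before starting your application:
nvidia-smi           # should list exactly the GPU(s) reserved via --gres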