GPU nodes


All GPUs have been migrated to our LSF 10 setup running Scientific Linux 7.3. To access
them, connect to login2.hpc.dtu.dk or login2.gbar.dtu.dk, or use the LSF xterm within
ThinLinc.

Available GPUs

The following NVIDIA GPUs are currently available as part of the DCC managed HPC clusters:

# GPUs | Name            | Year | Architecture    | CUDA cap. | CUDA cores | Clock MHz   | Mem GiB | SP peak GFlops | DP peak GFlops | Peak GB/s
5      | Tesla K40c      | 2013 | GK110B (Kepler) | 3.5       | 2880       | 745 / 875   | 11.17   | 4291 / 5040    | 1430 / 1680    | 288
8      | Tesla K80c (dual) | 2014 | GK210 (Kepler) | 3.7      | 2496       | 562 / 875   | 11.17   | 2796 / 4368    | 932 / 1456     | 240
8*     | TITAN X         | 2016 | GP102 (Pascal)  | 6.1       | 3584       | 1417 / 1531 | 11.90   | 10157 / 10974  | 317.4 / 342.9  | 480
22     | Tesla V100      | 2017 | GV100 (Volta)   | 7.0       | 5120       | 1380        | 15.75   | 14131          | 7065           | 898
12     | Tesla V100-SXM2 | 2018 | GV100 (Volta)   | 7.0       | 5120       | 1530        | 31.72   | 15667          | 7833           | 898


*Please note that the TITAN X is a consumer GPU and does not support ECC memory.

In addition, we have 1 Xeon-Phi node with 2×Intel Xeon Phi 5110P accelerators (60 cores, 8 GB memory), which can be used for testing purposes.

Running interactively on GPUs


There are currently two kinds of nodes available for running interactive jobs on NVIDIA GPUs: Tesla V100 and Tesla V100-SXM2, both based on the Volta architecture. To run interactively on a Tesla V100 node, use the command

voltash

These nodes each have 2 Tesla V100 GPUs with 16 GB of memory per GPU.
To run interactively on a Tesla V100-SXM2 node, use the command

sxm2sh

These nodes each have 4 Tesla V100-SXM2 GPUs with 32 GB of memory per GPU.

Please note that multiple users are allowed on these nodes, and all users will be able to access all the GPUs on the node. We have set the GPUs to the “Exclusive process” runtime mode, which means that you will encounter a “device not available” (or similar) error, if someone is using the GPU you are trying to access.

To avoid conflicts, we ask you to follow this code of conduct:

  • Please monitor which GPUs are currently occupied using the command nvidia-smi and predominantly select unoccupied GPUs (e.g., using cudaSetDevice()) for your application.
  • If you need to run on all CPU cores, e.g., for performance profiling, please make sure that you are not disturbing other users.
  • We kindly ask you to use the interactive nodes mainly for development, profiling, and short test jobs.
  • Please submit ‘heavy’ jobs to the GPU queue, and do not use the interactive nodes for long-running or resource-intensive work.
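As a sketch of the first point above, the following shell helper picks the least-utilized GPU from `nvidia-smi` output and restricts an application to that device via the standard `CUDA_VISIBLE_DEVICES` environment variable (an alternative to calling `cudaSetDevice()` in code; `my_cuda_app` is a placeholder for your own binary):

```shell
#!/bin/sh
# Print the index of the least-utilized GPU, given CSV lines of the form
# "index, utilization %, memory used" as produced by the nvidia-smi query below.
pick_free_gpu() {
    sort -t, -k2 -n | head -n 1 | cut -d, -f1
}

# On a GPU node, query all GPUs and expose only the least-utilized one to
# the application (my_cuda_app is a placeholder):
#   CUDA_VISIBLE_DEVICES=$(nvidia-smi \
#       --query-gpu=index,utilization.gpu,memory.used \
#       --format=csv,noheader | pick_free_gpu) ./my_cuda_app
```

Note that this only reduces the chance of a collision: another user may still grab the same GPU between the query and your application start, in which case the “Exclusive process” mode will make your run fail with a device-not-available error.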

If you have further questions or issues using the GPUs please write to support@hpc.dtu.dk.

Requesting GPUs under LSF10 for non-interactive use

To submit jobs to the LSF 10 setup, please follow these instructions:
Using GPUs under LSF10
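As a minimal sketch of such a batch job, the script below requests one GPU in exclusive-process mode using the LSF 10 `-gpu` option. The queue name `gpuv100` and the resource values are assumptions for illustration; consult the instructions above for the options valid on this cluster.

```shell
#!/bin/sh
# Hypothetical LSF batch script -- queue name and resource values are
# assumptions; see "Using GPUs under LSF10" for the cluster's actual options.
#BSUB -q gpuv100                          # GPU queue (assumed name)
#BSUB -J gputest                          # job name
#BSUB -n 4                                # number of CPU cores
#BSUB -W 1:00                             # wall-clock limit, hh:mm
#BSUB -R "rusage[mem=4GB]"                # memory per core
#BSUB -gpu "num=1:mode=exclusive_process" # one GPU, exclusive-process mode
#BSUB -o gpu_%J.out                       # output file (%J = job ID)

nvidia-smi       # log which GPU the job was assigned
./my_cuda_app    # placeholder for your application
```

Submit the script with `bsub < jobscript.sh`.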