Available GPUs
The following NVIDIA GPUs are currently available as part of the DCC managed HPC clusters:
# GPUs | Name | Year | Architecture | CUDA cap. | CUDA cores | Clock MHz | Mem GiB | SP peak GFlops | DP peak GFlops | Peak GB/s |
---|---|---|---|---|---|---|---|---|---|---|
8 | Tesla M2050 | 2012 | GF100 (Fermi) | 2.0 | 448 | 575 | 2.62 | 1030 | 515 | 148.4 |
6 | Tesla M2070Q | 2012 | GF100 (Fermi) | 2.0 | 448 | 575 | 5.25 | 1030 | 515 | 150.3 |
2 | *GeForce GTX 680 | 2012 | GK104-400 (Kepler) | 3.0 | 1536 | 1058 | 1.95 | 3090 | 128 | 192.2 |
3 | Tesla K20c | 2013 | GK110 (Kepler) | 3.5 | 2496 | 745 | 4.63 | 3524 | 1175 | 208 |
5 | Tesla K40c | 2013 | GK110B (Kepler) | 3.5 | 2880 | 745 / 875 | 11.17 | 4291 / 5040 | 1430 / 1680 | 288 |
8 | Tesla K80c (dual) | 2014 | GK210 (Kepler) | 3.7 | 2496 | 562 / 875 | 11.17 | 2796 / 4368 | 932 / 1456 | 240 |
1 | *GeForce GTX TITAN X | 2015 | GM200-400 (Maxwell) | 5.2 | 3072 | 1076 | 11.92 | 6144 | 192 | 336 |
8 | *TITAN X | 2016 | GP102 (Pascal) | 6.1 | 3584 | 1417 / 1531 | 11.90 | 10157 / 10974 | 317.4 / 342.9 | 480 |
14 | Tesla V100 | 2017 | GV100 (Volta) | 7.0 | 5120 | – | 16 | – | – | 900 |
*Please note that the NVIDIA consumer GPUs (GeForce GTX 680, GeForce GTX TITAN X, and TITAN X) do not support ECC.
In addition, we have 1 Xeon-Phi node with 2×Intel Xeon Phi 5110P accelerators (60 cores, 8 GB memory), which can be used for testing purposes.
Running interactively on GPUs
There are currently two nodes available for running interactive jobs on NVIDIA GPUs.
Node n-62-17-44 is installed with 2×NVIDIA Tesla M2070Q, which are based on the Fermi architecture (same as NVIDIA Tesla M2050).
To run interactively on this node, you can use the following command:
hpclogin1: $ gpush
This command executes a bash script that submits an interactive job to the gpushqueue queue.
Node n-62-18-47 is installed with 1×NVIDIA GeForce GTX TITAN X, 2×NVIDIA Tesla K20c, and 1×NVIDIA Tesla K40c, all based on the Kepler architecture (same as NVIDIA Tesla K80c and NVIDIA GeForce GTX 680).
To run interactively on this node, you can use the following command:
hpclogin1: $ k40sh
This command executes a bash script that submits an interactive job to the k40_interactive queue.
Please note that multiple users are allowed on these nodes, and all users will be able to access all the GPUs on the node. We have set the GPUs to the “Exclusive process” runtime mode, which means that you will encounter a “device not available” (or similar) error, if someone is using the GPU you are trying to access.
In order to avoid too many conflicts, we ask you to follow this code of conduct:
- Please monitor which GPUs are currently occupied using the command nvidia-smi, and predominantly select unoccupied GPUs (e.g., using cudaSetDevice()) for your application.
- If you need to run on all CPU cores, e.g., for performance profiling, please make sure that you are not disturbing other users.
- We kindly ask you to use the interactive nodes mainly for development, profiling, and short test jobs.
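Because the GPUs run in "Exclusive process" mode, a robust application can probe the devices in turn and settle on the first one that is free, rather than failing on a hard-coded device number. The following sketch uses the standard CUDA runtime API; the probing strategy itself is only a suggestion, not an official policy of the cluster:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Try each device in turn; in "Exclusive process" mode, context creation
// fails on a device that is already claimed by another process.
int pick_free_device(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess)
        return -1;

    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        // cudaFree(0) forces context creation without allocating memory;
        // it returns an error if the device is occupied by another process.
        if (cudaFree(0) == cudaSuccess)
            return dev;          // this device is now ours
        cudaDeviceReset();       // clean up the failed attempt
    }
    return -1;                   // all devices busy
}

int main(void)
{
    int dev = pick_free_device();
    if (dev < 0) {
        fprintf(stderr, "No free GPU available, please try again later.\n");
        return 1;
    }
    printf("Using GPU %d\n", dev);
    return 0;
}
```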
If you have further questions or issues using the GPUs please write to support@hpc.dtu.dk.
Requesting GPUs under LSF10
The syntax for requesting GPUs in our setup has changed from LSF9 to LSF10.
For submitting jobs into the LSF10-setup, please follow these instructions:
Using GPUs under LSF10
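For orientation, a minimal LSF10 batch script requesting a GPU might look like the sketch below. The queue name gpuv100, the resource values, and the executable name are placeholders; please check the "Using GPUs under LSF10" page for the queues and options actually available on our clusters.

```shell
#!/bin/sh
### Hypothetical queue name -- replace with an actual GPU queue
#BSUB -q gpuv100
#BSUB -J gputest
### Request one GPU in exclusive-process mode (LSF10 -gpu syntax)
#BSUB -gpu "num=1:mode=exclusive_process"
#BSUB -n 1
#BSUB -W 1:00
#BSUB -R "rusage[mem=4GB]"
#BSUB -o gpujob_%J.out
#BSUB -e gpujob_%J.err

nvidia-smi
./my_gpu_application   # placeholder for your own executable
```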