HPC Clusters


The HPC hardware hosted by CC comprises general compute and storage resources available to all staff and students at DTU, as well as resources dedicated to specific purposes, with access limited to prioritized groups of users. Members of these groups are appointed by the "resource/cluster owner". Some of the dedicated compute resources are pooled and made available to other users when they are not utilized by the prioritized groups.

Current HPC compute resources (clusters) are:

For a description of the various queues, please refer to the "HPC Queues Parameters" page.

Besides the HPC compute nodes, "application servers" are provided in the G databar. For more information regarding the application servers, please consult the following link:

We also manage a number of accelerators, which are described at the following link:

 

For the above-mentioned nodes, we provide the following HPC storage resources:

  • Fraunhofer Parallel Cluster File System (FhGFS) also called “Scratch” – 62 TB – read more about the Scratch storage here
  • Fraunhofer Parallel Cluster File System (FhGFS) also called "work1" – 42 TB – read more about the work1 storage here
  • ZFS home also called “Linux Home” – 44 TB – read more about the Linux Home storage here
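Once logged in, a quick way to check the capacity and current usage of any of these storage areas is the standard `df`/`du` pair. Note that the path used below is a placeholder for illustration, not the cluster's actual mount point:

```shell
#!/bin/sh
# Show total capacity, used, and free space of the filesystem backing a
# directory. NOTE: /tmp is a placeholder -- substitute the real mount
# point of the Scratch, work1, or Linux Home area on the cluster.
df -h /tmp

# Show how much space the files under that directory occupy.
du -sh /tmp 2>/dev/null || true
```

This reports the whole filesystem's numbers, so on a shared area the free space shown is shared among all users, not a personal quota.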

 

The current DCC HPC architecture is illustrated in the figure below.

(Figure: Infrastructure_Overview)

Overview of the DCC HPC setup