The iDiv High-Performance Computing (HPC) cluster is a large parallel computing system. It provides computational power for highly demanding tasks such as large simulations or the analysis of huge amounts of data. The HPC cluster consists of powerful computers connected via a fast network and attached to a large shared storage system. The cluster is located at the UFZ in Leipzig and is available to all iDiv scientists.
The main applications of the cluster are compute-intensive tasks and big-data analyses, e.g. of remote sensing data or high-throughput sequencing data. Examples of such time-consuming analyses include:
- remote sensing satellite image processing
- classification of animals in images and videos from camera traps
- de novo assembly
- analysis of single nucleotide polymorphisms (SNPs)
- analysis and modeling in population genetics
The main compute hardware of the HPC cluster comprises a) 44 compute nodes with dual-socket Intel Xeon E5-2690 v4 CPUs and 256 Gigabytes of DDR4 main memory, two of which include NVIDIA Tesla K80 GPGPUs, and b) 27 compute nodes with dual-socket Intel Xeon Gold 6148 CPUs and up to 1,536 Gigabytes of DDR4 main memory, two of which include NVIDIA Tesla V100 GPGPUs. The central network component of the cluster is an Intel Omni-Path 100 Series high-performance interconnect, providing all compute nodes with non-blocking bandwidth of 100 Gigabit per second. All compute nodes share a 2.5 Petabyte IBM Spectrum Scale file system. The system achieved over 35 teraFLOPS in the High-Performance LINPACK (HPL) benchmark.
In summary, the raw numbers are:
- 71 compute nodes
- 2312 CPU cores and 27.4 Terabyte main memory
- 20224 CUDA cores and 80 Gigabyte graphics memory
- 2.5 Petabyte storage
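The summary figures above can be cross-checked against the per-node specifications. A minimal sketch, assuming the usual vendor core counts (14 cores per Xeon E5-2690 v4, 20 per Xeon Gold 6148, 4,992 CUDA cores and 24 GB per Tesla K80, 5,120 CUDA cores and 16 GB per Tesla V100), which are not stated in the text itself:

```python
# Cross-check of the cluster's summary figures from the per-node hardware.
# Per-model core and memory counts below are vendor specifications
# (assumptions, not taken from this document).

nodes = 44 + 27                          # compute nodes of both generations
cpu_cores = 44 * 2 * 14 + 27 * 2 * 20    # dual-socket: 2 CPUs per node
cuda_cores = 2 * 4992 + 2 * 5120         # two K80 nodes, two V100 nodes
gpu_mem_gb = 2 * 24 + 2 * 16             # graphics memory in Gigabytes

print(nodes, cpu_cores, cuda_cores, gpu_mem_gb)  # 71 2312 20224 80
```

The computed totals match the list above, which suggests the 27.4 Terabyte main-memory figure likewise aggregates the per-node memory across both node generations.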
Christian Krause (HPC Cluster Admin)