German Centre for Integrative Biodiversity Research (iDiv)
Halle-Jena-Leipzig
 

High-Performance Computing (HPC) Cluster

[Cluster diagram: orange parts belong to iDiv.]
Compute Nodes: Users submit jobs to a batch-queueing system (SGE), which automatically distributes them to the compute servers according to a set of rules.
Frontend Nodes: One node for login, job submission, and software testing; three nodes for tasks that cannot be handled by the queueing system.
Data Storage: A cluster file system (GPFS) allows all nodes to access the same files simultaneously with high performance.
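As a minimal sketch of how job submission via SGE works: a job is an ordinary shell script with embedded `#$` directives that request resources. The job name, time, memory, and parallel-environment values below are illustrative assumptions, not the cluster's actual configuration; check the cluster documentation for the real queue and resource names.

```shell
#!/bin/bash
# Minimal SGE batch script (illustrative; resource values are assumptions).
#$ -N my_analysis        # job name
#$ -cwd                  # run in the submission directory
#$ -l h_rt=01:00:00      # wall-clock limit: 1 hour
#$ -l h_vmem=4G          # memory per slot
#$ -pe smp 4             # request 4 cores on one node

SLOTS="${NSLOTS:-1}"     # SGE sets NSLOTS at run time; default to 1 elsewhere
echo "Running on $(hostname) with $SLOTS slot(s)"
# ./my_analysis --threads "$SLOTS" input.dat   # hypothetical program call
```

The script would be submitted with `qsub`, and its state monitored with `qstat`, after which SGE assigns it to a compute node according to the requested resources.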
The HPC cluster at the UFZ was extended in May 2014.

The iDiv High-Performance Computing (HPC) cluster is a large parallel computing system. It provides computational power for highly demanding tasks such as large simulations or the analysis of huge amounts of data. The HPC cluster consists of powerful computers which are connected via a fast network and attached to a large shared storage system. The cluster is located at the UFZ in Leipzig and is available to all iDiv scientists.

Applications

The main applications of the cluster are compute-intensive tasks and big-data analyses, e.g. of remote-sensing data or high-throughput sequencing data. Examples of such time-consuming analyses include:

  • remote sensing satellite image processing
  • classification of animals in images and videos from camera traps
  • de novo assembly
  • analysis of single nucleotide polymorphisms (SNPs)
  • analysis and modeling in population genetics

Resource Overview

The main compute hardware of the HPC cluster comprises a) 44 compute nodes with dual-socket Intel Xeon E5-2690 v4 CPUs and 256 Gigabytes of DDR4 main memory, two of which include NVIDIA Tesla K80 GPGPUs, and b) 27 compute nodes with dual-socket Intel Xeon Gold 6148 CPUs and up to 1,536 Gigabytes of DDR4 main memory, two of which include NVIDIA Tesla V100 GPGPUs. The central network component of the cluster is an Intel Omni-Path 100 Series high-performance interconnect, providing all compute nodes with non-blocking 100 Gigabit per second bandwidth. All compute nodes share a 2.5 Petabyte IBM Spectrum Scale file system. The system achieved over 35 teraFLOPS in the High-Performance LINPACK (HPL) benchmark.

In sum, the raw numbers are:

  • 71 compute nodes
  • 2,312 CPU cores and 27.4 Terabytes of main memory
  • 20,224 CUDA cores and 80 Gigabytes of graphics memory
  • 2.5 Petabyte storage
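The CPU and GPU totals follow from the per-node specifications above, assuming the usual vendor figures (these core and memory counts are assumptions drawn from the part names, not stated in the text): 14 cores per Xeon E5-2690 v4, 20 per Xeon Gold 6148, 4,992 CUDA cores and 24 GB per Tesla K80, and 5,120 CUDA cores and 16 GB per Tesla V100:

```shell
# Cross-check the raw totals from the per-node specs
# (vendor core/memory figures per part are assumptions).
nodes=$(( 44 + 27 ))                 # Broadwell + Skylake nodes
cores=$(( 44*2*14 + 27*2*20 ))       # dual-socket: 2 CPUs per node
cuda=$(( 2*4992 + 2*5120 ))          # two K80 nodes + two V100 nodes
gpumem=$(( 2*24 + 2*16 ))            # GPU memory in Gigabytes
echo "$nodes nodes, $cores cores, $cuda CUDA cores, $gpumem GB GPU memory"
# → 71 nodes, 2312 cores, 20224 CUDA cores, 80 GB GPU memory
```

The main-memory total (27.4 Terabytes) cannot be derived the same way, since the Skylake nodes have varying amounts of memory ("up to 1,536 Gigabytes").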

Contact

Christian Krause (HPC Cluster Admin)

christian.krause@idiv.de

iDiv is a research centre of the DFG.