Hardware
Topology

HPC Interconnect

In any High Performance Computing or supercomputer system, the HPC interconnect is of paramount importance. In contrast to server farms or cloud computing, the interconnect provides fast, low-latency, high-throughput communication between all compute nodes.

MPI programs use the interconnect to harness many compute nodes in parallel for large and complex calculations. Over the interconnect, the MPI tasks exchange data, state, and intermediate results with one another.
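As an illustration of this exchange pattern, the sketch below mimics two MPI tasks trading partial results. Note this is only an analogy: real jobs on the cluster would use an MPI library (e.g. MPI_Send/MPI_Recv in C, or mpi4py) over the InfiniBand fabric, whereas here a local pipe between two Python processes stands in for the interconnect so the example runs anywhere.

```python
# Analogy only: two "tasks" each compute a partial result, send it to
# their peer, and receive the peer's intermediate result -- the same
# pattern MPI ranks follow over the HPC interconnect.
from multiprocessing import Pipe, Process


def worker(rank, conn):
    # Each task computes its own partial sum over a disjoint range ...
    partial = sum(range(rank * 100, (rank + 1) * 100))
    # ... sends it to the peer, then receives the peer's result.
    conn.send((rank, partial))
    peer_rank, peer_partial = conn.recv()
    print(f"task {rank} received {peer_partial} from task {peer_rank}")


if __name__ == "__main__":
    a, b = Pipe()  # the stand-in for the interconnect link
    p0 = Process(target=worker, args=(0, a))
    p1 = Process(target=worker, args=(1, b))
    p0.start(); p1.start()
    p0.join(); p1.join()
```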

Storage HDR (formerly EDR)

InfiniBand HDR: The storage system (Home, Projects, Scratch) is the core of the interconnect fabric and manages the whole system based on HDR ("High Data Rate" = 200 GBit/s); it was formerly connected via EDR.

Lichtenberg II Cluster Stage 2

InfiniBand HDR: in cluster stage 2 of the Lichtenberg II cluster, the compute and login nodes have Mellanox ConnectX cards of the "HDR100" standard, i.e. "High Data Rate" at 100 GBit/s per port.

While HDR specifies up to 200 GBit/s, the stage 2 nodes only have cards with 100 GBit/s.
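To put these line rates into perspective, the small calculation below converts them into byte rates. These are raw signalling rates, ignoring protocol and encoding overhead, so real application throughput will be somewhat lower.

```python
# Convert InfiniBand line rates (GBit/s) to raw byte rates (GB/s, decimal).
def gbit_to_gbyte(gbit_per_s):
    return gbit_per_s / 8  # 8 bits per byte

hdr = gbit_to_gbyte(200)     # full HDR: 25.0 GB/s per port
hdr100 = gbit_to_gbyte(100)  # HDR100 (as in the stage 1/2 nodes): 12.5 GB/s
print(hdr, hdr100)
```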

The internal IB topology of this stage 2 is 1:1 non-blocking: any given (group of) compute nodes can communicate unimpeded by any other (group of) compute nodes' network usage. In other words, nowhere in the IB network of LB2A2 is the number of interconnect links lower than "one per compute node".

Lichtenberg II Cluster Stage 1

InfiniBand HDR: in cluster stage 1 of the Lichtenberg II cluster, the compute and login nodes have Mellanox ConnectX cards of the "HDR100" standard, i.e. "High Data Rate" at 100 GBit/s per port.

While HDR specifies up to 200 GBit/s, the cluster stage 1 nodes only have cards with 100 GBit/s.

The internal IB topology of this cluster stage 1 is 1:1 non-blocking: any given (group of) compute nodes can communicate unimpeded by any other (group of) compute nodes' network usage. In other words, nowhere in the IB network of LB2A1 is the number of interconnect links lower than "one per compute node".

The former island structure of the LB1, in contrast to the modern, flat topology of the Lichtenberg II.

Further details on the Lichtenberg HPC components can be found in our Hardware Overview.