Island Topology

Interconnect

In all high-performance computing (HPC) and supercomputer systems, the interconnect is of paramount importance. Unlike in server farms or cloud computing, the interconnect provides fast, low-latency, high-throughput communication between all compute nodes.

MPI programs use the interconnect to employ many compute nodes in parallel for large and complex calculations. Via the interconnect, the MPI tasks exchange data, state, and intermediate results with one another.
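
As a minimal sketch of such an exchange (assuming a C MPI installation with the usual mpicc/mpirun wrappers; the partial-result computation is an illustrative placeholder), each task below contributes an intermediate value and receives the combined result over the interconnect:

    /* Minimal sketch: MPI tasks combining intermediate results over the
     * interconnect. Compile with mpicc, run with e.g.
     * "mpirun -n 4 ./allreduce_demo". */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each task computes a partial result (illustrative placeholder). */
        double partial = (double)(rank + 1);

        /* All tasks exchange and sum their partial results; between nodes,
         * this data travels over the HPC interconnect. */
        double total = 0.0;
        MPI_Allreduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum over %d tasks: %g\n", size, total);

        MPI_Finalize();
        return 0;
    }

MPI_Allreduce is a collective operation involving every task, so its performance depends directly on the latency and bandwidth of the interconnect described above.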

InfiniBand EDR

The storage system (Home, Projects, Scratch) sits at the core of the interconnect fabric, which runs on EDR ("Enhanced Data Rate" = 100 Gbit/s).

InfiniBand FDR-14

In phase II, the nodes have Mellanox ConnectX cards of the “FDR-14” standard.

"Fourteen Data Rate" = 14 Gbit/s per lane; a standard 4x link thus provides 4 × 14 = 56 Gbit/s.
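
For reference, a small sketch of how the quoted figures arise. The per-lane signaling rates (14.0625 and 25.78125 Gbit/s) and the 64b/66b encoding factor come from the InfiniBand specifications; the standard 4x link width is assumed:

    /* Small sketch: nominal InfiniBand link rates for the generations
     * mentioned above, from per-lane signaling rate, 4x link width,
     * and 64b/66b encoding efficiency. */
    #include <stdio.h>

    int main(void) {
        const int lanes = 4;               /* standard 4x link width      */
        const double enc = 64.0 / 66.0;    /* 64b/66b encoding efficiency */
        const double fdr_lane = 14.0625;   /* Gbit/s per lane, FDR-14     */
        const double edr_lane = 25.78125;  /* Gbit/s per lane, EDR        */

        /* FDR 4x: 56.25 Gbit/s signaling, ~54.5 Gbit/s payload data */
        printf("FDR 4x: %.2f Gbit/s signaling, %.2f Gbit/s data\n",
               lanes * fdr_lane, lanes * fdr_lane * enc);

        /* EDR 4x: 103.125 Gbit/s signaling, exactly 100 Gbit/s data */
        printf("EDR 4x: %.2f Gbit/s signaling, %.2f Gbit/s data\n",
               lanes * edr_lane, lanes * edr_lane * enc);
        return 0;
    }

The encoding factor explains why EDR is quoted as exactly 100 Gbit/s of payload data, while the FDR figure of 56 Gbit/s refers to the raw signaling rate.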

The former island structure of Lichtenberg I (LB1), in contrast to the modern, flat topology of Lichtenberg II.

Details on the Lichtenberg HPC can be found in our Hardware Overview.