Hardware Currently Available

Login Nodes Currently Available

(as of Aug 2024)

Lichtenberg II Stage 1:

  • lcluster13.hrz.tu-darmstadt.de + GPU NVIDIA Tesla T4
  • lcluster14.hrz.tu-darmstadt.de
  • lcluster15.hrz.tu-darmstadt.de + GPU NVIDIA Tesla T4
  • lcluster16.hrz.tu-darmstadt.de
  • lcluster17.hrz.tu-darmstadt.de + GPU NVIDIA Tesla T4
  • lcluster18.hrz.tu-darmstadt.de
  • lcluster19.hrz.tu-darmstadt.de + GPU NVIDIA Tesla T4
  • lcluster20.hrz.tu-darmstadt.de [reserved for testing]
    • 2x Xeon-AP, 96 CPU cores (AVX512)
    • 768 GByte RAM, Red Hat Enterprise Linux 8.10

Lichtenberg II Stage 2:

  • lcluster1.hrz.tu-darmstadt.de
  • lcluster2.hrz.tu-darmstadt.de
  • lcluster6.hrz.tu-darmstadt.de
    • 2x Xeon-AP, 104 CPU cores (AVX512)
    • 1024 GByte RAM, Red Hat Enterprise Linux 8.10
  • lcluster7.hrz.tu-darmstadt.de
    • 2x Xeon-AP, 104 CPU cores (AVX512)
    • 1024 GByte RAM, Red Hat Enterprise Linux 9.4

All login nodes above are part of the Lichtenberg High Performance Cluster.
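
For example, to open a shell on one of the login nodes (a minimal sketch; <TU-ID> is a placeholder for your own account name, not an actual setting from this page):

  # Connect to a login node via SSH; replace <TU-ID> with your account name.
  ssh <TU-ID>@lcluster1.hrz.tu-darmstadt.de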

If you cannot log in with an otherwise valid HLR account, follow the instructions in our FAQ.

The current number of available and allocated (occupied) CPU cores and accelerators (GPUs) can always be listed with the csinfo command.
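
For example, after logging in to any of the login nodes above:

  # List currently available and allocated CPU cores and GPUs.
  csinfo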

Stage 1 of Lichtenberg II

The compute resources of Lichtenberg II Stage 1 (avx512) are almost all available for regular use.

Stage 2 of Lichtenberg I

On 2021-05-31, the older Lichtenberg I (avx, avx2) was decommissioned.

Please make sure your job scripts no longer contain either of

-C avx
-C avx2

as such jobs would never start to run.

The only exception would be GPU jobs intended to run on the DGX accelerator nodes, as their CPUs are only avx2-capable.
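
For regular CPU jobs, a corrected job script simply omits the constraint. A minimal sketch follows; the job name, core count, run time, and executable are placeholder values, not recommendations from this page:

  #!/bin/bash
  #SBATCH -J myjob       # job name (placeholder)
  #SBATCH -n 96          # number of CPU cores (placeholder)
  #SBATCH -t 01:00:00    # wall-clock time limit (placeholder)
  # Note: no "-C avx" or "-C avx2" line here. Those nodes are gone,
  # and the regular Lichtenberg II CPU nodes are avx512-capable anyway.
  srun ./my_program      # placeholder executable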