Login Nodes Currently Available
(as of Aug 2024)
Lichtenberg II Stage 1:
lcluster13.hrz.tu-darmstadt.de + GPU NVIDIA Tesla T4
lcluster14.hrz.tu-darmstadt.de
lcluster15.hrz.tu-darmstadt.de + GPU NVIDIA Tesla T4
lcluster16.hrz.tu-darmstadt.de
lcluster17.hrz.tu-darmstadt.de + GPU NVIDIA Tesla T4
lcluster18.hrz.tu-darmstadt.de
lcluster19.hrz.tu-darmstadt.de + GPU NVIDIA Tesla T4
lcluster20.hrz.tu-darmstadt.de [reserved for testing]
- 2x Xeon-AP, 96 CPU cores (AVX512)
- 768 GByte RAM, RedHat Enterprise Linux 8.10
Lichtenberg II Stage 2:
lcluster1.hrz.tu-darmstadt.de
lcluster2.hrz.tu-darmstadt.de
lcluster6.hrz.tu-darmstadt.de
- 2x Xeon-AP, 104 CPU cores (AVX512)
- 1024 GByte RAM, RedHat Enterprise Linux 8.10
lcluster7.hrz.tu-darmstadt.de
- 2x Xeon-AP, 104 CPU cores (AVX512)
- 1024 GByte RAM, RedHat Enterprise Linux 9.4
All login nodes above are part of the Lichtenberg High Performance Cluster.
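Connecting to any of the login nodes listed above is a plain SSH login. A minimal sketch (the user name is a placeholder for your own HLR account name; lcluster13 is just one example host):

# substitute your own HLR account name and preferred login node
ssh your-hlr-account@lcluster13.hrz.tu-darmstadt.de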
If you cannot log in with an otherwise valid HLR account, follow the instructions in our FAQ.
The current number of CPU cores and accelerators (GPUs) available and allocated (occupied) can always be listed with the csinfo command.
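For example, once logged in to a login node, it can be run interactively (a minimal sketch; the exact output layout depends on the installed csinfo version):

# on a login node: show available vs. allocated CPU cores and GPUs
csinfo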
Stage 2 of Lichtenberg I
Since 2021-05-31, the older Lichtenberg I (avx, avx2) has been decommissioned.
Please make sure your job scripts no longer contain the constraints -C avx or -C avx2, as such jobs would never start to run (see the sketch below).
The only exception would be GPU jobs intended to be run on the DGX accelerator nodes, as their CPUs are only avx2 capable.
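As a sketch of what this means for a batch script (assuming Slurm directives as used on Lichtenberg; job name, resources and the program to run are placeholders), a job that previously requested the retired Lichtenberg I features should simply drop the constraint:

#!/bin/bash
#SBATCH --job-name=example        # placeholder job name
#SBATCH --ntasks=1
#SBATCH --time=00:30:00
## Do NOT keep constraints of the decommissioned Lichtenberg I:
## #SBATCH -C avx
## #SBATCH -C avx2   (only meaningful for jobs on the DGX accelerator nodes)

srun ./my_program                 # placeholder executable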