Access and User Regulations
General Scientific/Research Usage
To access and use the Lichtenberg cluster as a scientist, the following is required:
1) Project: A project must be set up to which the computing time used can be billed. Please submit a Project Application for this purpose.
Usually, the project application is submitted by the project manager. NHR Large and NHR Normal projects are reviewed scientifically, according to the conditions of the NHR4CES Resource Allocation Board (RAB). Small projects are only checked for technical feasibility.
2) User account: A user account must be activated and assigned to a project, i.e. the project manager specifies who belongs to the project. Each user account is assigned to exactly one person and is managed via the TU-ID. For this purpose, an application for using the high-performance computer must be submitted: Application form
Your user account (2) can be extended even without a valid, running project, so that you retain access to your research data after a project ends (e.g. to transfer and back up data from the HPC file systems to your own storage).
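A minimal sketch of such a backup transfer, assuming a standard rsync over SSH; the user name, login host and paths below are placeholders, not the cluster's actual addresses:

    # Pull a project directory from the HPC file system to local storage.
    # <user>, <login-node> and both paths are placeholders: substitute your
    # own account, the cluster's login host and your actual directories.
    rsync -avP <user>@<login-node>:/path/to/project-data/ /local/backup/project-data/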
Lectures and workshops
For utilizing the Lichtenberg cluster in the context of lectures and courses, please find further details under “Lectures and workshops”.
Recommendation: For a smooth and efficient start with the Lichtenberg HPC, we advise all new users to attend the “Introduction to the Lichtenberg High Performance Computer” mentioned below.
News and Events
-
Introduction to the Lichtenberg High Performance Computer
2022/03/10
Attendance is free of charge
For (potential) users of the Lichtenberg supercomputer, an introduction is held on the second Tuesday of every month. Topics are the available hardware and software and the general use of the (batch) system. It takes place in hybrid mode (in person and as a webinar).
-
Power Rail Short & Fire L5|08 [Update 2025-04-08]
2025/03/25
HPC & Housing are inoperative
Due to a short circuit and a subsequent fire in the power rail system, the TURZ-L5|08 data center is currently inoperative, affecting the Lichtenberg HPC, the network and housing.
-
Preparations for Migration to RedHat EL 9
2025/02/24
Moving the cluster's operating system up a major release
Some login and compute nodes have already been migrated to RHEL 9.4.
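To check which release a particular node is running, the standard os-release file can be queried (a generic Linux check, not a cluster-specific tool; the output line is only illustrative):

    # Print the distribution name and version of the node you are logged in to.
    grep PRETTY_NAME /etc/os-release
    # e.g. PRETTY_NAME="Red Hat Enterprise Linux 9.4 (Plow)"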
-
Solved: Failure of the Cluster-wide File System
2024/11/04
System back to normal and available
+++ Update 17:00: The deadlock could only be fixed by a (hard) reset of several GPFS master servers and a reboot of all compute nodes. Hence, all jobs running at the time of the GPFS lockup were unfortunately lost. Unless you explicitly prohibited it (by using special parameters), the scheduler will restart those jobs on its own. +++
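The announcement does not name those parameters; as an illustration, assuming the Slurm batch system, automatic restarts can be suppressed per job with the --no-requeue option. The job name, resources and executable below are placeholders:

    #!/bin/bash
    # Sketch of a batch script that opts out of automatic requeueing,
    # assuming Slurm; all resource values and the executable are placeholders.
    #SBATCH --job-name=example
    #SBATCH --ntasks=1
    #SBATCH --time=00:10:00
    #SBATCH --no-requeue    # do not let the scheduler restart this job after a failure

    srun ./my_program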
-
New Defaults for OpenMP and Hybrid Programs
2024/10/24
-
HPC and Housing in L5|08: Downtime
2024/09/30
For operations on the power infrastructure
For the final repair of the 2000A power rail, the whole HPC cluster will be down.
-
HPC down due to failure of the cooling system
2024/05/05
The malfunction has been fixed, and the HPC is back to normal operation.