The Lichtenberg HPC can only be used via projects, defining the approved amount of resources the project can allocate on the HPC. In other words, a project's allotted number of core*hours determines the “share” of the overall computing resources of the HPC for this project.
All core*hours used over the course of a project are charged to that project (much as money spent is debited from a bank account).
User vs. Project
Do not share your user account (neither password nor ssh keys)!
Collaboration is possible only through membership in a common project.
Expiration
As projects can have several users/members, and a given user can be member of several projects, the validity terms of HPC user accounts and HPC projects are completely independent of each other. Both can expire (run out) at different dates, and extending one does not imply extending the other.
Data Sharing
For projects with several TU-IDs assigned, a shared project directory in /work/projects can be set up, with an initial quota of 5 TByte.
As this is not done by default when an approved project is activated, please contact us and mention the project ID in the subject line of your mail.
Jobs vs. Project
Submitting batch jobs is not possible without (implicitly) specifying a project (the sbatch -A parameter). If a user does not explicitly specify sbatch -A <projectname>, the job will be charged to that user's default project.
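As a minimal sketch of explicit project selection (the project name p0012345, job name, and resource requests are placeholders, not real values):

#!/bin/bash
#SBATCH -A p0012345       # project to charge (placeholder name)
#SBATCH -J example        # job name
#SBATCH -n 1              # one task
#SBATCH -t 00:10:00       # 10 minutes wall time

srun ./my_program         # placeholder executable

Equivalently, the project can be given on the command line: sbatch -A p0012345 jobscript.sh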
Rules of Accounting
The Lichtenberg cluster runs in “user-exclusive” mode: a given compute node will always execute only jobs of the same user at the same time.
This in turn means that even a single (small) job blocks its assigned compute node for all other users. The accounting therefore books the full node's core*h on your project, even if your job does not use all of its cores!
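For example, a job running for 10 hours on one 96-core node is booked as 96 × 10 = 960 core*h, even if it actually keeps only 4 of those cores busy.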
For small jobs (without an overly large memory footprint), we recommend requesting a core count that evenly divides the number of cores per node, so that several such jobs can share a compute node without wasting resources. In our case of compute nodes with 96 cores: 2, 3, 4, 6, 8, 12, 16, 24, 32 or 48 cores per job (see the example script below).
For this to work, strictly avoid the #SBATCH --exclusive directive, as it would assign every (small) job its own, separate compute node!
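A minimal sketch combining both recommendations (core count, memory, and wall time are example values, and the project name is a placeholder):

#!/bin/bash
#SBATCH -A p0012345         # placeholder project name
#SBATCH -n 24               # 24 evenly divides 96, so up to four such jobs fit on one node
#SBATCH --mem-per-cpu=2000  # example value; a moderate footprint lets jobs co-schedule
#SBATCH -t 01:00:00         # example wall time
# deliberately no "#SBATCH --exclusive" line here

srun ./my_program           # placeholder executable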
Resources used
With the commands csum and csreport, any user can get an overview of their current overall resource consumption.
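For example, in a shell on a login node (assuming both tools are in your PATH; the exact output format is cluster-specific):

csum        # your current overall resource consumption, summarized
csreport    # your resource consumption, as a report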
Monthly Usage Report
At the end of each month, users receive an automatic email with a usage overview of all projects they are associated with (“Lichtenberg User Report”).
Due to changes in the cluster's job accounting, the former graphs of your projects' usage and job efficiency are currently unavailable.