Login nodes

Cluster access for users is exclusively via login nodes. You can find a current list of accessible login nodes here.

Interactive access is provided only via ssh to one of the login nodes.

Password: your (initial) login password and how to change it.

ED25519 (preferred, but other key types are also allowed – see below)

  Public Key:          AAAAC3NzaC1lZDI1NTE5AAAAILTlBfX7g8HAbMy7x7vSS66HO6QtEItNByqtSkRUZauo
  Fingerprint:         SHA256:sCfHqKOHUK45d7XaHyWN/N8cTUd8Nh6o6T1ngNhbQa8
  Ciphers:             aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes256-ctr,aes256-cbc,aes128-gcm@openssh.com,aes128-ctr,aes128-cbc
  MACs:                hmac-sha2-256-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha2-256,hmac-sha1,umac-128@openssh.com,hmac-sha2-512
  Kex Algorithms:      curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
  Host Key Algorithms: ecdsa-sha2-nistp256,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-256,rsa-sha2-256-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-512-cert-v01@openssh.com,ssh-rsa,ssh-rsa-cert-v01@openssh.com

Because the login nodes face the public (and sometimes evil) internet, we have to install (security) updates from time to time. This will happen on short notice (30 minutes). Thus, do not expect all login nodes to be available 24/7.

To use the cluster, it is not sufficient to simply start your program on a login node!

The login nodes are not for “productive” or long-running calculations!

Since the login nodes are shared by all users of the HPC system, they are intended to be used only for

  • job preparation and submission
  • copying data in and out of the cluster (with scp, sftp or rsync via ssh – see the examples below)
  • short test runs of your program (≤ 30 minutes)
  • debugging your software
  • job status checking
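
Typical examples of such commands follow – the user name, login node name, paths and file names are placeholders for your own values:

    # copy a local input file to your home directory on the cluster
    scp input.dat <user>@<login-node>:~/project/

    # synchronize a results directory back to your local machine
    rsync -av <user>@<login-node>:~/project/results/ ./results/

    # check the status of your own jobs
    squeue -u $USER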

While test-driving your software on a login node, check the node's current CPU load with “top” or “uptime”, and reduce your impact by using fewer cores/threads/processes, or at least by running your program with “nice”.
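
For example (the program name is a placeholder):

    # check the current load of the login node
    uptime
    top            # press q to quit

    # run a short test with reduced scheduling priority
    nice -n 19 ./my_program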

From a login node, your productive calculations have to be submitted to the queue as batch jobs (usually with “sbatch”). For that, you need to specify the resources each job requires (e.g. amount of main memory, number of nodes (and tasks), maximum runtime).
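
A sketch of such a submission – the script name and the resource values are placeholders and depend on your job:

    # request 1 node, 4 tasks, 8 GB of main memory and 30 minutes of runtime
    sbatch --nodes=1 --ntasks=4 --mem=8G --time=00:30:00 my_job_script.sh

The resources can also be specified inside the batch script itself, as shown in the next section.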

Batch system Slurm

The arbitration, dispatching and processing of all user jobs on the cluster is organized by the Slurm batch system. Slurm calculates when and where a given job will be started, considering all jobs' resource requirements, the current workload of the system, the job's waiting time and the priority of the associated project.

When a job becomes eligible to run, it is dispatched by Slurm to one (or more) compute nodes and started there.

The batch system expects a batch script for each job (array), which contains

  • the resource requirements of the job in the form of #SBATCH … pragmas and
  • the actual commands and programs you want to be run in your job (see the example below).
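
A minimal sketch of such a batch script – the job name, resource values, module name and program call are placeholders for your own settings:

    #!/bin/bash
    #SBATCH --job-name=my_test         # name shown in the job queue
    #SBATCH --nodes=1                  # number of compute nodes
    #SBATCH --ntasks=4                 # number of tasks (processes)
    #SBATCH --mem=8G                   # main memory per node
    #SBATCH --time=01:00:00            # maximum runtime (hh:mm:ss)

    # the actual commands and programs to be run in the job
    module load my_software            # assuming an environment-modules setup
    srun ./my_program input.dat

It is then submitted from a login node with “sbatch my_job_script.sh”.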

The batch script is a plain text file (with UNIX line feeds!). It can either be created on your local PC and then transferred to the login node. On Windows, use “Notepad++” and switch to “Unix (LF)” in “Edit” – “EOL Conversion” before saving the script.

Or you can create it with UNIX editors on the login node itself and avoid the fuss with improper line feeds.
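
If a script written on Windows ends up with Windows line endings anyway, they can be converted on the login node, for example (assuming the dos2unix tool is installed; the file name is a placeholder):

    dos2unix my_job_script.sh

    # alternative without dos2unix: strip the carriage returns with sed
    sed -i 's/\r$//' my_job_script.sh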

Further information: