Login nodes

Cluster access for users is exclusively via login nodes. You can find a current list of accessible login nodes here.

Interactive access is provided only by ssh to one of the login nodes.
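
For example, an interactive login could look like the following; the host name login1.example.org and the user name myuser are placeholders, to be replaced by one of the login nodes from the list above and your own account name:

    ssh myuser@login1.example.org

On the first connection, ssh displays the host key fingerprint; compare it against the SHA256 fingerprints listed below before you accept it.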

Password: your (initial) login password and how to change it.

SSH host keys and supported algorithms of the login nodes:

  • ED25519 public key (preferred): AAAAC3NzaC1lZDI1NTE5AAAAILTlBfX7g8HAbMy7x7vSS66HO6QtEItNByqtSkRUZauo
    Fingerprint: SHA256:sCfHqKOHUK45d7XaHyWN/N8cTUd8Nh6o6T1ngNhbQa8
  • RSA public key: AAAAB3NzaC1yc2EAAAADAQABAAABAQDQSjjB2sizT3IlK0lxy0kxZiqcHOiyrnPinD3isaz4XOtsnR79Co123xiBOZEtcIMHBi8HIpuYLd4pCQiEtSRU0cUZoCLD26gZfakJJixHSjg1/FgXkgvSNDagAyt+edwXxzR3dTk2rmKPtnUezgdqi+nNmEqrOe+7GCHZEJyTuuc4Z3pUjd/rDgbJewSDQlPFolKN+cGxlfI7/y7BuA+WdWLYn6Q4dqa9zjdyTsSMwcFyNMwtvrNU/fOKrkqVTPV1bdTdDrApAzUyZC1ppbLWlvTPbMiHbMMQlnQ2UFJEfLja9CIGkFV4HSCUtt3eN+THx8gw3P9gxd0vGhhiIown
    Fingerprint: SHA256:7DwyCfkMxPWNMlm0rwiiTC9GwC6VD83+q7EKROB3Rl0
  • Ciphers: chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
  • MACs: hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,umac-128-etm@openssh.com
  • Kex algorithms: curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512
  • Host key algorithms: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-256,rsa-sha2-512,ssh-rsa-cert-v01@openssh.com
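
If you want ssh to verify the login nodes without an interactive fingerprint prompt, you can pin the keys above in your known_hosts file before the first connection. This is only a sketch; login1.example.org is a placeholder for the actual login node name, and the RSA key can be added on an analogous line:

    # ~/.ssh/known_hosts on your local machine: host name, key type, public key
    login1.example.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILTlBfX7g8HAbMy7x7vSS66HO6QtEItNByqtSkRUZauo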

Because the login nodes face the public (and sometimes evil) internet, we have to install (security) updates from time to time. These updates happen on short notice (30 minutes), so do not expect all login nodes to be available 24h/7d.

To use the cluster, it is not sufficient to simply start your program on a login node!

The login nodes are not for “productive” or long-running calculations!

Because they are shared by all users of the HPC system, the login nodes are intended to be used only for

  • job preparation and submission
  • copying data in and out of the cluster with scp (see the example after this list)
  • short test runs of your program (≤ 30 minutes)
  • debugging your software
  • job status checking
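
For copying data, scp works in both directions; the host name login1.example.org, the user name myuser and the file names below are placeholders:

    # copy a local input file to the cluster
    scp input.dat myuser@login1.example.org:~/project/
    # fetch a result file back to your local machine
    scp myuser@login1.example.org:~/project/result.dat .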

While test-driving your software on a login node, check the node's current CPU load with “top” or “uptime”, and reduce your impact by using fewer cores/threads/processes, or at least by running your program under “nice”.
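
The following illustrates such a considerate test run; ./my_test, the thread count and the nice level are only examples and assume a program that respects OMP_NUM_THREADS:

    # show the current load of the login node (1-, 5- and 15-minute load averages)
    uptime

    # run a short test with low CPU priority and a limited number of threads
    OMP_NUM_THREADS=2 nice -n 19 ./my_test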

From a login node, your productive calculations need to be submitted as batch jobs into the queue (usually with “sbatch”). For that, you need to specify the resources each job requires (e.g. the amount of main memory, the number of nodes (and tasks), and the maximum runtime).
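
As a sketch, such resource requirements can be passed directly on the sbatch command line; the values and the script name job.sh below are illustrative, not a recommendation for any particular project:

    sbatch --nodes=1 --ntasks=4 --mem=8G --time=02:00:00 job.sh

Alternatively (and more commonly), the same requirements are written into the batch script itself, as described in the next section.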

Batch system Slurm

The arbitration, dispatching and processing of all user jobs on the cluster is organized with the Slurm batch system. Slurm calculates when and where a given job will be started, taking into account the resource requirements of all jobs, the current workload of the system, each job's waiting time, and the priority of the associated project.

Once a job is eligible to run, Slurm dispatches it to one (or more) compute nodes and starts it there.
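
Whether a job is still waiting or has already been dispatched can be checked with squeue; the -u option restricts the output to your own jobs:

    # list all of your own jobs and their current state
    squeue -u $USER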

The batch system expects a batch script for each job (array), which contains

  • the resource requirements of the job in the form of #SBATCH … pragmas and
  • the actual commands and programs you want to be run in your job.
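
A minimal batch script could look like the following; the job name, the resource values and the program ./my_program are placeholders that must be adapted to your actual requirements:

    #!/bin/bash
    # resource requirements of the job (name in the queue, nodes, tasks,
    # main memory per node, maximum runtime hh:mm:ss)
    #SBATCH --job-name=my_job
    #SBATCH --nodes=1
    #SBATCH --ntasks=4
    #SBATCH --mem=8G
    #SBATCH --time=02:00:00

    # the actual commands and programs to be run in the job
    srun ./my_program

Saved e.g. as job.sh, it would be submitted with “sbatch job.sh”.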

The batch script is a plain text file (with UNIX line feeds!). It can either be created on your local PC and then transferred to the login node, or created with a UNIX editor on the login node itself, which avoids any fuss with improper line feeds. If you write the script on Windows, use “Notepad++” and switch to UNIX (LF) line endings in “Edit” – “EOL Conversion” before saving it.
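
If a script was written on Windows and you are unsure whether it still contains CR/LF line endings, you can check and convert it on the login node. A small sketch; job.sh is a placeholder, and the dos2unix tool is assumed to be installed (the sed line is a fallback that works without it):

    # "with CRLF line terminators" in the output indicates Windows line endings
    file job.sh

    # convert CRLF to LF in place
    dos2unix job.sh
    # alternative without dos2unix:
    sed -i 's/\r$//' job.sh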

Further information: