Computing resources

Partitions/Queues and Time Limits

In most cases, you do not need to specify a Slurm partition: an automatic job submission mechanism assigns your job to a suitable partition for you.

For special cases (e.g. lectures and courses) you may need to specify an account, a reservation or a partition in your job scripts. For this, you will get additional details separately.

Depending on the maximum runtime of a job (-t or --time), it is assigned to a suitable partition (short, deflt, long; see the table below). Partitions for jobs with longer runtimes have fewer hardware resources assigned to them, so their queueing/pending times will likely be longer.

Job runtime requirement    Partition name   Nodes assigned
(-t / --time=…)                             MPI                 Accelerators
≦ 30'                      deflt_short      all + 4 exclusive
                           acc_short                            8
> 30' ≦ 24h                deflt            all
                           acc
> 24h ≦ 7d                 long             329
                           acc_long                             6
Jobs with a runtime longer than 7d are only possible after coordination with the HPC team and with the use of a special reservation.
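
For illustration, a minimal job script sketch (job name, memory value and program are placeholders): only the runtime is requested, and the scheduler routes the job to the matching partition.

  #!/bin/bash
  #SBATCH -J short_test            # job name (placeholder)
  #SBATCH -t 00:25:00              # ≦ 30 minutes, so the job lands in the *_short partitions
  #SBATCH -n 1                     # one task
  #SBATCH --mem-per-cpu=2000       # memory per core in MByte (placeholder value)

  # No -p/--partition line is needed: the runtime request determines the partition.
  srun ./my_program                # placeholder executable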

Configuration of batch jobs for certain hardware

By default, jobs will be dispatched to any compute node(s) of the cluster, i.e. to nodes of all phases (expansion stages) and types.

For special cases like programs requiring specific hardware, you need to specify the corresponding resource requirements. The most common distinctions are by CPU architecture and by accelerator type, but you can also request a particular section, as listed in the following tables.

All other resource requirements, such as the projected runtime and memory consumption, are automatically mapped to suitable node types and sections.
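
As a sketch of this automatic placement (values and program name are placeholders), a plain runtime and memory request is enough; no node type has to be named:

  #SBATCH -t 02:00:00                # runtime below 24h
  #SBATCH --mem=500G                 # memory per node (placeholder value); Slurm picks nodes with enough RAM

  srun ./my_memory_hungry_program    # placeholder executable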

Processor type

Resources       Section   Node range   Details
avx512          MPI 3     mpsc         MPI section, LB 2 phase I
                NVD 3     gvqc, gaqc   ACC section, LB 2 phase I
                MEM 3     mpqc         MEM section, LB 2 phase I
avx2 (or dgx)   DGX 1     gaoc         ACC section, LB 2 phase I, DGX A100
Accelerator type
(selected by “Generic Resources” instead of by “constraint/feature”)

GRes              Accelerator type    Node range   Details
--gres=gpu        Nvidia (all)        gvqc, gaqc   ACC section (all)
--gres=gpu:v100   Nvidia Volta 100    gvqc         ACC section, LB 2 phase I
--gres=gpu:a100   Nvidia Ampere 100   gaqc         ACC section, LB 2 phase I
Sections

Resources   Section name   Node range   Details
mpi         MPI            mpsc         MPI sections (all)
mem1536g    MEM            mpqc         MEM section, LB 2 phase I

Resources, also known as “features”, can be requested with the parameter -C (“constraint”).

Several resource requirements can be combined with either & (logical AND) or | (logical OR) – see the examples below.
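
The sketch below shows where such constraints go in a batch script (job name and program are placeholders; the constraint itself is taken from the examples further down):

  #!/bin/bash
  #SBATCH -J bigmem_avx512             # job name (placeholder)
  #SBATCH -t 06:00:00
  #SBATCH -n 16
  #SBATCH -C "avx512&mem1536g"         # AVX512 nodes AND 1.5 TByte RAM (logical AND)
  # #SBATCH -C "avx512|avx2"           # alternative: either architecture (logical OR)

  srun ./my_program                    # placeholder executable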

However, GPU accelerators are no longer requested just by feature, but by GRes:

--gres=class:type:#   accelerator specification, e.g. GPUs
(if not specified, the defaults are type=any and #=1)

  • --gres=gpu – requests 1 GPU accelerator card of any type
  • --gres=gpu:v100 – requests 1 NVidia “Volta 100” card
  • --gres=gpu:a100:3 – requests 3 NVidia “Ampere 100” cards

To have your job scripts (and programs) adapt automatically to the number of (requested) GPUs, you can use the variable $SLURM_GPUS_ON_NODE wherever your programs expect the number of GPUs to use (e.g. … --num-devices=$SLURM_GPUS_ON_NODE).
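
A minimal single-node GPU job sketch (job name, program and its --num-devices option are placeholders following the example above):

  #!/bin/bash
  #SBATCH -J gpu_test                  # job name (placeholder)
  #SBATCH -t 01:00:00
  #SBATCH -n 1
  #SBATCH --gres=gpu:a100:2            # 2 “Ampere 100” cards on one node

  # Slurm sets $SLURM_GPUS_ON_NODE to the number of GPUs allocated on the node (here: 2)
  srun ./my_gpu_program --num-devices=$SLURM_GPUS_ON_NODE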

If you need more than one GPU node for distributed Machine/Deep Learning (e.g. using “horovod”), the job needs to request several GPU nodes explicitly using -N # (# = 2-8). Consequently, the number of tasks requested with -n # needs to be equal to or greater than the number of nodes.
Since “GRes” are requested per node, you should not exceed --gres=gpu:4, even when using several 4-GPU nodes.
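
A sketch of such a multi-node GPU request (training script and task layout are placeholders, not a tested recipe):

  #!/bin/bash
  #SBATCH -J dist_training             # job name (placeholder)
  #SBATCH -t 12:00:00
  #SBATCH -N 2                         # two GPU nodes
  #SBATCH -n 8                         # at least as many tasks as nodes (here: 4 per node)
  #SBATCH --gres=gpu:a100:4            # “GRes” are per node: 4 GPUs on each of the 2 nodes

  # 2 nodes x 4 GPUs = 8 workers in total (placeholder command line)
  srun python train.py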

Examples

-C avx512
    requests nodes with CPUs sporting “Advanced Vector Extensions (512 bit)”

-C "avx512&mem1536g"
    requests nodes with the AVX512 instruction set AND 1.5 TByte RAM

-C avx512
--gres=gpu:v100:2
    requests nodes with CPU architecture “avx512” and 2 GPUs of type “Volta 100”
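
To check which features and GRes the nodes actually advertise, the standard Slurm query tools can be used – a sketch (the node name is a placeholder; output columns depend on the local configuration):

  # list all nodes with their features (constraints) and generic resources
  sinfo -N -o "%N %f %G"

  # show features and GRes of a single node in detail (placeholder node name)
  scontrol show node mpsc0001 | grep -E "AvailableFeatures|Gres"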