I'm running Slurm 21.08.5 installed through apt.
My cluster has 4 GPU machines (nd-gpu[001-004]) with 8 GPUs each. I can run jobs as
srun --gres=gpu:8 nvidia-smi -L
and I see my GPUs. I can also schedule real jobs with anywhere from 0 to 8 GPUs. However, the scheduling of resources isn't working properly. If I run:
srun --gres=gpu:1 sleep 1000
the entire node is allocated, and I can't use the remaining 7 GPUs on that node.
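For reference, this is how I look at what actually gets allocated while that sleep job is running (the job id and node name are placeholders; use whatever squeue reports):
squeue
scontrol show job <jobid> | grep -i tres
scontrol show node nd-gpu001 | grep -iE 'cfgtres|alloctres'
The CfgTRES and AllocTRES lines show what the controller has configured on the node versus what the one-GPU job is actually holding.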
What follows is the gres.conf:
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia0
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia1
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia2
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia3
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia4
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia5
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia6
NodeName=nd-gpu001 Name=gpu File=/dev/nvidia7
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia0
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia1
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia2
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia3
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia4
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia5
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia6
NodeName=nd-gpu002 Name=gpu File=/dev/nvidia7
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia0
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia1
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia2
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia3
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia4
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia5
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia6
NodeName=nd-gpu003 Name=gpu File=/dev/nvidia7
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia0
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia1
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia2
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia3
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia4
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia5
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia6
NodeName=nd-gpu004 Name=gpu File=/dev/nvidia7
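As an aside, gres.conf accepts node and device ranges, so (if I'm reading the man page right) the whole file above should collapse to a single line, assuming the same device files exist on every node:
NodeName=nd-gpu[001-004] Name=gpu File=/dev/nvidia[0-7]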
Here is the slurm.conf:
# See the slurm.conf man page for more information.
#
ClusterName=cluster
SlurmctldHost=nd-cpu01
SlurmctldHost=nd-cpu02
#
#GresTypes=
#GroupUpdateForce=0
ProctrackType=proctrack/cgroup
ReturnToService=1
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
StateSaveLocation=/home/slurm/slurmctd
TaskPlugin=task/affinity,task/cgroup
# TIMERS
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
# SCHEDULING
SchedulerType=sched/backfill
SelectType=select/cons_tres
# LOGGING AND ACCOUNTING
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
SlurmctldDebug=info
SlurmctldLogFile=/var/log/slurmctld.log
SlurmdDebug=info
SlurmdLogFile=/var/log/slurmd.log
# COMPUTE NODES
GresTypes=gpu
NodeName=nd-gpu[001-004] Sockets=2 CoresPerSocket=56 ThreadsPerCore=1 State=UNKNOWN Gres=gpu:8
PartitionName=debug Nodes=ALL Default=YES MaxTime=INFINITE State=UP
and the cgroup.conf:
ConstrainDevices=yes
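For completeness, this is how I check what the controller actually parsed from these files (node and partition names as in the configs above):
scontrol show config | grep -i select
scontrol show partition debug
scontrol show node nd-gpu001 | grep -i gres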
I ran into the same problem when I set up my cluster.
You already have SelectType=select/cons_tres, which is good. Additionally, you need SelectTypeParameters, and you need to set OverSubscribe on the partition. OverSubscribe defaults to NO; setting it to YES will allow jobs to oversubscribe, but then it has to be requested explicitly with each job, while setting it to FORCE enables it for all jobs, which is usually what you want. (Technically you only need OverSubscribe, but it has no effect until SelectTypeParameters is also set.)
The resolution for me was along those lines: it seems my local setup wasn't really able to work out how much memory could be allocated per core, so going by the number of CPUs, together with the gres.conf and the node specification, did the trick.
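To make that concrete, here is a minimal sketch of the kind of slurm.conf changes described above, assuming CR_CPU as the consumable resource and OverSubscribe=FORCE on the partition; the SelectTypeParameters value, the CPUs= count, and the partition layout are assumptions that have to match your own nodes:
# SCHEDULING: hand out CPUs/GPUs as consumable resources instead of whole nodes
SelectType=select/cons_tres
SelectTypeParameters=CR_CPU
# COMPUTE NODES: 2 x 56 cores per node, 8 GPUs (CPUs=112 is just the product of the topology)
GresTypes=gpu
NodeName=nd-gpu[001-004] CPUs=112 Sockets=2 CoresPerSocket=56 ThreadsPerCore=1 Gres=gpu:8 State=UNKNOWN
# PARTITION: FORCE lets jobs share a node without requesting oversubscription explicitly
PartitionName=debug Nodes=ALL Default=YES MaxTime=INFINITE State=UP OverSubscribe=FORCE
After copying the updated slurm.conf to every node and restarting slurmctld and slurmd, two srun --gres=gpu:1 jobs should be able to run side by side on the same node.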