We have a computer whose CPU has 32 cores and it's going to be used for running programs by a few different users. Is there any way to restrict the number of cores each user can use at any time so that one user will not monopolize all the CPU power?
While this is possible, it is complicated and almost certainly a bad idea. If only one user is using the machine at the moment, restricting them to N cores is a waste of resources. A far better approach would be to run everything with nice: it is a great tool that sets the priority of a process. So if only one user is running something, they'll get as much CPU time as they need, but if someone else launches their own (also niced) job, the two will be nice and share with each other. That way, if your users all launch commands with nice -n 10 command, nobody will be hogging resources (and nobody will bring the server to its knees). Note that a high nice value means a low priority: niceness is a measure of how nice we should be, and the nicer we are, the more we share.
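For example (the job name and PID below are made up for illustration), a user could start a job at a lower priority, and the priority of an already running process can be lowered further with renice:

    # start the job with a niceness of 10 (lower priority than the default of 0)
    nice -n 10 ./long_job

    # lower the priority of an already running process (here PID 1234) even further
    renice -n 15 -p 1234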
Also note that this will not help manage memory allocation; it only affects CPU scheduling. So if multiple users launch multiple memory-intensive processes, you will still have a problem. If that is an issue, you should look into a proper queuing system such as TORQUE.
TL;DR: From brief research, it appears it is possible to restrict commands to a specific number of cores; however, in all cases you have to use a command which actually enforces the restriction.
cgroups
Linux has cgroups, which are frequently used exactly for the purpose of restricting the resources available to processes. From a very brief research, you can find an example on the Arch Wiki where MATLAB (a scientific software package) is confined via a configuration set in /etc/cgconfig.conf.
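A minimal sketch of such a cgroup definition, along the lines of that wiki example (the group name matlab, the user name username and the CPU/memory limits are illustrative):

    # /etc/cgconfig.conf - confine the "matlab" group to 6 CPUs and ~5 GB of RAM
    group matlab {
        perm {
            admin {
                uid = username;
            }
            task {
                uid = username;
            }
        }
        cpuset {
            cpuset.mems = "0";
            cpuset.cpus = "0-5";
        }
        memory {
            memory.limit_in_bytes = 5000000000;
        }
    }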
In order for such a config to take effect, you have to run the process via the cgexec command, e.g. from the same wiki page.
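Along the lines of that page, the application is then launched inside the cgroup (the MATLAB install path here is illustrative):

    cgexec -g memory,cpuset:matlab /opt/MATLAB/R2012b/bin/matlab -desktop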
taskset
A related question on Ask Ubuntu and "How to limit a process to one CPU core in Linux?" on the Unix & Linux site show examples of using taskset to limit the CPUs a process may run on. In the first question, this is achieved by parsing all processes for a particular user; in the other question, a process is started via taskset itself.
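A sketch of both approaches (the user name, CPU list and program name are illustrative, and changing other users' processes requires root):

    # pin all existing processes of user "alice" to CPUs 0-3
    for pid in $(ps -u alice -o pid --no-headers); do
        taskset -cp 0-3 "$pid"
    done

    # start a new process restricted to CPUs 0 and 1
    taskset -c 0,1 ./cpu_hungry_program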
Conclusion
While it is certainly possible to limit processes, it seems it is not so simple to achieve that for particular users. The example in the linked Ask Ubuntu post would require constantly scanning for processes belonging to each user and using taskset on each new one. A far more reasonable approach would be to selectively run CPU-intensive applications via either cgexec or taskset; it also makes no sense to restrict all processes to a specific number of CPUs, especially those that actually make use of parallelism and concurrency to run their tasks faster: limiting them to a specific number of CPUs can have the effect of slowing down the processing. Additionally, as terdon's answer mentioned, it is a waste of resources.

Running select applications via taskset or cgexec requires communicating with your users to let them know which applications they can run, or creating wrapper scripts which launch those applications via taskset or cgexec.
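Such a wrapper can be a small shell script placed ahead of the real binary in PATH; the application name and CPU list here are illustrative:

    #!/bin/sh
    # wrapper that always starts the real application pinned to CPUs 0-7
    exec taskset -c 0-7 /usr/local/bin/heavy_app "$@"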
Additionally, consider limiting the number of processes a user or group can spawn instead of limiting the number of CPUs. This can be achieved via the /etc/security/limits.conf file.
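For instance, a sketch of such a limit for a group (the group name and the value are illustrative):

    # /etc/security/limits.conf - members of group "students" may run at most 100 processes each
    @students        hard    nproc           100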
See also