I'm running a setup consisting of a Linux host OS and a Windows 7 guest (VMware Workstation). I'm trying to run 16 CPU-bound background jobs on the Linux host at nice values of 19 (the lowest possible priority; one for each virtual CPU) and simultaneously use the Windows VM as a normal desktop OS. For some reason the Linux background jobs make my Windows VM grind to a halt even though VMware's nice value is 0.
If it helps, I'm running an 8-core machine with hyperthreading, so 16 virtual CPUs. Since VMware Workstation only supports virtualizing 8 cores, only 8 of the cores are visible in the Windows guest.
Edit: The background jobs I'm running are almost purely CPU bound and perform virtually no I/O.
Edit #2: It's not an issue with hyperthreading messing up scheduling. Disabling hyperthreading in the BIOS doesn't help.
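For reference, the jobs are started roughly like this (./crunch is just a hypothetical stand-in for my actual workload):

    # one nice-19 background job per virtual CPU; output I don't care about goes to /dev/null
    for i in $(seq 1 16); do
        nice -n 19 ./crunch > /dev/null &
    done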
You can check whether it is really CPU or rather I/O that slows your system down.
vmstat 1

might be a good idea, and maybe top. The nice-19 processes are supposed to do something, right? Remember that a desktop HDD cannot handle more than ~100 random I/Os per second. Nice'd processes should get less I/O priority, but so many of them will still generate enough to hurt.
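For example, the cpu columns of vmstat should tell you quickly which it is (a rough sketch; iostat comes from the sysstat package):

    # sample once a second: a high 'wa' (I/O wait) column points at disk, a high 'us' column at CPU
    vmstat 1
    # per-device utilization, if sysstat is installed
    iostat -x 1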
I realized what was going on here. The jobs I was running were dumping a bunch of output I didn't care about to /dev/null. Running other background jobs that really are purely CPU-bound works fine.
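For anyone hitting the same thing, this is the kind of purely CPU-bound test load I mean (a sketch, assuming a POSIX shell):

    # one busy loop per virtual CPU: no output, no I/O, just CPU
    for i in $(seq 1 16); do
        nice -n 19 sh -c 'while :; do :; done' &
    done

With these running at nice 19, the VM stays responsive.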