I am planning to build a multi-OS workstation managed by (probably) KVM, on which I will do a variety of tasks. Some of these tasks parallelize better than others, so I want to maximize per-core clock speed. To this end, I am considering the pros and cons of a dual-socket setup, which would let me get higher clock speeds for the same total core count. However, it is my understanding that the usefulness of dual-socket builds is limited by slow communication between the CPUs. So my thought is that if I allot resources intelligently, dual socket might work well, but if not, it could be a disaster.
So here are a few things that I'd like to understand:
If the host OS is exclusively using one socket and the actively used guest is exclusively using the other socket, how much will those two sockets need to communicate?
How much does the hypervisor benefit from having access to more cores?
How smart is KVM (or other hypervisors) in terms of allotting resources between CPU sockets vs. CPU cores? Are there some things I should set manually and others I should let the hypervisor decide?
An important consideration is that at any given time, only one or at most two VMs will be needing lots of resources, the other two or three should be pretty light at all times.
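For the pinning question above, libvirt lets you express both vCPU placement and memory placement per guest in the domain XML. The sketch below assumes a host where CPUs 8–15 belong to socket/NUMA node 1 (verify with `lscpu`); the CPU numbers and the 4-vCPU guest are illustrative, not a recommendation:

```xml
<!-- Fragment of a libvirt domain definition (edit with `virsh edit <guest>`).
     Assumes host CPUs 8-11 sit on NUMA node 1; adjust to your topology. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- Pin each vCPU to a specific host CPU on the second socket -->
  <vcpupin vcpu='0' cpuset='8'/>
  <vcpupin vcpu='1' cpuset='9'/>
  <vcpupin vcpu='2' cpuset='10'/>
  <vcpupin vcpu='3' cpuset='11'/>
</cputune>
<numatune>
  <!-- Allocate the guest's memory only from NUMA node 1, so memory
       stays local to the socket the vCPUs run on -->
  <memory mode='strict' nodeset='1'/>
</numatune>
```

With both the vCPUs and the memory confined to one node, the guest should generate little cross-socket traffic of its own; what remains is mostly I/O and interrupt handling done by the host.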
In the documentation for RHEL 7, you can pin vCPUs to their physical CPU counterparts with `virsh vcpupin`, and bind a guest's memory to a specific NUMA node with `numatune`. Use `lscpu` and `lstopo` to visualize your NUMA node topology before setting up `numatune`.
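A minimal session tying those tools together might look like the following. The guest name `guest1` and the CPU/node numbers are placeholders; substitute your own from the `lscpu` output:

```shell
# Inspect the host's NUMA topology
lscpu | grep -i numa      # e.g. "NUMA node0 CPU(s): 0-7" / "NUMA node1 CPU(s): 8-15"
lstopo                    # hwloc's diagram of sockets, caches, and cores

# Pin vCPU 0 of a running guest (hypothetical name "guest1") to host CPU 8,
# i.e. a core on the second socket in the example topology above
virsh vcpupin guest1 0 8

# Bind the guest's memory allocations to NUMA node 1 so they stay
# local to the socket its vCPUs run on
virsh numatune guest1 --mode strict --nodeset 1 --live

# Confirm the resulting placement
virsh vcpuinfo guest1
```

Changes made with `--live` apply only to the running guest; add `--config` as well if you want them to persist across restarts.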