I'm a relative latecomer to the virtualisation party, so you'll have to forgive me if this seems like an obvious question.
If I have a server with 12 cores available, does each KVM guest have access to all 12 cores? I understand KVM makes use of the Linux scheduler, but that's where my understanding of "what happens next" ends.
My reason for asking is, the 10 or so distinct tasks we are intending to run in KVM guests (for purposes of isolation to facilitate upgrades) won't utilise a single core 100% of the time, so on that basis it seems wasteful to have to allocate 1 virtual CPU to each guest - we'll be out of cores from the get-go with a "full", idle server to show for it.
Put another way, assuming my description above, does 1 virtual CPU actually equate to 12 physical cores in terms of processing power? Or is that not how it works?
Many thanks
Steve
A virtual CPU equates to 1 physical core, but when your VM attempts to process something, it can potentially run on any of the cores that happen to be available at that moment. The scheduler handles this, and the VM is not aware of it. You can assign multiple vCPUs to a VM which allows it to run concurrently across several cores.
Cores are shared between all VMs as needed, so you could have a 4-core system, and 10 VMs running on it with 2 vCPUs assigned to each. VMs share all the cores in your system quite efficiently as determined by the scheduler. This is one of the main benefits of virtualization - making the most use of under-subscribed resources to power multiple OS instances.
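For what it's worth, in KVM under libvirt that per-guest vCPU count is just one line in the domain XML. A minimal sketch (the guest name and the count of 2 vCPUs are illustrative, not anything from your setup):

```xml
<!-- Hypothetical libvirt domain definition: a guest with 2 vCPUs.
     Ten guests defined like this on a 4-core host oversubscribe it 5:1,
     which is fine so long as they aren't all CPU-busy at the same time. -->
<domain type='kvm'>
  <name>guest01</name>
  <vcpu placement='static'>2</vcpu>
  ...
</domain>
```

If I remember rightly you can also adjust this on an existing guest with `virsh setvcpus`, subject to the configured maximum.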
If your VMs are busy enough to contend for CPU, the outcome is simply that some VMs have to wait their turn for CPU time. Again, this is transparent to the VM and handled by the scheduler.
I'm not familiar with KVM but all of the above is generic behavior for most virtualization systems.
A virtual CPU is a thread in the qemu-kvm process; qemu-kvm is, of course, multithreaded.
Unless you pin those threads to specific CPUs, the system scheduler allocates them CPU time from whatever cores are available. In other words, any vCPU can end up getting cycles from any physical core, unless it is specifically pinned to particular core(s).
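To illustrate the pinning case (the core numbers below are made up), libvirt expresses it as a `<cputune>` block in the domain XML:

```xml
<!-- Hypothetical example: pin vCPU 0 to physical core 2 and vCPU 1 to core 3.
     Without this block, the Linux scheduler is free to run each vCPU thread
     on any available core. -->
<domain type='kvm'>
  <vcpu>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
  </cputune>
</domain>
```

The same effect can be had at runtime with `virsh vcpupin`, or — since the vCPUs are just ordinary threads — with `taskset` on the thread IDs.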
I would recommend checking which vCPU assignment gives your VM the best performance. Also confirm whether your workload is CPU-bound or I/O-bound; there are lots of free benchmarking tools available for this, such as Apache Bench for I/O-bound operations and John the Ripper for CPU-bound operations. Once you have confirmed which vCPU assignment performs best, you can distribute and configure the other VMs the same way, because a careless vCPU assignment can have a negative performance impact.
For example, if you have a host with 4 cores and you run one VM with a single vCPU assigned, its performance will be the lowest: the VM can use only one of the 4 cores at a time, and the other 3 cores go unused, because the Virtual Machine Monitor (VMM, or hypervisor) is not aware of the load inside the VM. You will get the maximum throughput by assigning 4 vCPUs, in which case all the cores of the host can be used at the same time, provided your workload is multithreaded. However, if you go one step further and over-allocate — assigning more vCPUs than the 4 available cores — you can get performance degradation instead. Over-allocation is only a good idea when you have several VMs and want all the cores to stay utilised even if one of the VMs is migrated to another server or turned off; otherwise the extra vCPUs are wasted.