We have a Heartbeat/DRBD/Pacemaker/KVM/QEMU/libvirt cluster consisting of two nodes. Each node runs Ubuntu 12.04 64-bit with the following packages/versions:
- Kernel 3.2.0-32-generic #51-Ubuntu SMP
- DRBD 8.3.11
- qemu-kvm 1.0+noroms-0ubuntu14.3
- libvirt 0.9.13
- pacemaker 1.1.7
- heartbeat 3.0.5
The virtual guests are running Ubuntu 10.04 64-bit and Ubuntu 12.04 64-bit. We use a libvirt feature to pass the capabilities of the host CPUs through to the virtual guests in order to achieve the best CPU performance.
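The libvirt feature referred to here is presumably the CPU model setting in the guest's domain XML; a minimal sketch (the mode names are libvirt's, but whether this cluster uses host-model or host-passthrough is an assumption):

```xml
<!-- Sketch: expose the host CPU's capabilities to the guest.
     'host-model' picks the closest named model plus extra features;
     'host-passthrough' forwards the host CPU as-is. -->
<cpu mode='host-model'/>
```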
Now here is a common setup on this cluster:
- VM "monitoring" has 4 vCPUs
- VM "monitoring" uses IDE as its disk interface (we are currently switching to VirtIO for obvious reasons)
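For reference, the IDE-to-VirtIO switch mentioned above is done in the guest's disk definition in the domain XML; a sketch, where the image path, driver type, and device name are placeholders rather than this cluster's real config:

```xml
<!-- Sketch: paravirtualized disk instead of emulated IDE.
     bus='virtio' is the key change; dev becomes vdX instead of hdX. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/monitoring.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```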
We recently ran some simple tests. I know they are not rigorous benchmarks, but they already show a strong trend:
Node A is running VM "bla"; Node B is running VM "monitoring".
When we rsync a file from VM "bla" to VM "monitoring" we achieve only 12 MB/s. When we perform a simple dd if=/dev/zero of=/tmp/blubb inside the VM "monitoring" we achieve around 30 MB/s.
Then we added 4 more vCPUs to the VM "monitoring" and restarted it. The VM "monitoring" now has 8 vCPUs. We re-ran the tests with the following results: When we rsync a file from VM "bla" to VM "monitoring" we now achieve 36 MB/s. When we perform a simple dd if=/dev/zero of=/tmp/blubb inside the VM "monitoring" we now achieve around 61 MB/s.
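For what it's worth, a slightly more controlled version of the write test might look like this (file name and sizes are placeholders; conv=fdatasync makes dd include the flush-to-disk time in its reported rate, so the page cache inflates the number less):

```shell
# Write 256 MiB of zeros and force the data to disk before dd
# reports its throughput, so the figure is not mostly page cache.
dd if=/dev/zero of=/tmp/blubb bs=1M count=256 conv=fdatasync
rm -f /tmp/blubb
```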
This effect is quite surprising to me. How come adding more virtual CPUs to this guest apparently also means more disk performance inside the VM?
I don't have an explanation for this and would really appreciate your input. I want to understand what causes this performance increase, since I can reproduce this behaviour 100% of the time.
I will give a very rough idea/explanation.

In the OP's situation, besides measuring within the VM, the host should be looked at too.

In this case, we can assume the following are correct:

- "monitoring"'s I/O increases with more CPUs allocated to it. If host I/O were already maxed out, there would be no I/O performance gain.
- "bla" is not the limiting factor, as "monitoring"'s I/O performance improved without any changes to "bla".
Additional factor: what happens when more CPUs are assigned to "monitoring"?

When "monitoring" is allocated more CPUs, it gains more processing power, but it also gains more processing time for I/O. This has nothing to do with rsync, as it is a single-threaded program. It is the I/O layer utilizing the increased CPU power, or more precisely, the increased processing time.
If a CPU monitoring program (e.g. top) is used on "monitoring" during the test, it will show that not one but all CPUs' usage goes up, and so does %wa. %wa is time spent waiting on I/O. This performance increase will only happen when your host I/O is not maxed out.
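If top is inconvenient during an automated test run, one quick alternative (a sketch, assuming a Linux guest; the field position is from the proc(5) format of /proc/stat) is to read the cumulative iowait counter directly:

```shell
# The aggregate "cpu" line of /proc/stat lists, in order:
# user nice system idle iowait ...  -> iowait is field 6 ($1 is "cpu").
# Sampling this twice during the test shows iowait growing under load.
awk '/^cpu /{print "iowait ticks:", $6}' /proc/stat
```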
I cannot find KVM's CPU scheduling documented on the KVM site, but there is a blog mentioning that KVM uses CFS and cgroups for this.
In a nutshell: more CPU = more CPU time = more I/O time slots in a given period of time.
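The reason CFS is relevant: in KVM, each vCPU is just an ordinary thread of the qemu-kvm process on the host, so adding vCPUs adds schedulable threads and with them host CPU time. A hedged way to see this on the host (the process name qemu-kvm is an assumption and may differ per distribution):

```shell
# Each guest vCPU appears as one host-side thread of the qemu-kvm
# process; CFS schedules these threads like any others, so more
# vCPUs means more runnable threads per scheduling period.
pid=$(pgrep -o -f qemu-kvm || true)
if [ -n "$pid" ]; then
  echo "qemu-kvm pid $pid has $(ls /proc/"$pid"/task | wc -l) host threads"
else
  echo "no qemu-kvm process found on this host"
fi
```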