The performance of my setup is quite good (Geekbench, how it "feels", ...). Even disk throughput (libvirt on a raw LVM partition) is very close to the raw performance on the server. But IOPS are as low as 100-200 guest-side (compared to ~1000 host-side), on both Linux and Windows guests.
Is this something I just have to live with (KVM can't do any better), or am I doing something completely wrong?
The interesting thing is that I was able to influence throughput by changing the setup (qcow2 vs. raw image vs. raw partition) or the configuration (caching, I/O scheduling) and variations thereof, but IOPS stayed at the same low level across all those combinations.
## hardware
• Supermicro dual Xeon E5520 with 24 GB RAM
• 2x Seagate Constellation 1 TB (RAID1 on Adaptec 3405)
• 2x Seagate Cheetah (RAID1 on Adaptec 6405)
## software
• Ubuntu 11.10, kernel 3.0.0-13-server
• KVM/QEMU, emulator version 0.14.1 (qemu-kvm-0.14.1)
• benchmarked the disks from the host (bonnie++, hdparm) and from the guests (bonnie++, hdparm, HD Tune on Windows); a fio sketch for measuring IOPS directly follows below
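Not part of the benchmarks above, but for reference: fio can measure random-read IOPS directly, which is what seems to be limited here. A minimal sketch; the test file path and size are placeholders, adjust as needed:

# 4 KiB random reads, direct I/O, queue depth 32 — run on the host and inside a Linux guest and compare
fio --name=randread --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=32 --size=1G --runtime=60 \
    --filename=/tmp/fio-test --group_reporting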
## config
I tested several disk configurations; the current setup is:
### linux guests
(They just don't "need" high I/O performance, so I keep the more convenient disk image files.)
• qcow2 disk files on LVM on my Constellations
• qemu/ide
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/media/vm/images/mex/mex_root.qcow2'/>
  <target dev='hda' bus='ide'/>
  <address type='drive' controller='0' bus='0' unit='0'/>
</disk>
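For comparison only (not part of my current setup): the same qcow2 disk could be attached via virtio instead of IDE, assuming the guest kernel has the virtio_blk driver; the cache setting shown is just one possible choice:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/media/vm/images/mex/mex_root.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>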
### windows guests
(Running SQL Server and Remote Desktop Services, so here I definitely need good I/O performance.)
• raw LVM partitions on my Cheetahs
• virtio
<emulator>/usr/bin/kvm</emulator>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/Cheetah/mts'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
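To double-check that the running domain really got the virtio bus and cache='none', the live XML can be dumped; the domain name "mts" here is only assumed from the LV name:

virsh dumpxml mts | grep -A 5 '<disk'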
The optimal configuration is (usually) as follows:
• elevator=deadline on the host
• elevator=noop in the guests
• noatime,nodiratime in fstab wherever possible
Try setting "deadline" as the I/O scheduler for your host's disks before starting KVM. If you have an I/O-bound load, it might be your best choice, as this IBM paper suggests.
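As an illustration (device names are examples, adjust to your layout), the scheduler can also be switched at runtime via sysfs instead of the elevator= boot parameter, and noatime goes into fstab:

# on the host, for the disk backing the guest storage, before starting KVM
echo deadline > /sys/block/sda/queue/scheduler
# inside a Linux guest
echo noop > /sys/block/vda/queue/scheduler
# example fstab entry in the guest
/dev/vda1  /  ext4  defaults,noatime,nodiratime  0  1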