I'm having some serious disk performance problems while setting up a KVM guest. Using a simple dd
test, the partition on the host that the qcow2 images reside on (a mirrored RAID array) writes at over 120MB/s, while my guest gets writes ranging from 0.5 to 3MB/s.
- The guest is configured with a couple of CPUs and 4G of RAM and isn't currently running anything else; it's a completely minimal install at the moment.
- Performance is tested using:
    time dd if=/dev/zero of=/tmp/test oflag=direct bs=64k count=16000
- The guest is configured to use virtio, but this doesn't appear to make a difference to the performance.
- The host partitions are 4kb aligned (and performance is fine on the host, anyway).
- Using writeback caching on the disks increases the reported performance massively, but I'd prefer not to use it; even without it performance should be far better than this.
- Host and guest are both running Ubuntu 12.04 LTS, which comes with qemu-kvm 1.0+noroms-0ubuntu13 and libvirt 0.9.8-2ubuntu17.1.
- Host has the deadline IO scheduler enabled and the guest has noop (a quick way to verify this is sketched below).
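If you want to double-check which scheduler is in use, it is exposed through sysfs; a quick sketch (the device names sda and vda are assumptions, substitute your own disks):

    # On the host: the scheduler shown in brackets is the active one
    cat /sys/block/sda/queue/scheduler
    # e.g. noop [deadline] cfq

    # In the guest: switch a virtio disk to noop
    echo noop > /sys/block/vda/queue/scheduler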
There are plenty of guides out there on tweaking KVM performance, and I'll get to them eventually, but I should be getting vastly better performance than this already, so it seems like something is very wrong.
Update 1
And suddenly when I go back and test now, it's 26.6 MB/s; this is more like what I expected with qcow2. I'll leave the question up in case anyone has any ideas as to what might have been the problem (and in case it mysteriously returns).
Update 2
I stopped worrying about qcow2 performance and just cut over to LVM on top of RAID1 with raw images, still using virtio but setting cache='none' and io='native' on the disk drive. Write performance is now approx. 135MB/s using the same basic test as above, so there doesn't seem to be much point in figuring out what the problem was when it can be so easily worked around entirely.
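For anyone wanting to reproduce this, the libvirt disk definition ends up looking roughly like the following; this is a sketch only, and the volume group and LV names (vg0/guest-root) are placeholders:

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/vg0/guest-root'/>
      <target dev='vda' bus='virtio'/>
    </disk>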
Well, yeah, qcow2 files aren't designed for blazingly fast performance. You'll get much better luck out of raw partitions (or, preferably, LVs).
How to achieve top performance with QCOW2:
The most important option is preallocation, which gives a nice boost according to the qcow2 developers. It is almost on par with LVM now! Note that this is usually enabled in modern (Fedora 25+) Linux distros.
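For example, preallocation can be requested when the image is created; the path and size below are placeholders:

    qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/guest.qcow2 20G

Newer qemu-img versions also accept preallocation=falloc or preallocation=full, which trade a longer creation time for better first-write performance.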
You can also use the unsafe cache mode if this is not a production instance (this is dangerous and not recommended, only good for testing):
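In libvirt that means setting cache='unsafe' on the disk's driver element; a minimal sketch, assuming a qcow2 image:

    <driver name='qemu' type='qcow2' cache='unsafe'/>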
Some users report that this configuration beats the LVM/unsafe configuration in some tests.
All of these parameters require a recent QEMU (1.5+)! Again, most modern distros have this.
I achieved great results for a qcow2 image with this setting:
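A sketch of the corresponding libvirt driver line (the qcow2 type is an assumption based on the context above):

    <driver name='qemu' type='qcow2' cache='none' io='native'/>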
This bypasses the host's page cache for the image and enables native AIO (asynchronous IO). Running your dd command gave me 177MB/s on the host and 155MB/s in the guest. The image is placed on the same LVM volume where the host's test was done. My qemu-kvm version is 1.0+noroms-0ubuntu14.8 and the kernel is 3.2.0-41-generic from stock Ubuntu 12.04.2 LTS.
I experienced exactly the same issue. Within a RHEL7 virtual machine I have LIO iSCSI target software to which other machines connect. As the underlying storage (backstore) for my iSCSI LUNs I initially used LVM, but then switched to file-based images.
Long story short: when the backing storage is attached to a virtio_blk (vda, vdb, etc.) storage controller, performance from the iSCSI client connecting to the iSCSI target was around 20 IOPS in my environment, with throughput (depending on IO size) of around 2-3 MiB/s. I changed the virtual disk controller within the virtual machine to SCSI and I'm able to get 1000+ IOPS and 100+ MiB/s of throughput from my iSCSI clients.
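The original post doesn't include the exact XML, but assuming the switch was to the virtio-scsi controller model, the change looks roughly like this (device and file names are placeholders):

    <controller type='scsi' model='virtio-scsi'/>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/backstore.img'/>
      <target dev='sda' bus='scsi'/>
    </disk>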
If you're running your VMs with a single command, you can use the following arguments:
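The exact arguments weren't preserved here; one plausible combination (virtio plus writeback caching, with a placeholder image path and the remaining options elided) would be:

    kvm -drive file=/path/to/guest.qcow2,if=virtio,cache=writeback ...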
It got me from 3MB/s to 70MB/s
On old QEMU/KVM versions, the qcow2 backend was very slow when not preallocated, more so if used without writeback cache enabled. See here for more information.
On more recent QEMU versions, qcow2 files are much faster, even when using no preallocation (or metadata-only preallocation). Still, LVM volumes remain faster.
A note on the cache modes: writeback cache is the preferred mode, unless using a guest with no (or disabled) support for disk cache flushes/barriers. In practice, Win2000+ guests and any Linux guest with EXT4, XFS or EXT3+barrier mount options are fine. On the other hand, cache=unsafe should never be used on production machines, as cache flushes are not propagated to the host system. An unexpected host shutdown can literally destroy the guest's filesystem.
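As an illustration of the barrier point: ext4 and XFS enable write barriers by default, while on ext3 they typically have to be turned on explicitly via a mount option; a sketch of an /etc/fstab entry inside the guest:

    /dev/vda1  /  ext3  defaults,barrier=1  0  1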