Hi, I'm currently setting up a new server using libvirt+KVM. Afterwards there should be about 5 virtual machines running on this server (plus some testing machines).
The storage sits on a RAID-5 device that is set up using LVM, and KVM currently runs on some LVM logical volumes.
The question is: is there any drawback to using LVM again (a second time) inside the virtual machines to partition the space? The stack would be: hard disk -> RAID controller -> LVM on the physical server -> one logical volume per VM -> LVM inside each VM -> several logical volumes inside each VM.
Are there any other possibilities if I want dynamic partitions inside my virtual machines?
Thanks
LVM's performance overhead is trivial, and using it twice won't change that. Your RAID-5 device is going to have a much greater impact than LVM.
A blog article at http://hyperthese.net/post/kvmized-debian-on-lvm/ suggests creating LVM logical volumes on the host (the physical server) and putting filesystems directly on them, without partitions, before creating the virtual machines.
I tried it out, and long story short: it seemed to work, and I was able to enlarge an LV and the filesystem on it.
Here's the long story, i.e. what I did:
Created LVM logical volumes for root, var and swap for the VM, on the host (running Ubuntu 10.04):
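Roughly like the following; the volume group name vg0, the VM name vm1, and the sizes are just placeholders:

    # one LV per guest filesystem, plus one for swap (names/sizes are examples)
    lvcreate -L 10G -n vm1-root vg0
    lvcreate -L 5G -n vm1-var vg0
    lvcreate -L 1G -n vm1-swap vg0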
Created file systems and swap on LVs:
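Along these lines, with the same placeholder names:

    # filesystems go directly on the LVs, no partition table
    mkfs.ext3 /dev/vg0/vm1-root
    mkfs.ext3 /dev/vg0/vm1-var
    mkswap /dev/vg0/vm1-swap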
Created the VM:
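Something like this; the ISO path and resource sizes are placeholders, and bus=virtio matches the vda/vdb/vdc names the installer saw later:

    virt-install --name vm1 --ram 512 \
        --disk path=/dev/vg0/vm1-root,bus=virtio \
        --disk path=/dev/vg0/vm1-var,bus=virtio \
        --disk path=/dev/vg0/vm1-swap,bus=virtio \
        --cdrom /path/to/ubuntu-10.04-server.iso \
        --vnc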
Then I connected to the new VM using virt-viewer and the Ubuntu installer was there waiting. I chose the mode 'Install a minimal virtual machine' (F4 key).
At the partitioning phase I chose manual partitioning. The installer found the virtual disks vda, vdb, and vdc and recognized the first two as containing ext3 and the last as swap. I selected the ext3 partitions and told the installer to use them as ext3 partitions (the default was "do not use"), chose "no, keep the existing data", and set the mount points to / for the first one and /var for the second one. Swap was set up correctly by default. Then I chose to install GRUB on the first disk.
I got the VM up and running fine. fdisk shows vda as having an empty partition table, and vdb and vdc as not having a valid partition table. I don't know whether having or not having a partition table is a problem; there's some discussion about it at https://unix.stackexchange.com/questions/5162/how-to-install-grub-to-a-whole-ext4-disk-without-partition-table .
Finally I tried resizing the var disk. First, on the host:
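Roughly (again, the names and the size are examples):

    # grow the guest's var disk by 1 GB on the host
    lvextend -L +1G /dev/vg0/vm1-var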
Then I rebooted the VM and resized the file system on the VM:
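That is, inside the VM (vdb is the var disk, as noted above):

    # grow the ext3 filesystem to fill the enlarged virtual disk
    resize2fs /dev/vdb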
And it resized it fine.
I don't know if this is a good way to do it or not, but so far it seems to work. Any comments?
I think using LVM a second time won't be good for performance, but apart from a network filesystem I can't think of another stable solution with dynamic partitions (you could try ZFS with FUSE, or btrfs, but they are not production-ready).
If you want to keep the flexibility of LVM without running it inside the VMs, you can create an LV on the host for each partition of each VM.
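For example, assuming a volume group vg0 and a guest named vm1 (all names and sizes are hypothetical); each LV is then attached to the guest as its own virtual disk:

    lvcreate -L 10G -n vm1-root vg0
    lvcreate -L 5G -n vm1-home vg0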
In general, using more parts means more things can break. I'd advise rethinking what you're doing so that LVM isn't needed twice. Perhaps use something like OpenVZ instead of KVM; OpenVZ supports resizing virtual partitions quickly and on the fly.
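For example (the container ID 101 and the limits are made up; the two values are the soft and hard disk quotas):

    # resize an OpenVZ container's disk space on the fly
    vzctl set 101 --diskspace 20G:22G --save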