I'm trying to set up ocfs2 on Ubuntu Oneiric Server (3.0 kernel). I'm sharing an LV from a VG on the host. The host OS is Ubuntu Lucid (also on a 3.0 kernel).
I can share the ocfs2 partition on the volume fine between two KVM guests, but I can't share the partition between the host OS and a VM.
I can mount the partition on the host OS without problems, but as soon as I try to mount it in one of the KVM guests I get:
(o2hb-A72309E287,1395,1):o2hb_check_last_timestamp:576 ERROR: Another node is heartbeating on device (dm-4): expected(2:0xb88208e59655bc4f, 0x4f2d4275), ondisk(0:0x0, 0x0)
[22085.518632] ocfs2: Unmounting device (252,4) on (node 2)
in syslog.
The volumes are defined in the VM xml files as:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/datastore/test'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
And the storage pool is defined as:
<pool type="logical">
  <name>datastore</name>
  <target>
    <path>/dev/datastore</path>
  </target>
</pool>
I created the device node for the partition on this LV with:
kpartx -av /dev/datastore/test
This created /dev/mapper/datastore-test1, which is the device I then try to mount.
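For the record, the host-side mount attempt looks roughly like this; /mnt/shared is just an example mount point:

# mount the device mapper node created by kpartx (mount point is a placeholder)
mkdir -p /mnt/shared
mount -t ocfs2 /dev/mapper/datastore-test1 /mnt/shared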
Is it fundamentally impossible to share an ocfs2 volume between a KVM VM and the host OS, or am I doing something wrong?
I'll answer my own question in case somebody comes here with the same problem:
All my ocfs2 and VM configuration was OK. The problem was that I had created the ocfs2 filesystem directly on /dev/vda (in the VM).
The result is that the VM sees a physical disk containing an ocfs2 filesystem, while the host OS sees an LVM logical volume containing an ocfs2 filesystem, so the two sides disagree about what is on the device.
The solution is to turn /dev/vda into a PV inside the VM, create a VG and an LV on top of that, and then format the LV as ocfs2.
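Roughly, the steps inside the VM look like this (the volume label "shared" is just an example; the VG/LV names match the layout below):

# inside the VM: turn the whole virtual disk into an LVM physical volume
pvcreate /dev/vda
# create a VG and an LV on top of it
vgcreate vmtest /dev/vda
lvcreate -n vmvolume -l 100%FREE vmtest
# format the LV as ocfs2
mkfs.ocfs2 -L shared /dev/vmtest/vmvolume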
This nested VG is visible on the host OS, and the LV can be mounted there as well.
So the VG layout is:
host OS: VG "datastore" -> LV "test" -> exposed to the VM as /dev/vda -> PV -> VG "vmtest" -> LV "vmvolume" (ocfs2)
On both the host OS and in the VM, the volume can then be mounted as /dev/vmtest/vmvolume.
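On the host side, activating and mounting the nested VG looks roughly like this (again, /mnt/shared is just an example mount point):

# on the host: rescan so LVM picks up the VG nested inside the guest's LV
vgscan
vgchange -ay vmtest
# mount the ocfs2 filesystem from the nested LV
mount -t ocfs2 /dev/vmtest/vmvolume /mnt/shared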