We have a two-node cluster using DRBD 8.2 on CentOS 5.2 64-bit. The cluster runs a few VMs on top of Xen 3.2.1; here is the configuration for an Ubuntu Jaunty VM:
name = 'dev'
bootloader = '/usr/bin/pygrub'
memory = '512'
vif = [ 'ip=192.168.1.217,mac=00:16:3E:CD:60:80' ]
disk = [ 'phy:/dev/drbd24,xvda1,w',
         'phy:/dev/drbd25,xvda2,w' ]
As you can see, the disks are specified as "phy:" devices, so pygrub knows nothing about the underlying DRBD device...
So my problem is that even though the VM boots just fine, Xen doesn't handle the state of the DRBD device. As a result, when for some reason the device ends up in a Secondary/Secondary state, the VM won't boot, and I have to manually make one node Primary.
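For context, the manual fix amounts to promoting the resource by hand on one node, roughly like this (the resource name "dev" is just a guess on my part):

```
# Run on the node that should own the disks; "dev" is a placeholder resource name.
drbdadm primary dev
```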
I read that starting with Xen 3.3 pygrub understands the "drbd:" disk specification, and I think that would fix my problem, but I can't upgrade Xen at the moment... Is there a workaround? For example, could I use the 3.3 version of pygrub?
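For reference, with a DRBD-aware Xen the disk lines would name the DRBD resources instead of the raw devices, something like this (the resource names here are hypothetical, and this relies on DRBD's block-drbd helper script being installed):

```
disk = [ 'drbd:dev-disk,xvda1,w',
         'drbd:dev-swap,xvda2,w' ]
```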
Thanks!
Why don't you make sure your DRBD resource is Primary before starting your DomU?
I guess you are using Heartbeat as your cluster software?
If so, there should be a resource type "drbddisk" in /etc/ha.d/resource.d/.
Use that resource before your Xen resource and startup will work fine.
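A minimal haresources line along those lines might look like this (the node name, resource name, and the Xen resource script are all placeholders; the latter would be a script in /etc/ha.d/resource.d/ that runs xm create/xm shutdown):

```
# /etc/ha.d/haresources -- sketch; drbddisk promotes the DRBD resource first
node1 drbddisk::dev MyXenDomU
```

Heartbeat starts resources left to right, so the disk is Primary before the DomU comes up, and stops them right to left on failover.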
That's what I did with SLES 10 up to SP2 (using the old DRBD 0.7). With the switch to SLES 10 SP3 I moved to my own build of DRBD 8.3.5 - I think that contained the integration with Xen, so the drbd disk type was possible from then on.
BTW - you don't have to use pygrub either. I simulated the way SuSE does generic linking in /boot by creating a symlink to the newest initrd/vmlinuz with a generic name in my CentOS DomUs. That can be used in the Dom0 config the traditional way.
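A sketch of that approach - the kernel version numbers are made up, use whatever your DomU actually runs:

```shell
# Inside the DomU's /boot; refresh the links after each kernel update.
ln -sf vmlinuz-2.6.18-92.el5xen vmlinuz-generic
ln -sf initrd-2.6.18-92.el5xen.img initrd-generic
```

The Dom0 config then drops the bootloader line in favour of kernel/ramdisk entries pointing at those generic names (the files must be readable from Dom0), e.g. kernel = '.../vmlinuz-generic', ramdisk = '.../initrd-generic', root = '/dev/xvda1 ro'.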
Kind regards
Nils
As a workaround I'm using the DRBD "become-primary-on" directive.
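For reference, that directive goes in the resource's startup section; a sketch, with "dev" and "node1" as placeholder names:

```
# /etc/drbd.conf (DRBD 8.x)
resource dev {
  startup {
    become-primary-on node1;   # or "both", which also requires allow-two-primaries
  }
  # ... disk/net/on-host sections as before ...
}
```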