Situation
- Xen 4.0.1 dom0 (Debian Squeeze)
- domUs all with LVs as disks:
disk = [ 'phy:/dev/vg-00/domu-swap,xvda1,w', 'phy:/dev/vg-00/domu-disk,xvda2,w' ]
- one VG (`vg-00`) with 2 PVs
Goal
- Move all LVs from one PV to the other (pvmove) and remove the "empty" PV (vgreduce)
- Not disturbing any running machine (domU or dom0)
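A sketch of the sequence I have in mind (PV device names here are examples, not my real ones):

```shell
# Placeholder device names; substitute the real PVs.
pvmove /dev/sdb1                # move every allocated extent off the PV to retire
pvs -o pv_name,pv_size,pv_used  # verify the PV now shows 0 used
vgreduce vg-00 /dev/sdb1        # drop the now-empty PV from the VG
pvremove /dev/sdb1              # optionally clear the LVM label
```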
Problem(s)
When I start pvmove (I even tried to `ionice -c3` it), my domUs see very high load or even get stuck. I think this happens while pvmove is moving the extents of a domU's LV from one PV to the other. I also saw a domU really freak out and fire the OOM killer. Long story short: I had to interrupt the procedure (`pvmove --abort`) because my domU(s) became unusable, major server components were killed, or they even died/froze completely.
Questions
I'm aware of higher IO load during the transition and can cope with that. But even with `ionice -c3`, the IO load is so high that tasks inside the domU get blocked. Why isn't `ionice` working here? If I understand this correctly, all IO is done by dom0 (by the blkback driver), so dom0 should see all IO done by every dom(0|U) and should be able to deprioritize the IO of my reniced pvmove process. Are my assumptions wrong here?

Why does my domU fire the OOM killer? How can this process affect the domU's memory? BTW: while the domUs go crazy, my dom0 works fine. High IO, but that's expected.
Is there any way to remove one PV without the hassle above? Would it be better to shut down/pause one domU after the other and pvmove only that machine's LVs?
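The shutdown/pause variant I have in mind, sketched with placeholder domain and LV names:

```shell
# Placeholder names for one guest and its LVs.
xm pause domu1                            # quiesce the guest so its disks go idle
pvmove -n domu-disk /dev/sdb1 /dev/sda1   # move only this LV's extents
pvmove -n domu-swap /dev/sdb1 /dev/sda1
xm unpause domu1
# ...repeat for each domU, then vgreduce the emptied PV.
```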
Thank you in advance for any input. I'd even be glad to get some debugging ideas!
It should work, but it seems that Xen imposes some kind of exclusive locking in "w" mode. Perhaps that locking is less strict in "w!" (or was it "!w"?) mode, which should allow write requests from more than one source.
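Guessing at the syntax, the disk line from the question would then read (untested; I'm not certain of the exact mode string):

```
disk = [ 'phy:/dev/vg-00/domu-swap,xvda1,w!', 'phy:/dev/vg-00/domu-disk,xvda2,w!' ]
```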
Which memory consumption goes up before the killer kicks in inside the domU? Buffer memory?
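Also worth checking: the idle class (`-c3`) of `ionice` is only honored by the CFQ IO scheduler, so if dom0's disks use something else (deadline, noop) the hint is silently ignored. The device name below is an example:

```shell
# The bracketed entry is the active scheduler; ionice -c3 needs [cfq].
cat /sys/block/sda/queue/scheduler
```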