Yesterday one of the HDDs in my LVM failed (SMART error).
The machine has the following setup:
- 4 HDDs
- 1 RAID10 partition (system, 4*50 GB)
- 1 plain partition (/boot, 200 MB, on sda)
- 1 LVM2 partition (data, 4*~850 GB)
- Ubuntu 10.04 Server (headless)
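For reference, the layout can be inspected from the running system with something like the following (plain status commands, nothing is changed):

    # RAID status of the system array
    cat /proc/mdstat
    # partition layout of the four disks
    fdisk -l
    # LVM view: physical volumes, volume groups, logical volumes
    pvs
    vgs
    lvs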
Now my LVM won't mount anymore; on bootup Ubuntu asks me to skip or recover manually. When I press S the system starts, but without my LVM getting mounted.
My system partition does not seem to be affected (/proc/mdstat looks as usual) and /boot works fine, too.
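To confirm that it is really the physical volume on the failed disk that is the problem, something like this should show it as missing (vg0 stands in for whatever the volume group is actually called; --partial is only useful for getting at logical volumes that do not depend on the missing PV):

    # the failed PV usually shows up as "unknown device" here
    pvs
    vgs
    # try to activate the volume group despite the missing PV
    vgchange -ay --partial vg0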
What I will try now:
- buy a new HDD
- integrate the new HDD into my LVM
- try to remove the sda part of the LVM (copy its contents over to the new sde, or however LVM wants it done)
- do the RAID stuff (I think I'll find out how to do that, otherwise I'll ask a separate question; see the mdadm sketch after this list)
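For the RAID part, a rough sketch of what replacing a broken RAID10 member usually looks like with mdadm (md0 and the partition names are assumptions based on the layout above, adjust to the real device names):

    # mark the broken member as failed and drop it from the array
    mdadm --manage /dev/md0 --fail /dev/sda2
    mdadm --manage /dev/md0 --remove /dev/sda2
    # once the new disk is partitioned, add its RAID partition
    mdadm --manage /dev/md0 --add /dev/sde2
    # follow the resync
    cat /proc/mdstat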
Now my problems:
- How can I remove sda from the LVM ("remove" meaning: copy the contents elsewhere and mark the partition as not in use, so I can unplug the drive)?
- If I am not able to remove the partition normally, are there any tools to recover the files on this partition, so I could manually copy them to the "new" LVM?
Thank you for your help
EDIT:
Separated the solution from the question.
I'm thinking pvmove is the command you are looking for... details here: http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
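In outline, the "removing a disk" recipe from that HOWTO looks like this (assuming the new disk's LVM partition is /dev/sde3, the old one is /dev/sda3 and the volume group is called vg0; note that pvmove reads every allocated extent, so it can still fail on unreadable sectors of the dying disk):

    # turn the new partition into a PV and add it to the volume group
    pvcreate /dev/sde3
    vgextend vg0 /dev/sde3
    # move all extents off the failing PV onto the new one
    pvmove /dev/sda3 /dev/sde3
    # take the old PV out of the VG so the drive can be unplugged
    vgreduce vg0 /dev/sda3
    pvremove /dev/sda3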
It's not entirely clear to me whether your LVM is on top of RAID or not. If not, then you're flat out of luck getting any data off of the LVM.
The computer works now. Here are my detailed steps:
My sda (faulty drive) partitions looked like this:
sda1: /boot
sda2: raid10 member (system)
sda3: lvm member
Now my sde partitions looked as follows:
sde1: /boot
sde2: raid partition (not initialized)
sde3: lvm partition (initialized, lvm worked again)
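A sketch of how a replacement disk can be brought into roughly that state (the filesystem type, the sfdisk/grub-install steps and the device names are assumptions, not necessarily the exact commands used here):

    # copy the MBR partition table from the old disk to the new one
    sfdisk -d /dev/sda | sfdisk /dev/sde
    # create a filesystem for the /boot copy (type is an assumption,
    # match whatever the old /boot uses) and copy the files over
    mkfs.ext2 /dev/sde1
    mkdir -p /mnt/newboot
    mount /dev/sde1 /mnt/newboot
    cp -a /boot/. /mnt/newboot/
    umount /mnt/newboot
    # install GRUB to the new disk's MBR (may need extra options so it
    # points at the copy on sde1)
    grub-install /dev/sde
    # the LVM part is the pvcreate/vgextend/pvmove/vgreduce sequence shown further up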
I shut down the computer, swapped the hard disks (removed the old sda, so sde would become the new sda) and rebooted.
EVERYTHING worked! I did not even have to use a live CD to fix the bootloader or anything else; miraculously, sda2 was recognised as a RAID10 member and was automatically initialized (resynced)!
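The automatic resync can be verified with the usual status commands (md0 is again an assumption; had it not been picked up automatically, the --add step from the mdadm sketch above would have applied):

    # watch the RAID10 array state and resync progress
    cat /proc/mdstat
    mdadm --detail /dev/md0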