I'm having a unique mdadm issue.
I have an 8-disk RAID 5 (storage, not the boot disk).
I sold the computer, so I put a fresh install of ubuntu on it.
In the meantime, one of the drives died, so there are 8 drives but only 7 working.
For some reason, because of this, the computer hangs at boot waiting for the missing drive. I've never seen anything like this before!
I can drop to root recovery mode.
But because this new install never saw the dead drive, it's listed by drive number in mdadm but has no "/dev/sdX" associated with it, so there was no way for me to fail the disk.
In recovery mode I can stop the array, but I can't remove it after it's stopped.
Since we didn't care about saving the data on the RAID, I zeroed all the superblocks. Even that didn't seem to work.
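For reference, the cleanup I attempted looked roughly like this (the array name /dev/md0 and the drive letters are placeholders for my system; yours will differ):

```shell
# Stop the degraded array first
mdadm --stop /dev/md0

# Then wipe the md superblock from each surviving member disk
# (sda..sdg stand in for the 7 working drives here)
for d in /dev/sd[a-g]; do
    mdadm --zero-superblock "$d"
done
```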
Any ideas?
In your /etc/fstab file, column 6 ("pass") is described as follows: if this column is anything other than 0, the boot will stop until that disk (array) is recovered. This also frequently stops the boot when the array is degraded.
I'd suggest changing that to 0 and then try a manual recovery on your array.
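For example, if your array is mounted via a line like the one below, set the last field to 0 (the device path, mount point, and filesystem type here are assumptions; match them to your own entry):

```
# /etc/fstab — column 6 is the "pass" field
# <device>   <mount point>  <type>  <options>  <dump>  <pass>
/dev/md0     /storage       ext4    defaults   0       0
```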
As an aside, I'd also recommend waiting for a RAID array to be fully synced (which can take hours) before actually doing the installation.
Basically, during installation, after configuring the RAID array disks, exit the installer and drop to a command prompt, then check that your RAID is fully synced before continuing. This avoids the RAID being in an inconsistent state when the system reboots at the end of the installation.
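From that command prompt, a quick way to check sync progress is the kernel's md status file, or mdadm's detail view (the array name /dev/md0 is an assumption):

```shell
# Resync/rebuild progress for all md arrays
cat /proc/mdstat

# Or, for a single array, look at the "State" and
# "Rebuild Status" lines in the detailed report
mdadm --detail /dev/md0
```

When /proc/mdstat no longer shows a "resync" or "recovery" progress bar for the array, it's safe to let the installer finish and reboot.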