I have three disks which used to hold an MD RAID5 array. I have since removed (or so I thought) this array and created partitions for btrfs and swap space. On rebooting the machine, MD still binds the devices that used to hold the old array, causing the new filesystem to fail to mount.
It was suggested to me that the old superblocks of the RAID array might have been left behind, causing MD to think it is a real array and thus bind the disks. The suggested solution was to use mdadm --zero-superblock to clear the superblock on the affected disks. However, I don't really know what this does to the disk. Since these disks now hold partitions, I don't want to start zeroing parts of them blindly.
So what procedure should I follow to safely clear the MD superblocks without damaging the other partitions and file systems on the drives?
This question essentially asks the same thing, but there isn't a clear answer as to whether doing mdadm --zero-superblock on a repartitioned device is actually supposed to be safe: mdadm superblock hiding/shadowing partition
https://raid.wiki.kernel.org/index.php/RAID_superblock_formats
It may already be too late, and it might be unsafe to use
--zero-superblock
because we don't know whether there is any data there or not. You would have to shrink your current partition by 128 KiB from the end of the ex-RAID partition, wipe that area, then grow the partition back.
Other option 1: write large files until they fill the entire disk; this will overwrite the RAID superblocks so they will no longer be recognized by mdadm.
Other option 2: similar to 1: https://unix.stackexchange.com/questions/44234/clear-unused-space-with-zeros-ext3-ext4
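The wipe step of the shrink-wipe-grow approach could look roughly like this. This is only a sketch under assumptions: wipe_tail is a made-up helper name, the 128 KiB figure is the zone mentioned above, and on a real device you must have shrunk the filesystem/partition away from that area first.

```shell
# Sketch only: zero the last 128 KiB of a target. On a real block device
# the size comes from `blockdev --getsize64`; the `stat` fallback is just
# so the same function also works on an image file. Assumes the size is a
# multiple of 512 bytes (always true for block devices).
wipe_tail() {
    target=$1
    size=$(blockdev --getsize64 "$target" 2>/dev/null || stat -c %s "$target")
    start=$(( size - 131072 ))            # 128 KiB = 131072 bytes
    # conv=notrunc leaves the rest of the target untouched
    dd if=/dev/zero of="$target" bs=512 seek=$(( start / 512 )) \
       count=256 conv=notrunc 2>/dev/null
}
```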
wipefs --all /dev/sd[4ppropr14t3][123]
(Of course, set up the glob for your drives/partitions!)

This is how I figured this out (it might be quite specific to my case, but I'll try to keep it general where I can).
(When I talk about devices, what I mean are the devices the raid volume is composed of, not the raid array itself)
I used
mdadm -E $DEVICE
to figure out which metadata format the array was using. I then went to raid.wiki.kernel.org to find some information about the superblock format; in my case it was version 0.90.
This format stores the superblock towards the end of the device, and this is where my situation comes in. My old array was created directly on the drives, with no partitioning, so I knew the superblock had to be at the very end of each device. My new partitioning put a swap partition at the end, so there was not much data to lose where the superblock was located.
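For the 0.90 format, the superblock's location can be computed from the device size alone: it sits in the last full 64 KiB-aligned 64 KiB block, i.e. between 64 and 128 KiB from the end. A rough sketch (sb090_offset is a made-up helper, and the commented device path is a placeholder):

```shell
# Rough sketch, assuming v0.90 metadata: the superblock occupies the last
# 64 KiB-aligned 64 KiB block of the device.
sb090_offset() {
    size=$1                                    # device size in bytes
    echo $(( (size / 65536 - 1) * 65536 ))
}

# Example usage on a real device (placeholder path, run as root):
# DEV=/dev/sdX
# OFF=$(sb090_offset "$(blockdev --getsize64 "$DEV")")
# dd if="$DEV" bs=65536 skip=$(( OFF / 65536 )) count=1 2>/dev/null | hexdump -C | head
```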
I did some reading around, and the conclusion I reached was that
mdadm --zero-superblock
only zeroes out the superblock itself, so it should be safe in my case. I went ahead and removed the superblocks on all three devices:
mdadm --zero-superblock $DEVICE
Repeat this line as required for each device.
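To double-check afterwards, one can look for the v0.90 magic number (0xa92b4efc, stored little-endian) at the offset described above. This is a hedged sketch: has_md090_magic is a made-up helper, and the stat fallback only exists so it also works on an image file.

```shell
# Sketch: report whether a v0.90 MD superblock magic is still present.
# The magic 0xa92b4efc is stored little-endian, so the raw on-disk bytes
# at the start of the superblock are fc 4e 2b a9.
has_md090_magic() {
    dev=$1
    size=$(blockdev --getsize64 "$dev" 2>/dev/null || stat -c %s "$dev")
    off=$(( (size / 65536 - 1) * 65536 ))
    bytes=$(dd if="$dev" bs=65536 skip=$(( off / 65536 )) count=1 2>/dev/null \
            | od -An -tx1 -N4 | tr -d ' \n')
    [ "$bytes" = "fc4e2ba9" ]
}

# After mdadm --zero-superblock, this should print nothing:
# has_md090_magic /dev/sdX && echo "superblock magic still present on /dev/sdX"
```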
Some additional comments/speculation:
Generally, if the space were needed by the new partitioning/filesystems, it should have been overwritten already. Thus, if the superblock is still there, zeroing it shouldn't hurt the partitioning or filesystems. I am, however, not sure how MD handles the case where the superblock has already been overwritten on some of the devices but not all. The man page says that -f is needed to zero out a superblock that looks invalid, so keep that in mind.