I'm not sure why, but after I restarted my EC2 instance, /dev/md0 didn't start as it normally would. When I looked at what's available under /dev/md*, instead of /dev/md0 there was a device named /dev/md127. I updated fstab to point at the new device and was able to mount it successfully. Looking at /proc/mdstat, it is using the correct underlying ephemeral volumes that the RAID was originally created on:
[root@ip-10-0-1-21 ~]# cat /proc/mdstat
Personalities : [raid0]
md127 : active raid0 xvdc1[1] xvdb1[0]
870336512 blocks super 1.2 512k chunks
unused devices: <none>
Yet when I run mdadm --detail --scan, a different device name shows up:
[root@ip-10-0-1-21 ~]# mdadm --detail --scan
ARRAY /dev/md/ip-10-0-1-21:0 metadata=1.2 name=ip-10-0-1-21:0 UUID=543098de:1e9dc96e:4ce2444c:934bdfdf
Is it normal for the device name to change? Do I have to update /etc/fstab with the new device name? Is it critical that I regenerate /etc/mdadm.conf with the new information? Is this device's name /dev/md127 or /dev/md/ip-10-0-1-21:0? I suppose I'm not sure what's going on here. Some insight would be great.
Software RAIDs with a newer superblock have a volume naming scheme that goes beyond just /dev/mdN. They include a name component of the form homehost:volname. This makes it easier to disconnect an array and re-attach it to another system without conflicts.
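The name is set when the array is created. As a rough sketch (the member devices match your mdstat output, but the options shown are an assumption, not necessarily the command that was originally used), something like this would have produced the name ip-10-0-1-21:0:

# --name sets the volname half of homehost:volname in the 1.2 superblock;
# the homehost half defaults to this machine's hostname.
mdadm --create /dev/md0 --level=0 --raid-devices=2 --name=0 /dev/xvdb1 /dev/xvdc1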
If udev is set up properly, there should be a device named /dev/md/ip-10-0-1-21:0, and that is what you should be using in your /etc/fstab
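A sketch of the corresponding /etc/fstab line (the mount point /mnt/ephemeral, the filesystem type ext4, and the mount options are assumptions; substitute whatever you actually use):

# mount the array by its stable /dev/md/ name, not /dev/md127
/dev/md/ip-10-0-1-21:0  /mnt/ephemeral  ext4  defaults,nofail  0  0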
for newer-style arrays. The /dev/md/ device is created for each array while it is running. The /dev/md127 entry just provides a name so that older tools and methods can also use the array. You generally should not use that name for your mount point, since those names are dynamically allocated at startup: if you add another array tomorrow, the device named /dev/md127 today might be /dev/md126 instead.
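To answer the mdadm.conf part of the question: recording the array there is what pins the assembled name, so assembly doesn't fall back to the dynamic md127-style numbering. A sketch, assuming the /etc/mdadm.conf location used by Amazon Linux and other RHEL-like distributions:

# append the current array definition so it assembles under the
# same name on every boot
mdadm --detail --scan >> /etc/mdadm.conf

# if your distribution assembles the RAID from the initramfs, rebuild
# it so early boot sees the updated config (tool and path vary)
dracut -f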