One of the HDDs in my server's RAID config failed, so I took it out of the array and had the data center hot-swap it. They've done that, but now the new drive is `/dev/sdc` rather than `/dev/sda`. I suspect that if I reboot the server, it will be `/dev/sda` again, so I'm hesitant to add it back to the array as `/dev/sdc` because I don't want to lay a trap for myself to fall into on the next reboot. I'd just as soon not reboot the server if I don't need to (if I do need to, well, too bad for me).
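For reference, pulling the failed member out went something like this (I'm reconstructing the partition name from memory, so treat it as illustrative):

```
# mark the dying member failed, then remove it from the array
# (the member was the first partition on the old disk, i.e. /dev/sda1)
mdadm /dev/md0 --fail /dev/sda1
mdadm /dev/md0 --remove /dev/sda1
```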
If I add it as `/dev/sdc`, will there be a problem on reboot? Or is there some way to change the device name from `/dev/sdc` to `/dev/sda` without rebooting?
This is on Ubuntu 10.04 LTS. It's an `md` array ("Linux Software RAID"), where currently one of the devices (there are a couple of them) looks like this ("degraded" because I've removed the old `/dev/sda` from it):
```
# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Oct 11 21:07:54 2009
     Raid Level : raid1
     Array Size : 97536 (95.27 MiB 99.88 MB)
  Used Dev Size : 97536 (95.27 MiB 99.88 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Jun 30 09:31:16 2011
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 496be7a5:ab9177ed:7792c71e:7dc17aa4
         Events : 0.112

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
```
It's fine to go ahead and add it as `/dev/sdc`.
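A sketch of the re-add, assuming the new disk still needs its partition table copied from the survivor and that the member is the first partition (both are assumptions about your layout):

```
# copy the partition table from the surviving disk to the new one
# (only appropriate for MBR-partitioned disks)
sfdisk -d /dev/sdb | sfdisk /dev/sdc

# add the new partition; md begins resyncing the mirror immediately
mdadm /dev/md0 --add /dev/sdc1
```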
Reading through the kernel `md` documentation, if the name changes on reboot, it doesn't matter. (Good design, that.) Here's why: at boot, `md` identifies array members by the UUID stored in each member's RAID superblock, not by device name, so a member is found and assembled into its array wherever it shows up. Although I didn't have `md` compiled into the kernel, my setup does the same thing, because it auto-loads `mdadm`, and my `mdadm.conf` is set up to scan all partitions for a superblock just like the kernel would.
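A minimal sketch of the relevant `mdadm.conf` lines, assuming the stock Debian/Ubuntu layout (the `ARRAY` line uses the UUID from the `--detail` output above):

```
# /etc/mdadm/mdadm.conf (sketch, not a literal copy of my file)
# scan every partition for md superblocks, like in-kernel autodetect
DEVICE partitions

# arrays are matched by superblock UUID, so member names don't matter
ARRAY /dev/md0 UUID=496be7a5:ab9177ed:7792c71e:7dc17aa4
```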
So it's fine to rebuild the array with `/dev/sdc`; the name probably will change to `/dev/sda` on reboot, but that won't cause any trouble if `md` is set up as above.
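To double-check after the next reboot (assuming the disk does come back as `/dev/sda`), the superblock UUID is the thing to compare, not the device name:

```
# the member's superblock should carry the array UUID from
# mdadm --detail, whatever name the disk boots up under
mdadm --examine /dev/sda1 | grep -i uuid
cat /proc/mdstat
```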