I've got an Ubuntu (13.04) Desktop with ZFS support thanks to the PPA zfs-native/stable.
Everything was working really well. I created a RAID-Z1 pool called inground with the following command:
zpool create inground raidz1 sdb sdc sdd sde sdf
Later, after being unable to access the mount point I had created, I ran zpool status and nearly fell off my chair when I saw one unavailable and two corrupt vdevs in the pool. After a few deep breaths, I realized that when I'd recently rebooted the system, I had a USB thumb drive in one of the front ports of my tower. This caused all of the /dev/sd* mappings to shift, and everything made sense. I removed the USB drive, rebooted, and all was well.
My question is: how do I prevent this in the future? Is there a different, canonical identifier I can use to refer to the physical drives when adding them as vdevs to the zpool?
The good news is that you can change how the vdevs are named by exporting and re-importing your pool (per the ZFS on Linux docs).
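As a sketch, assuming the pool is still named inground as in the question, it's just an export followed by an import that searches /dev/disk/by-id for the member devices:

# export the pool, then re-import it using persistent device names
zpool export inground
zpool import -d /dev/disk/by-id inground

After that, zpool status should list the disks by their stable by-id names rather than sdX.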
You're not supposed to use /dev/sdX names for ZFS pools in cases where the SCSI device names can change. See the options under /dev/disk...
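A quick way to see which persistent naming schemes your system offers (commonly by-id, by-label, by-partuuid, by-path, and by-uuid, though the exact set varies):

ls /dev/disk/
ls -l /dev/disk/by-id/

The second listing shows each persistent name as a symlink back to whatever sdX device it currently points at.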
I usually use the /dev/disk/by-id entries for my Linux zpools...and...