I have a ZFS mirrored pool with four total drives. Two of the drives are intended to be used for rotating offsite backups. My expectation was that after the initial resilvering I could detach and later attach a disk and have it only do an incremental resilver--however, in testing it appears to perform a full resilver regardless of whether or not the disk being attached already contains nearly all of the pool contents.
Would using an offline/online approach give me the desired result of only updating the disk--rather than fully rebuilding it? Or, to have this work as expected, will I need to do something entirely different--such as using each backup disk as a 1-disk pool and sending the newest snapshots to it whenever it needs to be brought up to date?
Don't go down the road of breaking the ZFS array to "rotate" disks offsite. As you've seen, the rebuild time is high and the resilvering process will read/verify the used size of the dataset.
If you have the ability, taking snapshots and sending the data to a remote system is a clean, non-intrusive approach. I suppose you could go through the process of having a dedicated single-disk pool, copying to it, and then zpool export/import... but it's not very elegant.
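The snapshot-and-send approach can be sketched roughly as follows. All pool, dataset, and host names here (tank, backuppool, backuphost) are hypothetical placeholders--adjust to your own layout:

```shell
# One-time: take a baseline snapshot and send the full stream to the remote.
zfs snapshot -r tank@backup-base
zfs send -R tank@backup-base | ssh backuphost zfs receive -F backuppool/tank

# Thereafter: take a new snapshot and send only the delta since the last one.
zfs snapshot -r tank@backup-next
zfs send -R -i tank@backup-base tank@backup-next \
  | ssh backuphost zfs receive -F backuppool/tank
```

The incremental (`-i`) send transfers only the blocks changed between the two snapshots, which is what makes this so much lighter than a full resilver.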
After further experimentation I've found a fair solution, however it comes with a significant trade-off. Disks which have been offline'd but not detached can later be brought back online with only an incremental resilvering operation ("When a device is brought online, any data that has been written to the pool is resynchronized with the newly available device."). In my tests this brings resilvering time for a 3-disk mirror down from 28 hours to a little over 30 minutes, with about 40 GB of data delta.
The trade-off is that any pool with an offline disk will be flagged as degraded. Provided there are still at least two online disks (in a mirrored pool), this is effectively a warning--integrity and redundancy remain intact.
As others have mentioned, this overall approach is far from ideal--sending snapshots to a remote pool would be far more suitable, but in my case it is not feasible.
To summarize, if you need to remove a disk from a pool and later add it back without requiring a full resilvering then the approach I'd recommend is:
zpool offline pool disk
hdparm -Y /dev/thedisk
zpool online pool disk
And, since this is as-yet untested, there is the risk that the delta resilvering operation is not accurate. The "live" pool and/or the offline disks may experience issues. I'll update if that happens to me, but for now will experiment with this approach.
Update on 2015 Oct 15: Today I discovered the zpool split command, which splits a new pool (with a new name) off of an existing pool. split is much cleaner than offline and detach, as both pools can then exist (and be scrubbed separately) on the same system. The new pool can also be cleanly (and properly) export[ed] prior to being unplugged from the system.
(My original post follows below.)
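A hedged sketch of that split workflow, with hypothetical pool names (tank as the source, tankbackup as the split-off pool):

```shell
zpool split tank tankbackup   # detach one side of each mirror into a new pool
zpool import tankbackup       # the new pool starts exported; import it to verify
zpool scrub tankbackup        # scrub the backup copy independently of the source
zpool export tankbackup       # cleanly export before unplugging the drive
```

Note that by default the newly split pool is left in the exported state, which is why the import step is needed before scrubbing.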
Warning! Various comments on this page imply that it is (or might be) possible to zpool detach a drive, and then somehow reattach the drive and access the data it contains.
However, according to this thread (and my own experimentation), zpool detach removes the "pool information" from the detached drive. In other words, a detach is like a quick reformatting of the drive. After a detach lots of data may still be on the drive, but it will be practically impossible to remount the drive and view the data as a usable filesystem.
Consequently, it appears to me that detach is more destructive than destroy, as I believe zpool import can recover destroyed pools!
A detach is not a umount, nor a zpool export, nor a zpool offline.
In my experimentation, if I first zpool offline a device and then zpool detach the same device, the rest of the pool forgets the device ever existed. However, because the device itself was offline[d] before it was detach[ed], the device itself is never notified of the detach. Therefore, the device itself still has its pool information, and can be moved to another system and then import[ed] (in a degraded state).
For added protection against detach you can even physically unplug the device after the offline command, yet prior to issuing the detach command.
I hope to use this offline, then detach, then import process to back up my pool. Like the original poster, I plan on using four drives: two in a constant mirror, and two for monthly, rotating, off-site (and off-line) backups. I will verify each backup by importing and scrubbing it on a separate system, prior to transporting it off-site. Unlike the original poster, I do not mind rewriting the entire backup drive every month. In fact, I prefer complete rewrites so as to have fresh bits.
In the same machine, have you tried creating a new pool with the two drives in a mirror? Next, create a snapshot on your working pool, then send that snapshot to the new pool; repeat, and each subsequent send will be incremental. This is not the same as "sending data to a remote system", since this is a pool within the same system/server/machine. With this setup you can still apply zpool split/offline/detach/attach, but you only do it on the second (copy) pool and not on the source pool.