I'm replacing some failing disks with new ones, and rather than replace them one by one, I created a new ZFS pool on the new disks and copied the old pool over to the new one. I did this at the pool level so that all the volumes within would be copied:
# zfs snapshot -r oldpool@moving
# zfs send -R oldpool@moving | zfs recv -F -v newpool
This worked, and I now have the @moving snapshot on the new pool:
# zfs list -t snapshot
oldpool@moving
oldpool/vol1@moving
oldpool/vol2@moving
newpool@moving
newpool/vol1@moving
newpool/vol2@moving
Now I can already see all the files under the newpool mount point, and even modify them, even though they came from a snapshot (everything I read said snapshots are read-only and must be cloned to become writable, so I'm not sure whether zfs recv already created clones of the snapshot or what).
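(I suppose zfs get origin would settle that: a clone reports the snapshot it was created from, while a plain filesystem reports -, so I'd expect something like this:)
# zfs get origin newpool
NAME     PROPERTY  VALUE  SOURCE
newpool  origin    -      -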
So I am wondering how to discard the newpool@moving snapshot and promote this state to be the new pool (i.e. the base state with no snapshots at all).
I tried cloning the snapshot, but this just appeared to move the snapshot to a different path:
# zfs clone newpool@moving newpool/clone
# zfs promote newpool/clone
# zfs destroy newpool@moving
could not find any snapshots to destroy; check snapshot names.
# zfs list -t snapshot
oldpool@moving
oldpool/vol1@moving
oldpool/vol2@moving
newpool/clone@moving # was newpool@moving
newpool/vol1@moving
newpool/vol2@moving
It seems to have renamed the top-level volume and snapshot (but not any of the child volumes within it), and kept it as a snapshot instead of turning it into a "base volume with no snapshot".
Before I make anything worse, what am I missing? How can I make the snapshot on newpool go away by merging everything into the base volumes, so that it reflects the state of oldpool as it was before I took the snapshot in order to replicate it onto newpool?
I think I worked this one out. The original situation (before I messed up with zfs clone) was that the filesystem was copied and a matching snapshot was created to assist with future incremental updates (with zfs send -i).
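For example, a later top-up of any changes made on the old pool since the first copy would go something like this (the second snapshot name is just illustrative):
# zfs snapshot -r oldpool@moving2
# zfs send -R -i oldpool@moving oldpool@moving2 | zfs recv -F newpool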
It was as if the files had been copied and then a snapshot created, meaning any subsequent changes went to the underlying filesystem, and I could roll those changes back to the snapshot if I wished (just like the source pool I'd created the original snapshot in). Since I didn't need this, all I had to do was delete the snapshots:
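A recursive destroy removes a snapshot from the pool root and every child volume in one command, so for both pools that would have been something like:
# zfs destroy -r oldpool@moving
# zfs destroy -r newpool@moving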
And the two pools were now identical.
However in this case, first I had to fix the mess I'd created with zfs promote. This had promoted the dataset root to be a clone, effectively changing this structure:
newpool                   (filesystem)
newpool@moving            (snapshot)
newpool/clone             (clone of newpool@moving)
Into this structure:
newpool/clone             (filesystem)
newpool/clone@moving      (snapshot)
newpool                   (clone of newpool/clone@moving)
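The origin property makes the new relationship visible; after the promote, the pool root would report the clone's snapshot as its origin, something like:
# zfs get origin newpool
NAME     PROPERTY  VALUE                 SOURCE
newpool  origin    newpool/clone@moving  -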
The solution was to undo this promotion of the clone.
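Since newpool was now itself a clone of newpool/clone@moving, it could simply be promoted in turn, which swaps the clone relationship back the other way:
# zfs promote newpool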
Once that was done, everything was close to back the way it was just after the zfs recv step; I just had to delete the clone as normal (zfs destroy newpool/clone) and the extra @s1 snapshot.
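Put together, the final cleanup was something along these lines (I'm assuming the leftover @s1 snapshot sat on the pool root; adjust to wherever zfs list -t snapshot shows it):
# zfs destroy newpool/clone
# zfs destroy -r newpool@moving
# zfs destroy newpool@s1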