We have a volume serving up CIFS data which seems to carry a lot more snapshot data than our other volumes. I suspect this is down to a higher rate of change, and that I could confirm it using the snap delta command. But I'd also like to be able to look at the snapshot sizes themselves and target a particular snapshot based on its size.
In the CLI and System Manager, when I view the snapshots repeatedly over the space of a few minutes, the size gradually increases up to a point and then drops right back down again. I don't mean that the snapshots themselves are growing and shrinking, just that the reported size is. Ideally, I'd like to know what causes this.
More importantly, though, how would I determine the actual size of a snapshot?
It's actually quite hard to say definitively, because of how a snapshot works. A snapshot isn't any data in its own right; it's just a copy of the inode table. Blocks referenced by this inode table have their reference count increased, and blocks are only freed when their reference count drops to zero.
This is - essentially - how deduplication works too. Pointers are redirected to the duplicate block, and its reference count is increased. The 'old' block has its reference count decreased, and so it may become a candidate for release (though only after any snapshots that reference it have expired).
These 'freed up' blocks aren't actually reused immediately, though - the way WAFL works, an incoming write (usually!) goes to an entirely new block anyway, with 'free' blocks being cleaned up by a background process.
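To make that concrete, here's a toy Python sketch of shared-block reference counting - purely illustrative, with made-up names (BlockStore, blk42), and not how WAFL is actually implemented: snapshots and dedup pointers both just add a reference, and a block only becomes a candidate for reuse once the last reference is dropped.

```python
# Toy model of shared-block reference counting (illustrative only,
# not an actual WAFL implementation).

class BlockStore:
    def __init__(self):
        self.refcounts = {}          # block id -> number of references

    def add_ref(self, block):
        """A snapshot or a dedup pointer starts referencing this block."""
        self.refcounts[block] = self.refcounts.get(block, 0) + 1

    def drop_ref(self, block):
        """A snapshot expires, or a pointer is redirected elsewhere."""
        self.refcounts[block] -= 1
        if self.refcounts[block] == 0:
            # Only now is the block a candidate for reuse; in practice
            # the reclaim happens lazily, as a background process.
            del self.refcounts[block]

store = BlockStore()
store.add_ref("blk42")   # the active file system uses the block
store.add_ref("blk42")   # a snapshot also references it
store.drop_ref("blk42")  # the file is overwritten/deleted in the active FS
print(store.refcounts)   # {'blk42': 1} - still held by the snapshot, not freed
```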
This is why it's actually pretty hard to tell how large a snapshot is - because you essentially need to inspect each block within it, to see if that particular block is unique to that particular snap.
snap delta and snap list are reasonably good approximations of this, but because of inter-snapshot dependencies a perfect answer is really hard to give.
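As a rough sketch of why that's expensive: to get the true "reclaimable" size of a snapshot you'd have to walk every block it references and count only those held by nothing else. This is a hypothetical Python model (the function and snapshot names are made up, not an ONTAP command or API), assuming WAFL's 4 KiB block size:

```python
# Hypothetical sketch: a snapshot's "size" counted as the blocks that
# only that snapshot holds, i.e. what deleting it would actually free.

from typing import Dict, Set

BLOCK_SIZE = 4096  # WAFL uses 4 KiB blocks

def unique_size(snapshots: Dict[str, Set[str]], active_fs: Set[str], name: str) -> int:
    """Bytes in blocks referenced by `name` and by nothing else."""
    others = set(active_fs)
    for snap, blocks in snapshots.items():
        if snap != name:
            others |= blocks
    return len(snapshots[name] - others) * BLOCK_SIZE

# Example: three snapshots sharing most of their blocks.
snaps = {
    "nightly.0": {"b1", "b2", "b3", "b4"},
    "nightly.1": {"b1", "b2", "b3"},
    "nightly.2": {"b1", "b2"},
}
active = {"b1", "b5"}

print(unique_size(snaps, active, "nightly.0"))  # only b4 is unique -> 4096
print(unique_size(snaps, active, "nightly.1"))  # nothing unique    -> 0
```

In that example, deleting nightly.1 frees nothing by itself, but once nightly.0 is gone, b3 becomes unique to nightly.1 - that kind of inter-snapshot dependency is exactly why a single size figure is hard to pin down.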