When the pool is under write I/O, the log column in zpool iostat -v never shows any ZIL activity. This results in higher-than-expected wait times when writing data to disk (sometimes over 80 ms under contention).
                    capacity     operations    bandwidth
pool              alloc   free   read  write   read  write
----------------  -----  -----  -----  -----  -----  -----
storage           1.88T  2.09T      3  3.01K   512K  39.3M
  mirror           961G  1.05T      0  1.97K   128K  20.8M
    mpathf            -      -      0    393      0  20.8M
    mpathg            -      -      0    391   128K  20.6M
  mirror           961G  1.05T      2  1.04K   384K  18.5M
    mpathi            -      -      1    379   256K  21.1M
    mpathj            -      -      0    281   128K  18.3M
logs                  -      -      -      -      -      -
  /zlog/zilcache      0  15.9G      0      0      0      0
cache                 -      -      -      -      -      -
  mpathk           232G     8M      1      0   130K      0
  mpathl           232G     8M      1      0   130K      0
----------------  -----  -----  -----  -----  -----  -----
My /zlog/zilcache device never sees any I/O. It is a file on very fast flash; I can read and write it when I remove it from the pool, but ZFS seems to ignore it.
The device looks available:
  pool: storage
 state: ONLINE
  scan: scrub repaired 0 in 19h31m with 0 errors on Wed Nov 19 07:39:03 2014
config:

        NAME              STATE     READ WRITE CKSUM
        storage           ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            mpathf        ONLINE       0     0     0
            mpathg        ONLINE       0     0     0
          mirror-1        ONLINE       0     0     0
            mpathi        ONLINE       0     0     0
            mpathj        ONLINE       0     0     0
        logs
          /zlog/zilcache  ONLINE       0     0     0
        cache
          mpathk          ONLINE       0     0     0
          mpathl          ONLINE       0     0     0

errors: No known data errors
Is there any way to configure ZFS to cache writes on the log device for faster acknowledgements?
Thanks
I believe you are misunderstanding the purpose of the ZIL. You describe it as a write cache, which it is not. No activity on the ZIL may simply be normal behavior, depending on what is running on your machine.
Nothing is ever read from the ZIL during normal operation; it is effectively a write-only device. The only exception is during a pool import after a crash, when uncommitted synchronous writes are replayed from it.
It is only written to when applications perform synchronous writes. Regular I/O, such as moving files around, does not use the ZIL.
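To illustrate the distinction, here is a minimal sketch of the kind of write that does go through the ZIL (the file path is made up for the example; run it on a ZFS dataset while watching zpool iostat -v to see log activity):

```python
import os
import tempfile

# A plain write() only lands in memory and is flushed later with the
# next transaction group -- no ZIL involvement. Calling fsync() makes
# the write synchronous: on ZFS, the data is committed to the ZIL (and
# hence the log device, if one exists) before fsync() returns.
path = os.path.join(tempfile.mkdtemp(), "sync_demo")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"logged via ZIL if this file lives on a ZFS dataset\n")
os.fsync(fd)  # blocks until the data is durable on stable storage
os.close(fd)
```

Databases and NFS servers issue exactly this kind of fsync()/O_SYNC traffic, which is why they benefit from a fast log device while ordinary file copies do not.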
You can set
sync=always
on the dataset to force all writes to behave as if they were synchronous.
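A quick sketch of how that looks in practice (the dataset name storage/data is a placeholder; substitute your own):

```shell
# Check the current sync policy. The default, "standard", means only
# explicitly synchronous writes (fsync, O_SYNC, NFS commits) hit the ZIL.
zfs get sync storage/data

# Force every write on this dataset through the ZIL before it is
# acknowledged -- the log column in zpool iostat -v should now show traffic.
zfs set sync=always storage/data

# Revert to the default behavior.
zfs set sync=standard storage/data
```

Note that sync=always does not make anything faster by itself: every write now pays the ZIL commit latency. It is mainly useful for stronger durability guarantees, or as a quick way to verify that the log device is actually being used.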