I read that ZFS defaults to using all physical memory except 1 GB on systems with 4 GB or more. Since I run virtual machines on my home server, which also hosts the ZFS filesystem, I wanted to reduce this manually so that my VMs have memory reserved for them and I have some breathing room for future KVM deployments.
I understood that adding the following line to /etc/modprobe.d/zfs.conf would limit the ZFS ARC to 4 GiB:
options zfs zfs_arc_max=4294967296
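For completeness, the value the module actually picked up can be checked at runtime, and on systems where zfs is loaded from the initramfs the option may need to be baked in before it applies at boot. A quick sketch (the update-initramfs step assumes a Debian/Ubuntu-style initramfs; skip it otherwise):

# Value currently in effect, in bytes
cat /sys/module/zfs/parameters/zfs_arc_max
# Rebuild the initramfs so the modprobe.d option is applied at boot
sudo update-initramfs -u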
After a reboot, htop shows that a significant chunk of my RAM has been allocated, but there is still plenty of free memory.
However, after some time, RAM utilization keeps climbing until the system starts dipping into swap, as shown below:
(Screenshot: htop on the home server with tasks sorted by RES usage)
Question
Did I configure something incorrectly, or is there another setting I also need to change to reduce ZFS's footprint? Or is it perhaps not ZFS at all, and something else is eating my RAM that I am not aware of?
Extra Info
The output of cat /proc/spl/kstat/zfs/arcstats is as follows:
5 1 0x01 85 4080 4981548270 615775159747876
name type data
hits 4 46287364
misses 4 2610021
demand_data_hits 4 30804990
demand_data_misses 4 578061
demand_metadata_hits 4 9829556
demand_metadata_misses 4 357556
prefetch_data_hits 4 2489500
prefetch_data_misses 4 1569248
prefetch_metadata_hits 4 3163318
prefetch_metadata_misses 4 105156
mru_hits 4 12907488
mru_ghost_hits 4 114469
mfu_hits 4 27727068
mfu_ghost_hits 4 464039
deleted 4 2749215
recycle_miss 4 8133
mutex_miss 4 740
evict_skip 4 62122
evict_l2_cached 4 0
evict_l2_eligible 4 270710646272
evict_l2_ineligible 4 122732333056
hash_elements 4 268203
hash_elements_max 4 268941
hash_collisions 4 7490083
hash_chains 4 71651
hash_chain_max 4 9
p 4 1982394368
c 4 4294967296
c_min 4 4194304
c_max 4 4294967296
size 4 4294834528
hdr_size 4 86552992
data_size 4 3125542912
meta_size 4 526384640
other_size 4 556353984
anon_size 4 540672
anon_evict_data 4 0
anon_evict_metadata 4 0
mru_size 4 1985674752
mru_evict_data 4 1692532736
mru_evict_metadata 4 124579328
mru_ghost_size 4 2308680192
mru_ghost_evict_data 4 1841692672
mru_ghost_evict_metadata 4 466987520
mfu_size 4 1665712128
mfu_evict_data 4 1432485888
mfu_evict_metadata 4 56686592
mfu_ghost_size 4 1953543680
mfu_ghost_evict_data 4 1462370304
mfu_ghost_evict_metadata 4 491173376
l2_hits 4 0
l2_misses 4 0
l2_feeds 4 0
l2_rw_clash 4 0
l2_read_bytes 4 0
l2_write_bytes 4 0
l2_writes_sent 4 0
l2_writes_done 4 0
l2_writes_error 4 0
l2_writes_hdr_miss 4 0
l2_evict_lock_retry 4 0
l2_evict_reading 4 0
l2_free_on_write 4 0
l2_abort_lowmem 4 0
l2_cksum_bad 4 0
l2_io_error 4 0
l2_size 4 0
l2_asize 4 0
l2_hdr_size 4 0
l2_compress_successes 4 0
l2_compress_zeros 4 0
l2_compress_failures 4 0
memory_throttle_count 4 0
duplicate_buffers 4 0
duplicate_buffers_size 4 0
duplicate_reads 4 0
memory_direct_count 4 2561
memory_indirect_count 4 36032
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 0
arc_meta_used 4 1169291616
arc_meta_limit 4 3221225472
arc_meta_max 4 1490740400
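For quick reference, the fields that matter most here are c_max (the configured ceiling) and size (the current ARC footprint). Something like the following pulls them out and converts them to GiB:

awk '$1 == "c" || $1 == "c_max" || $1 == "size" {printf "%-6s %7.2f GiB\n", $1, $3/2^30}' /proc/spl/kstat/zfs/arcstats

On the numbers above, size sits at essentially 4.00 GiB, right at c_max, so the ARC itself appears to be respecting the limit.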
Update
I have run the ps_mem tool to get a breakdown of the memory used by all applications, and it comes to just 8.3 GB. Combined with the 4294834528 bytes (4 GiB) that the ZFS ARC is apparently using, that should only come to about 12 GiB, yet you can clearly see I am exceeding that by a further 3-4 GiB. Perhaps the ARC isn't properly releasing RAM, or something else is at play?
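For anyone repeating this accounting, a rough sketch of the comparison (assuming the tool is installed as ps_mem.py; adjust the invocation to match your install):

# Userspace total as reported by ps_mem
sudo python ps_mem.py | tail -n 2
# Memory that neither ps_mem nor the ARC explains often shows up as kernel slab
grep -E '^(MemTotal|Slab):' /proc/meminfo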
Update 5th May 2015 - Temporary Workaround
Running the following command appears to release the memory, as demonstrated in this YouTube video.
sync; echo 2 | sudo tee /proc/sys/vm/drop_caches
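Writing 2 to drop_caches asks the kernel to free reclaimable slab objects (dentries, inodes and other reclaimable kernel caches), which is where much of the SPL/ZFS overhead appears to live. A simple before/after check, as a sketch:

free -m
sync; echo 2 | sudo tee /proc/sys/vm/drop_caches
free -m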
ZFS also uses a lot of SLAB space in the kernel. You can check how much SLAB it is using either by looking at /proc/meminfo or by installing nmon. More detailed information about slab usage can be found under /proc/slabinfo and /proc/spl/kmem/slab.
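A sketch of those checks (slabtop ships with procps; the column layout of /proc/spl/kmem/slab varies between ZFS versions, so treat the exact commands as illustrative):

# Per-cache breakdown, largest caches first
sudo slabtop -o -s c | head -n 20
# SPL/ZFS-specific cache statistics
sudo head -n 25 /proc/spl/kmem/slab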
It's worth reading this to understand more about the memory usage of ZFS.