Because of my unanswered question (qemu snapshot exclude device), I decided to use NFSv3 to serve the VM's user data. Because of BTRFS's slow performance after maintenance tasks, I now use a ZFS RAID1 (mirror), version buster-backports 0.8.3-1, on the Debian host.
When I copy data on the host there is no performance problem.
BUT: performance via NFS is extremely slow; initially both writes and reads ran at 10 to 40 MB/s. After some tuning (I think it was the NFS async option) I got writes up to ~80 MB/s. That's enough for me. Reads, however, are still stuck at ~20 MB/s per device.
Any ideas what to test? I'm new to ZFS and NFS.
Host: Debian 10
VM: Debian 10
NFS:
Host: /exports/ordner 192.168.4.0/24(rw,no_subtree_check)
client: .....nfs local_lock=all,vers=3,rw,user,intr,retry=1,async,nodev,auto,nosuid,noexec,retrans=1,noatime,nodiratime
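To separate NFS overhead from disk performance, a simple sequential benchmark directly on the mounted share can help. This is only a sketch; `NFS_MOUNT` is a placeholder variable of mine, not part of your setup, and should point at the actual NFS mount:

```shell
# Point NFS_MOUNT at your NFS mount; defaults to /tmp for a dry run.
MNT="${NFS_MOUNT:-/tmp}"

# Sequential write: 64 MiB, flushed to stable storage at the end
# so the reported rate is not just the page cache.
dd if=/dev/zero of="$MNT/bench.tmp" bs=1M count=64 conv=fdatasync

# Drop the client page cache (root only) so the read test hits the
# server instead of local RAM.
[ "$(id -u)" = 0 ] && { sync; echo 3 > /proc/sys/vm/drop_caches; }

# Sequential read back.
dd if="$MNT/bench.tmp" of=/dev/null bs=1M

rm -f "$MNT/bench.tmp"
```

Comparing these numbers against the same dd run on the dataset locally on the host shows how much of the loss is NFS versus ZFS.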
ZFS dataset:
Pool created with:
....create -o ashift=12 zfs-pool ....mirror
sync=default
zfs set compression=off zfs-pool
zfs set xattr=sa zfs-pool
zfs set dnodesize=auto zfs-pool/vol
zfs set recordsize=1M zfs-pool/vol
zfs set atime=off zfs-pool/vol
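One thing that may be worth checking (an assumption on my side, not something confirmed from your data): recordsize=1M combined with a smaller NFS rsize can cause read amplification, because every small client read pulls a full 1M record from disk. For an NFS share with mixed file sizes, the 128K default is often the safer choice:

```shell
# Check the current value (dataset name as in the question).
zfs get recordsize zfs-pool/vol

# Switch back to the default. This only affects newly written files;
# existing data would need to be rewritten to benefit.
zfs set recordsize=128K zfs-pool/vol
```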
zfs-mod-tune:
options zfs zfs_prefetch_disable=1
options zfs zfs_vdev_async_read_max_active=1
options zfs zfs_vdev_sync_read_max_active=128 (also tested with 1)
options zfs zfs_vdev_sync_read_min_active=1
(Note: each modprobe options line needs the module name, zfs, before the parameter, otherwise the setting is silently ignored.)
Can you give me any advice?
You can get better performance if synchronous requests are disabled:
zfs set sync=disabled tank/nfs_share
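A less risky alternative that keeps synchronous semantics is to add a dedicated SLOG device to the pool, so synchronous NFS writes land on fast stable storage instead of being disabled entirely. A sketch; the device path is a placeholder for a fast SSD partition, ideally one with power-loss protection:

```shell
# Placeholder device path -- substitute your own SSD partition.
zpool add zfs-pool log /dev/disk/by-id/ata-FASTSSD-part1

# Verify that the log vdev was added.
zpool status zfs-pool
```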
From the zfs manpage, on sync=disabled:

"Disables synchronous requests. File system transactions are only committed to stable storage periodically. This option will give the highest performance. However, it is very dangerous as ZFS would be ignoring the synchronous transaction demands of applications such as databases or NFS. Administrators should only use this option when the risks are understood."

Bear in mind that disabling sync could lead to data corruption.

Another option would be:
In your tests, I noticed that the maximum number of active threads for asynchronous read operations (zfs_vdev_async_read_max_active) was set to 1. That is far too low and could explain the poor read performance.
I need some details about your system (disk information for the ZFS pool, system memory and CPU).
Here is a suggestion that you could use and tune for your system; it works pretty well on my 12-core system (in /etc/modprobe.d/zfs.conf):

[1] https://jrs-s.net/2019/05/02/zfs-sync-async-zil-slog/
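The right values depend heavily on the pool layout and CPU, so the numbers below are illustrative assumptions, not measured recommendations. A sketch of such a /etc/modprobe.d/zfs.conf might look like:

```shell
# /etc/modprobe.d/zfs.conf -- illustrative values, tune for your system.
# Each line needs the module name ("zfs") after "options".
options zfs zfs_prefetch_disable=0
options zfs zfs_vdev_async_read_min_active=1
options zfs zfs_vdev_async_read_max_active=8
options zfs zfs_vdev_sync_read_min_active=8
options zfs zfs_vdev_sync_read_max_active=16
```

The settings take effect when the zfs module is next loaded (or at the next boot); if the module is loaded from the initramfs, run update-initramfs -u after editing.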
Some Updates:
I tested Ganesha as the NFS service --> writes jumped to 30 MB/s, but the speed stayed the same after reverting, so I don't think Ganesha was the real cause of the boost.
I changed my setup so that traffic between host and VM no longer goes through the router but stays entirely in RAM (host-internal network), and tested it with iperf.
--> To my surprise, the bandwidth did not exceed 60 MB/s for either read or write, although CPU load was only around 40%.
--> Purely in RAM and still no faster; I do not understand why. This is terribly slow.
--> I do not believe that this is the typical performance of a current Debian KVM/QEMU system.
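For reference, this is roughly how I ran the raw-network test between host and VM with iperf3, in both directions; 192.168.4.1 stands in for the host's address on the internal bridge:

```shell
# On the host: start the iperf3 server.
iperf3 -s

# On the VM: measure VM -> host, then the reverse direction (-R).
iperf3 -c 192.168.4.1
iperf3 -c 192.168.4.1 -R
```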
Maybe somebody has an answer.
Interesting: I could not measure the UDP performance for comparison because I only got 1 MBit/s. NFS over UDP is functional, but not error-free with large amounts of data.
Status: 30 MB/s write and 50 MB/s read is enough for me, but disappointing.