On my local file server I have a RAID-6 array across 7 HDDs.
dd if=/dev/zero of=tempfile bs=1M count=2048 conv=fdatasync
Local speed test gives me 349 MB/s write speed.
Remote writes to the Samba share from an SSD (>2 Gb/s read speed) give me 259 MB/s. But remote writes to the iSCSI drive (using the Win10 iSCSI initiator) give me a mere 151 MB/s.
RAID-6 config: 128K chunk size, stripe_cache_size = 8191. The write-intent bitmap is on an SSD (Samsung 860 PRO, 4096K bitmap chunk).
Array mounted with options: rw,noatime,nobarrier,commit=999,stripe=128,data=writeback
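For reference, this tuning is applied roughly like this (md0 and the mount point stand in for my actual device and path):

# bump the md stripe cache and mount with the options above
echo 8191 > /sys/block/md0/md/stripe_cache_size
mount -o rw,noatime,nobarrier,commit=999,stripe=128,data=writeback /dev/md0 /mnt/array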
open-iscsi setup: the target is backed by a 4 TB file.
Any hints as to why iSCSI is slower than Samba on writes? Any hints on how to improve iSCSI write speed?
I assume it has something to do with open-iscsi's desire to flush writes to disk after each operation, which increases write amplification on RAID-6 due to excessive parity rewrites. But I am not sure how to fix it. Speed is more important than the safety of in-flight data in case of a power outage.
As a side note, the older ietd iSCSI target had the ability to enable write-back mode (using IOMode=wb) and the sustained write speed was much faster. Unfortunately, it seems to be unmaintained these days.
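For reference, the relevant bit of the old ietd configuration looked roughly like this (a sketch from memory; the IQN and file path are placeholders):

Target iqn.2010-01.local.fileserver:storage.lun0
    Lun 0 Path=/srv/iscsi/disk0.img,Type=fileio,IOMode=wb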
First of all, the RAID-6 is part of the problem because of the double-parity calculation. Secondly, you can connect to the iSCSI target twice in the MS iSCSI Initiator and enable Round Robin or Least Queue Depth load balancing (unfortunately, Win10 doesn't support multipathing, so you would have to test this with Windows Server instead).
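If you do test with Windows Server, the MPIO side can be set up roughly like this in PowerShell (a sketch; it assumes you have already logged in to the target over two sessions):

# install MPIO, claim iSCSI devices for it, and default to Round Robin
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR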
In fact, block-level access should be faster than file-level access. What kind of benchmarking tool are you using on the Windows side? I would recommend diskspd or fio. Additionally, you can use something like StarWind as a much faster iSCSI target.
https://www.starwindsoftware.com/starwind-virtual-san#Hyper-V
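For example, a diskspd write test against the iSCSI disk that bypasses the Windows caches might look roughly like this (a sketch; the drive letter and test file are placeholders): an 8 GB test file, 1 MB blocks, 60 seconds, 8 outstanding I/Os, 2 threads, 100% writes, software and hardware caching disabled.

diskspd.exe -c8G -b1M -d60 -o8 -t2 -w100 -Sh E:\testfile.dat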
iSCSI should be used at the block level. Your setup description sounds like you are using a filesystem, placing a file on it, and then exporting that file as an iSCSI block device.
This is far from ideal, and definitely not a setup for comparing speeds. Try using LVM on top of the RAID-6 to segment the space while staying at the block layer for iSCSI, or use the RAID-6 array directly as the iSCSI backing device (see the sketch below).
In your current setup, data is transferred over the network and then hits a file in a filesystem which is (most likely) not optimized for this type of workload and is shared with other processes. It is possible to build such a setup with iSCSI, but it should be considered an unoptimized fallback solution.
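A rough sketch of the block-level approach, assuming the array is /dev/md0 and an LIO/targetcli target (names and sizes are placeholders):

# carve an LV out of the array and export it as a block backstore
pvcreate /dev/md0
vgcreate vg_iscsi /dev/md0
lvcreate -L 4T -n lun0 vg_iscsi
targetcli /backstores/block create name=lun0 dev=/dev/vg_iscsi/lun0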
Please be aware that dd is a very simple benchmark and is VERY prone to distortions. For example, your dd is writing zeroes - if something has a special case for data full of zeroes (e.g. because it can do compression) you will see fantastic performance, but switch to writing non-zero "real data" and suddenly that performance can disappear.
In order to answer your question (as in all benchmarking) you really have to isolate the pieces to identify the bit introducing the issue. For example, is writing to the Windows filesystem directly (and not over iSCSI) also extremely fast? If you take the same hardware configuration and run Linux instead of Windows, is it just as fast or does it slow down? What happens if you switch to using a benchmark tool like fio?
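To expand on the fio suggestion, a sequential-write job that avoids the all-zeroes pitfall might look roughly like this (a sketch; the file path and sizes are placeholders):

fio --name=seqwrite --filename=/mnt/array/fio-test --size=2G --bs=1M --rw=write --ioengine=libaio --iodepth=8 --direct=1 --end_fsync=1 --refill_buffers

--refill_buffers keeps the written data from being trivially compressible, and --direct=1 together with --end_fsync=1 means you measure the storage rather than the page cache.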
Sadly there are too many possibilities to be able to answer a question like this well...