We are running a 10 Gbit server. Testing with iperf works perfectly:

    [ 3]  0.0-10.0 sec  10.6 GBytes  9.15 Gbits/sec

But when using rsync over ssh on a non-standard port (rsync -Pe 'ssh -p xxx'), the bandwidth is poor:

    8,589,918,208 100%  129.04MB/s  0:01:03 (xfr#1, to-chk=0/1)
What could cause this limitation?
Thanks
(This should be a comment, but it's a bit long.)
It is very unlikely to be caused by the cipher suite (if that were the bottleneck, you'd see one of your CPUs saturated). Nor is it likely to be your disk I/O; that is easy to determine experimentally with tools like fio or Bonnie++ (don't use dd, as that only measures streaming writes).
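If you do want to rule out the disks, a quick fio run might look like the sketch below. The file path and sizes are placeholders to adjust for your setup; the random-I/O job is closer to what rsync does when transferring many small files than a pure streaming test is.

```shell
# Sequential write throughput, with --direct=1 to bypass the page cache
# (/mnt/data/fio.test is a placeholder path on the disk you want to test):
fio --name=seqwrite --filename=/mnt/data/fio.test --rw=write \
    --bs=1M --size=4G --direct=1 --ioengine=libaio

# Mixed random read/write with small blocks, closer to many-small-files workloads:
fio --name=randrw --filename=/mnt/data/fio.test --rw=randrw \
    --bs=4k --size=1G --direct=1 --ioengine=libaio

# Clean up the test file afterwards:
rm /mnt/data/fio.test
```

If sequential writes come in well above 129 MB/s, the disks are not your bottleneck.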
Rsync is a rather chatty protocol, so throughput will be limited by your RTT. While it's easy to increase bandwidth, it's difficult to decrease latency. You could confirm this with tc: artificially increase the latency and measure the impact on rsync's throughput.
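A minimal sketch of that experiment using tc's netem qdisc (assuming eth0 is the interface carrying the transfer; requires root):

```shell
# Add 10 ms of artificial delay to outgoing packets on eth0:
tc qdisc add dev eth0 root netem delay 10ms

# ... re-run the rsync transfer and note the throughput ...

# Remove the artificial delay again:
tc qdisc del dev eth0 root netem
```

If rsync's throughput drops roughly in proportion to the added delay while iperf is barely affected, that confirms the transfer is latency-bound rather than bandwidth-bound.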