I have a 10GBit server with Proxmox 2.1 installed. An OpenVZ container on it runs Ubuntu 10.04 64-bit. With wget (-O /dev/null) from data-center-internal servers I can get 200-300MB/s, which is okay. However, with multiple connections it's not possible to get more than 800MBit/s in total.
I have an Apache2 webserver running that acts as a proxy and fetches its data from several other servers (not only data-center-internal ones). Since it only proxies traffic through, the HDDs aren't accessed (apart from the logfiles, of course), and as the server has two SSDs, disk I/O is very unlikely to be the cause. As soon as 800MBit/s are reached, I only get 300kB/s from data-center-internal servers with "wget -O /dev/null", even on the host system itself, not just inside the OpenVZ container.
I noticed that the process ksoftirqd/0 causes heavy CPU load (up to 100%):
4 root 20 0 0 0 0 R 78 0.0 67:38.99 ksoftirqd/0 # (78% in this case)
Sometimes the process events/0 also uses a lot of CPU and seems to compete with ksoftirqd/0 for it:
35 root 20 0 0 0 0 R 50 0.0 18:04.63 events/0
4 root 20 0 0 0 0 R 50 0.0 24:37.64 ksoftirqd/0
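The fact that it is always ksoftirqd/0 (the CPU-0 instance) suggests that all NIC interrupts may be landing on core 0 while the other cores idle. /proc/interrupts shows this directly. Below is a minimal sketch of the check; the interface name eth2, the IRQ numbers and the counts in the embedded excerpt are assumptions, not real output from this server:

```shell
# Summarize per-CPU interrupt counts for the NIC queues. On the real box:
#   grep eth2 /proc/interrupts
# A hypothetical two-queue excerpt (4 CPU columns) is parsed here instead:
summary=$(awk '{ for (i = 2; i <= 5; i++) sum[i-1] += $i }
               END { for (c = 1; c <= 4; c++) printf "CPU%d: %d\n", c-1, sum[c] }' <<'EOF'
 53:  1234567  0  0  0   PCI-MSI-edge   eth2-TxRx-0
 54:  2345678  0  0  0   PCI-MSI-edge   eth2-TxRx-1
EOF
)
echo "$summary"
# If only the CPU0 counter grows, receive processing is serialized on core 0.
# Spreading the queue IRQs (via irqbalance, or by echoing a CPU bitmask into
# /proc/irq/<n>/smp_affinity) lets the other cores share the softirq work.
```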
The CPU summary line in top looks like this:
Cpu(s): 13.9%us, 7.3%sy, 0.0%ni, 64.0%id, 0.2%wa, 0.0%hi, 14.6%si, 0.0%st
As you can see, si (software interrupt time) is very high, although it should normally stay close to 0. It also confirms the hard disk isn't the bottleneck (only 0.2% wa).
I also have other 1-Gigabit servers with the same configuration. They even reach 900MBit/s, which is okay for a 1GBit server, although the ksoftirqd/0 load is high there too (60-70%).
Currently about 300MBit/s are going through the 10GBit server and the process ksoftirqd/0 stays at 0% CPU. Later in the evening, when the traffic increases, it will go up to 100% and cap the bandwidth at 800MBit/s.
Maybe there's a problem with OpenVZ? Just for testing I could install Ubuntu directly on the server without Proxmox, but it would be better if I could stay with Proxmox/OpenVZ.
However, it's important that I'm able to use at least a few GBit/s with this server as soon as I need it, and of course with multiple connections as well.
As far as I can read the output of lspci -vv, my card supports MSI-X.
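Worth noting: MSI-X being advertised by lspci -vv is not the same as it being enabled; the capability line has to read "Enable+". A small sketch of that check follows; the sample capability line is hypothetical, and on the server the grep would run against the 02:00.0 device from the listing below:

```shell
# On the real machine: lspci -vv -s 02:00.0 | grep 'MSI-X'
# A hypothetical capability line is checked here instead:
line='Capabilities: [70] MSI-X: Enable+ Count=64 Masked-'
case "$line" in
  *'MSI-X: Enable+'*) status='MSI-X enabled' ;;
  *'MSI-X'*)          status='MSI-X present but disabled' ;;
  *)                  status='no MSI-X capability' ;;
esac
echo "$status"
```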
The system has an Intel Xeon E5606 processor (8M cache, 2.13 GHz, 4.80 GT/s Intel QPI).
This is my network card:
root@ns231828:~# lspci | grep -i network
02:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
I also uploaded the sanitized output of lshw here: http://jsfiddle.net/R8QgL/
This is also the content of lspci -vv: http://pastebin.com/7PqqbzHM
I have a "testing environment" with a little batch script that downloads big speed-test files from a lot of servers (in fact I just took the Ubuntu ISO mirror list). In that environment I am able to reach 2GBit/s.
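Such a harness can be sketched in a few lines of shell. Everything below is an assumption rather than my actual script: the URL list is made up (example.org mirrors), and FETCH defaults to a no-op so the sketch is a dry run; on the server it would be set to the real download command:

```shell
#!/bin/sh
# FETCH is a no-op here; on the server: FETCH='wget -q -O /dev/null'
FETCH=${FETCH:-:}
count=0
while read -r url; do
  $FETCH "$url" &            # one background download per mirror
  count=$((count + 1))
done <<'EOF'
http://mirror1.example.org/ubuntu-10.04-server-amd64.iso
http://mirror2.example.org/ubuntu-10.04-server-amd64.iso
http://mirror3.example.org/ubuntu-10.04-server-amd64.iso
EOF
wait                         # let all downloads finish
echo "started $count parallel downloads"
```

Running the downloads in the background and discarding the payload means the disks stay out of the picture and only the network path is measured.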
After trying ethtool -C eth2 rx-usecs 1022, I was able to reach 3.5GBit/s.
However, with optimized rx-usecs settings (I tried the values 1 and 1000 as well; 1022 turned out best) in the live environment with real clients, this only gained about 100MBit/s.
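What rx-usecs changes is the interrupt throttle interval: instead of raising one RX interrupt per packet, the card waits up to that many microseconds and hands over a batch, which is exactly what takes load off ksoftirqd/0. A rough back-of-the-envelope check (the eth2 name is taken from the command above; the per-queue framing is an approximation):

```shell
# Approximate ceiling on RX interrupts per second implied by rx-usecs:
usecs=1022
rate=$((1000000 / usecs))    # integer division: ~978 interrupts/s
echo "rx-usecs=${usecs} -> at most ~${rate} RX interrupts/s per queue"
# Applied with: ethtool -C eth2 rx-usecs 1022  (as above)
```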
I uploaded the other outputs of ethtool -k and ethtool -c here: http://pastebin.com/rEtmuV6i
Has anyone experienced similar issues, or does anyone have a hint on how I can address this problem?