I'm trying to get the maximum bandwidth out of my 1 Gbit network card, but it's always limited to 80 MiB/s (real megabytes per second). What could the reason be? Card description (lshw output):
description: Ethernet interface
product: DGE-530T Gigabit Ethernet Adapter (rev 11)
vendor: D-Link System Inc
physical id: 0
bus info: pci@0000:03:00.0
logical name: eth1
version: 11
serial: 00:22:b0:68:70:41
size: 1GB/s
capacity: 1GB/s
width: 32 bits
clock: 66MHz
capabilities: pm vpd bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
The card is placed in the following PCI slot:
*-pci:2
description: PCI bridge
product: 82801 PCI Bridge
vendor: Intel Corporation
physical id: 1e
bus info: pci@0000:00:1e.0
version: 92
width: 32 bits
clock: 33MHz
capabilities: pci subtractive_decode bus_master cap_list
That PCI bridge isn't PCI Express, right? It's a legacy PCI slot? So maybe this is the reason?
The OS is Linux.
80 MB/second is actually pretty good! That's about 640 Mbps, which is pretty darn close to the gigabit capacity of the NIC. If you take the TCP/IP overhead and disk speed into consideration, you're probably at your maximum speed.
Try putting this into your /etc/sysctl.conf:
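As a minimal sketch of the kind of buffer tuning usually meant here (the values are illustrative assumptions; size them to your own RAM and link latency):

# raise the socket buffer ceilings so TCP can open a larger window
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# let the kernel queue more incoming packets before dropping
net.core.netdev_max_backlog = 30000

Run sysctl -p afterwards to load the new values.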
Each connection we make requires an ephemeral port, and thus a file descriptor, and by default this is limited to 1024. To avoid the "Too many open files" problem you'll need to modify the ulimit for your shell. This can be changed in /etc/security/limits.conf, but that requires a logout/login. For now you can just sudo and modify the current shell (su back to your non-privileged user after calling ulimit if you don't want to run as root); see the sketch below.
Another thing you can try that may help increase TCP throughput is to increase the size of the interface queue, also sketched below.
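For the shell limit, something like this (65535 is an arbitrary example value and "youruser" is a placeholder):

sudo su
# raise the per-process file descriptor limit for this shell
ulimit -n 65535
# drop back to the unprivileged user; child processes inherit the new limit
su youruser

And for the interface queue (10000 is again just an example length):

# lengthen the transmit queue on the gigabit interface
ifconfig eth1 txqueuelen 10000
# equivalently, with iproute2:
ip link set dev eth1 txqueuelen 10000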
You can play with congestion control:
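For example (cubic is the usual default; htcp is just one alternative, available if the tcp_htcp module is present):

# list the algorithms this kernel offers
sysctl net.ipv4.tcp_available_congestion_control
# switch to a different one
sysctl -w net.ipv4.tcp_congestion_control=htcp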
There is also some low-level tuning, e.g. kernel module parameters:
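The DGE-530T is normally handled by the skge driver on Linux, so you can at least list what parameters it accepts (whatever options line you then put under /etc/modprobe.d/ is your own call):

# show the parameters the driver module accepts
/sbin/modinfo skge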
And there are even lower-level hardware tunings accessible via ethtool(1).
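For example, inspecting and possibly enlarging the ring buffers and checking offloads (4096 is an example; use whatever -g reports as the hardware maximum):

# current and maximum ring sizes
ethtool -g eth1
# enlarge the rings
ethtool -G eth1 rx 4096 tx 4096
# see which offloads are enabled
ethtool -k eth1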
PS. Read the kernel docs, especially Documentation/networking/scaling.txt
PPS. While tuning TCP performance you may want to consult RFC 6349.
PPPS. D-Link is not the best network hardware. Try Intel hardware with PCI-X or 64-bit PCI.
Your 32-bit, 33 MHz PCI bus can transfer a maximum of 1,067 megabits per second (Mbps), or 133.33 megabytes per second (MBps): 32 bits × 33.33 MHz ≈ 1,067 Mbit/s, and dividing by 8 gives ≈ 133 MB/s.
Gigabit Ethernet can transfer about 116 megabytes per second (MBps) of payload (125 MB/s raw, less framing and TCP/IP overhead).
So although your card should be able to fully saturate the line, you'll actually only ever get about 90% utilisation because of various overheads.
Either way, if you're getting 80 megabytes per second (MBps) then you're not far off, and I would be reasonably happy with that for now.
Gigabit Ethernet is just over 1 billion bits per second. With 8/10 encoding this gives you a maximum of around 100 MB per second. A 32-bit PCI bus should be able to put 133 MB/sec through and you should be able to saturate it (I can demonstrate saturation of a PCI bus with a Fibre Channel card and get a figure close to the theoretical bandwidth of the bus), so it is unlikely to be the cause of the bottleneck unless there is other bus traffic.
The bottleneck is probably somewhere else unless you have another card using bandwidth on the bus.
Bottlenecks at GigE speeds can come from a number of places.
How sure are you that it is the card that is the bottleneck? It might be that this is the best speed it can negotiate with the device on the other end, so it is stuck waiting. The other device might be stuck running at 10/100 speeds, so 80 would be about right with a bit of overhead.
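A quick way to check what the link actually negotiated (assuming the interface is still eth1):

# report negotiated speed and duplex
ethtool eth1 | grep -E 'Speed|Duplex'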
After my long-lasting research, I am posting my conclusions:
In my experience 80 MiB/s is pretty good. I've not seen much higher speeds no matter what combination of NICs and switches is being used. I remember 100 Mbps showing much the same behaviour: 70-80% utilization was pretty much all you could ask for, though I see gigabit equipment running above 90% in 100 Mbps mode these days.
By comparison, my very first gigabit configuration at home, based on SMC switches and Broadcom integrated NICs, could barely manage 400 Mbps. Now, years later, using Netgear managed switches along with Intel and Marvell NICs, I usually find myself in the range of 70-80 MiB/s sustained transfer.
If you need more, consider bonding multiple interfaces.
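A rough sketch with iproute2, assuming a second port eth2 and a switch that speaks LACP (interface names and mode are placeholders):

# create an 802.3ad bond and enslave both NICs
modprobe bonding
ip link add bond0 type bond mode 802.3ad
ip link set eth1 down
ip link set eth2 down
ip link set eth1 master bond0
ip link set eth2 master bond0
ip link set bond0 up

Keep in mind that a single TCP stream still travels over one physical link; bonding mainly helps aggregate throughput across multiple flows.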
You're getting very good speed. If you can, check on your switch end:
http://www.cisco.com/en/US/tech/tk389/tk213/technologies_configuration_example09186a0080094470.shtml