I'm using the e1000e driver for multiple Intel network cards (Intel EXPI9402PT, based on the 82571EB chip). The problem is that when I try to utilize maximum speed (1 Gbit/s) on more than one interface, the speed on each interface starts to drop.
For one interface I get: 120435948 bytes/sec.
For two interfaces I get: 61080233 bytes/sec and 60515294 bytes/sec.
For three interfaces I get: 28564020 bytes/sec, 27111184 bytes/sec, 27118907 bytes/sec.
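For scale, the single-interface figure is already close to line rate, and the two-interface figures sum to roughly the same total. A quick sanity check of the arithmetic (nothing interface-specific assumed, just the numbers above):

```shell
# 120435948 bytes/sec * 8 bits/byte ~ 963 Mbit/s, i.e. ~96% of 1 Gbit/s
echo $((120435948 * 8))                # one interface, in bits per second

# Two interfaces combined: ~973 Mbit/s, so the *total* stays near 1 Gbit/s
echo $(((61080233 + 60515294) * 8))
```

That pattern (individual rates drop but the aggregate stays flat) is what a shared bottleneck looks like.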
What can be the cause?
EDIT: /proc/interrupts content:
        CPU0     CPU1     CPU2     CPU3     CPU4      CPU5     CPU6      CPU7
106:   17138        0        0        0        0         0        0         0   PCI-MSI  eth0
114:      51        0        0        0   102193         0       20  23745467   PCI-MSI  eth2
122:      51      290       15      271        0      9253      100         0   PCI-MSI  eth3
130:      43      367        0      290      105        39       15         0   PCI-MSI  eth4
138:      43      361      105      210        0       140        0         0   PCI-MSI  eth5
146:      56    67625      100        0        0  17855245        0         0   PCI-MSI  eth6
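A quick way to total the interrupt counts per NIC from that output (the field positions assume the 8-CPU layout shown above):

```shell
# Sum columns 2-9 (the eight per-CPU counters) for each ethX line
awk '/eth/ { t = 0; for (i = 2; i <= 9; i++) t += $i; print $NF, t }' /proc/interrupts
```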
It's unlikely to be the driver.
It's most likely a physically shared resource, such as interrupt lines or the PCI bus.
Are they sharing the same interrupt (IRQ)? If so, that is probably your bottleneck.
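If IRQs were shared you would see several ethX names on a single line of /proc/interrupts; that's not the case in your dump, but it's still worth checking where each IRQ is allowed to fire and pinning them to separate CPUs. A sketch, using the IRQ numbers from your table (writes require root, and irqbalance, if running, may override them):

```shell
# Show the allowed-CPU bitmask for each NIC IRQ
for irq in 106 114 122 130 138 146; do
    printf '%s: ' "$irq"
    cat /proc/irq/$irq/smp_affinity 2>/dev/null
done

# Build a mask pinning an IRQ to a single CPU, e.g. CPU2 -> 1 << 2 = 0x4
mask=$(printf '%x' $((1 << 2)))
echo "$mask"                                  # prints: 4
# echo "$mask" > /proc/irq/106/smp_affinity   # as root: pin eth0's IRQ to CPU2
```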
What is the endpoint of your iperf test? If you are routing through network hardware, or funneling all the traffic into a single GbE NIC on another machine, your bottleneck may be remote.
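To rule that out, give each interface its own remote endpoint and bind each client to that NIC's local address. A sketch with classic iperf (all addresses here are hypothetical, and each remote box needs its own `iperf -s` listener):

```shell
# Build one iperf client command per interface
iperf_cmd() {
    # $1 = remote server address, $2 = local address of the NIC to bind to
    printf 'iperf -c %s -B %s -t 30\n' "$1" "$2"
}

iperf_cmd 192.168.0.10 192.168.0.2   # test via eth0
iperf_cmd 192.168.1.10 192.168.1.2   # test via eth2
# Pipe to sh (or run by hand) to drive all interfaces concurrently.
```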
I've posted some sysctl magic here. You can try it and see if it helps.
PS. How are you benchmarking the speed?