I have servers which deal with a large number of network packets. I've seen evidence that user space processes running on CPU0 have their performance impacted when network load is high. I'm fairly sure this relates to interrupt handling, specifically for network devices.
I'm thus experimenting with changing the affinity of network (eth) devices in order to test my hypothesis and see if I can improve performance.
I understand that in order to change IRQ affinity I must change the value in `/proc/irq/XXX/smp_affinity`.
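For reference, this is roughly what I am doing (a sketch: `eth0` and IRQ `19` are placeholders for my actual interface name and IRQ number, which come from `/proc/interrupts`):

```
# Find the IRQ line(s) used by the NIC ("eth0" is a placeholder --
# check /proc/interrupts for the real interface name).
grep eth0 /proc/interrupts

# Suppose that showed IRQ 19. Write a hex CPU mask to pin that
# interrupt to CPU2 only (mask 4 = binary 100). Requires root.
echo 4 | sudo tee /proc/irq/19/smp_affinity

# Confirm the kernel accepted the mask.
cat /proc/irq/19/smp_affinity
```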
When I change the value in that file I can only move the interrupt handling from one CPU to another. For example, in a 4-core system I can set that value to 1, 2, 4 or 8 and I see the interrupts move to cores 0, 1, 2 and 3 respectively (by monitoring `/proc/interrupts`). However, if I set the affinity value to any combination of cores, I don't see interrupts balanced across those cores; they always stick to one core. For example:
- Setting `ff`: CPU0 used
- Setting `12`: CPU4 used

(Thus it seems only the lowest specified core is used.)
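For completeness, this is how I am watching which core the interrupts actually land on (again, `eth0` is a placeholder for the real interface name):

```
# /proc/interrupts has one counter column per CPU; the column that
# keeps increasing is the core actually servicing the interrupt.
watch -n 1 'grep eth0 /proc/interrupts'
```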
I have stopped the `irqbalance` daemon in case this was affecting things (although I suspect not, as my understanding is that `irqbalance` would change the values in the `smp_affinity` file, and I don't see that happening).
My questions thus are:
- How do I balance interrupts across cores?
- Is it even possible to do so?
N.B. I have tried this on 2 machines: an Ubuntu VM running in VirtualBox with 2 cores, with the eth device using `IO-APIC`; and a physical Ubuntu machine with 4 cores (8 hyperthreaded), with the eth device using `PCI-MSI`.
As a bonus question:

- Is it possible to balance `PCI` interrupts? From what I understand it should definitely be possible with `IO-APIC` interrupts.
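For context, this is how I am checking which interrupt type each machine's eth device uses, together with its current affinity mask (a sketch; the `eth` pattern is an assumption about my interface names):

```
# For each IRQ line belonging to an eth device, print the device name,
# IRQ number and current affinity mask. The /proc/interrupts line also
# shows the interrupt type (IO-APIC vs PCI-MSI).
awk '/eth/ {print $1, $NF}' /proc/interrupts | tr -d ':' |
while read -r irq name; do
    printf '%s (IRQ %s): affinity %s\n' "$name" "$irq" \
        "$(cat /proc/irq/"$irq"/smp_affinity)"
done
```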