VMWare ESXi 4.1
Guest: Linux kernel 2.6.32 64bit (tried older as well)
In a multi-CPU virtual machine, all interrupts (eth0, ata_piix) are stuck on a single CPU. On bare hardware the same kernel balances them fine.
Tried the E1000 network driver as well as the paravirtualized VMXNET3.
Any suggestions? Is it impossible to get normal IRQ affinity in a virtualized environment?
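For reference, the imbalance is easy to see from the guest by watching the per-CPU columns in /proc/interrupts. A minimal check (assuming the interface is named eth0, as above):

```shell
# Show the per-CPU interrupt counters for eth0 alongside the CPU header.
# On the affected VM, only the CPU0 column keeps incrementing.
grep -E 'CPU|eth0' /proc/interrupts

# Re-run after a few seconds of traffic; if only one column's counts
# change, all interrupts for that device are landing on one CPU.
```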
Sorry, but this is something of a pointless question: the virtualised hardware model is just that, virtualised. The interrupts aren't real, the adapters aren't real, any 'balancing' you do isn't real, and any overload of vCPU 0 isn't real either. There's no stable way to do this without passing through two dedicated NICs using VT-d and configuring them appropriately.
If possible, could you describe some of the symptoms? Is there noticeable performance degradation (one vCPU pegged) when it's handling heavy interrupt load? I'm not sure what the official VMware answer is, but they rely heavily on 'magic' happening unbeknownst to the guest OS, so this may just be an abstraction of sorts.
Using e1000, it's possible to pin those interrupts to a single core with "echo 2 > /proc/irq/$irq/smp_affinity" (2 == binary 00000010 == core 1). With vmxnet3, they stay on core 0 whatever I do. Whatever is or isn't virtualised here, the CPU load on Linux, with soft interrupts and friends eating close to 100%, is very real.
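To make the mask arithmetic explicit: smp_affinity takes a hexadecimal bitmask where bit N allows CPU N, so a single-CPU mask is just 1 shifted left by the CPU index. A small sketch (the IRQ number 24 is hypothetical; look yours up in /proc/interrupts):

```shell
# Build the smp_affinity mask for a single target CPU.
# Bit N set => CPU N may service the IRQ, so 1<<1 = 2 = binary 10 = CPU 1.
irq=24   # hypothetical; find the real IRQ number in /proc/interrupts
cpu=1
mask=$(printf '%x' $((1 << cpu)))
echo "mask for CPU $cpu is $mask"

# Applying it requires root; uncomment on the target machine:
# echo "$mask" > /proc/irq/$irq/smp_affinity
```

A mask of 3 (binary 11) would allow both CPU 0 and CPU 1 rather than pinning to one core, which is why the single-core mask here is 2.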