I have a quad-core machine, and I spotted on Munin (which monitors interrupts and context switches) that my interrupts and context switches spiked to 25k per second for some time, while the average is around 250.
I have no idea what happened, nor what it means, beyond the fact that my monitoring tools flag it as an anomaly.
This happened in one of my OpenVZ containers.
Note: at the same time, the load average spiked to 2.5, and CPU usage showed 110% system, 15% user, and 100% I/O wait.
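For reference, the per-second rates Munin graphs can be reproduced by hand by diffing the cumulative intr and ctxt counters in /proc/stat (presumably where Munin reads them from). A minimal sketch (Python 3); the one-second sample window is arbitrary:

import time

# Diff the cumulative "intr" and "ctxt" counters in /proc/stat to get
# system-wide interrupt and context-switch rates.
def read_counters():
    counters = {}
    with open("/proc/stat") as f:
        for line in f:
            fields = line.split()
            if fields[0] in ("intr", "ctxt"):
                counters[fields[0]] = int(fields[1])  # cumulative since boot
    return counters

before = read_counters()
time.sleep(1)  # arbitrary sample window
after = read_counters()
for key in ("intr", "ctxt"):
    print(key, after[key] - before[key], "per second")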
I have attached the output of /proc/interrupts on the host machine.
           CPU0       CPU1       CPU2       CPU3
  0:   48039108   56660082   56431151   51696624   IO-APIC-edge   timer
  1:          0          3          0          0   IO-APIC-edge   i8042
  4:          4          4          1          3   IO-APIC-edge   serial
  8:          1          0          0          0   IO-APIC-edge   rtc
  9:          0          0          0          0   IO-APIC-level  acpi
 12:          4          0          0          0   IO-APIC-edge   i8042
 50:         15         16         16         16   IO-APIC-level  ata_piix
 66:      11113          0          0   56276172   PCI-MSI        eth0
169:   12839820    4849263       1080       1167   IO-APIC-level  ioc0
225:          6          7          5          5   IO-APIC-level  ehci_hcd:usb1, uhci_hcd:usb2, uhci_hcd:usb4
233:          0          0          0          0   IO-APIC-level  uhci_hcd:usb3
NMI:      17173      16340      16694      17306
LOC:  214221117  214220936  214196385  214196306
ERR:          0
MIS:          0
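If the spike comes back, diffing /proc/interrupts should show which line is responsible. A rough sketch; the five-second window and the output format are my own choices:

import time

# Sample /proc/interrupts twice and print per-IRQ deltas summed across
# CPUs, so the interrupt source behind a spike stands out.
def snapshot():
    counts = {}
    with open("/proc/interrupts") as f:
        cpus = len(f.readline().split())  # header row: one column per CPU
        for line in f:
            fields = line.split()
            label = fields[0].rstrip(":")
            values = fields[1:1 + cpus]
            if values and all(v.isdigit() for v in values):
                counts[label] = sum(int(v) for v in values)
    return counts

before = snapshot()
time.sleep(5)
after = snapshot()
for label in sorted(after, key=lambda l: after[l] - before.get(l, 0), reverse=True):
    delta = after[label] - before.get(label, 0)
    if delta:
        print(label, delta, "in 5s")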
It'll be a multi-threaded application doing a lot of locking. Every time a thread blocks on a lock, it gives up the rest of its time slice and the scheduler lets another thread have a go. You can write multi-threaded apps that spend all their time sloshing between threads, none of which gets any useful work done: they cause so many context switches that the CPU spends more time switching between threads than the threads themselves spend doing work.
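Here is a contrived sketch of that pattern (it assumes nothing about your actual workload): a handful of threads fighting over a single lock while doing no real work in the critical section. Run vmstat 1 alongside it and the cs column should jump.

import threading

lock = threading.Lock()

def thrash(iterations=1_000_000):
    # The critical section is trivial, so acquisitions frequently block on
    # the contended lock, and every block/wake is a voluntary context switch.
    for _ in range(iterations):
        with lock:
            pass

threads = [threading.Thread(target=thrash) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()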
See if any application's CPU usage spikes at the same time as these events.
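One way to spot the offender without extra tools is to rank processes by the context-switch counters the kernel keeps in /proc/<pid>/status. A sketch; note the counters are cumulative since process start, so two snapshots taken during a spike tell you more than one:

import glob

results = []
for path in glob.glob("/proc/[0-9]*/status"):
    try:
        name, switches = "?", 0
        with open(path) as f:
            for line in f:
                if line.startswith("Name:"):
                    name = line.split()[1]
                elif "ctxt_switches" in line:  # voluntary + nonvoluntary
                    switches += int(line.split()[1])
        results.append((switches, path.split("/")[2], name))
    except OSError:
        pass  # process exited mid-scan

for switches, pid, name in sorted(results, reverse=True)[:10]:
    print(switches, pid, name)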
This could be a numerical artifact produced by the monitoring system, caused by a very short time slice between samples: for example, a burst of 2,500 interrupts measured over a 0.1-second slice graphs as 25,000 per second, even though averaged over a normal five-minute interval it would barely register. It may just be a sampling effect that you're seeing here.
Might it be that the Linux kernel timer (CONFIG_HZ) is set to fire regularly at 250 Hz? That would line up with your baseline of roughly 250 per second. Check the kernel config file; there are other frequencies to choose from.
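A sketch for checking it programmatically; both config locations are guesses, since where the config lives varies by distro, and /proc/config.gz only exists if the kernel was built with CONFIG_IKCONFIG_PROC:

import gzip, os

release = os.uname().release
for path in (f"/boot/config-{release}", "/proc/config.gz"):
    try:
        opener = gzip.open if path.endswith(".gz") else open
        with opener(path, "rt") as f:
            for line in f:
                if line.startswith("CONFIG_HZ"):  # matches CONFIG_HZ=... and CONFIG_HZ_250=y
                    print(line.rstrip())
        break
    except OSError:
        continue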