My server has 24 CPU cores and 96 GB of memory, running CentOS 7.2 x86_64.
When I start my program with a large data set, it uses about 50 GB of memory, and the system shows a high interrupt rate but a low context-switch rate: dstat reports somewhere between 500k and 1000k int/s. CPU usage is close to 100%, roughly 40% us and 60% sy.
If the data set is small, the program uses about 5 GB of memory and everything is fine: CPU usage is 100%, roughly 99% us and 1% sy, which is what I expect.
The program is multi-threaded and written by me. It does no network I/O and very little disk I/O; it is mostly memory operations and arithmetic. The thread model and the algorithm are the same regardless of data set size.
My question is: how can I find out exactly which interrupts my program triggers most (and, if possible, get rid of them to improve performance)?
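As a starting point for anyone answering: I know the kernel exposes per-interrupt counters in /proc/interrupts, so one rough approach I have tried is to snapshot that file twice and see which counters grow fastest while the program runs. This is just a sketch I put together (the temp-file names and the one-second interval are my own arbitrary choices):

```shell
#!/bin/sh
# Take two snapshots of /proc/interrupts one second apart and show
# which interrupt lines grew the most in between.
cat /proc/interrupts > /tmp/irq.before
sleep 1
cat /proc/interrupts > /tmp/irq.after

# Sum the per-CPU counters on each line, keyed by the interrupt label
# in column 1, then print the per-second delta, biggest first.
awk 'NR==FNR {
        s = 0
        for (i = 2; i <= NF; i++) if ($i ~ /^[0-9]+$/) s += $i
        before[$1] = s
        next
     }
     {
        s = 0
        for (i = 2; i <= NF; i++) if ($i ~ /^[0-9]+$/) s += $i
        d = s - before[$1]
        if (d > 0) printf "%10d/s  %s\n", d, $0
     }' /tmp/irq.before /tmp/irq.after | sort -rn | head
```

On my machine the top lines during the slowdown are what I would like help interpreting. I am also open to answers based on perf (e.g. `perf stat` or tracing `irq:*` events) if that gives a more precise per-process picture than these system-wide counters.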