I have two machines:
1) 2 x E5-2620 v3 @ 2.40GHz (microcode: 0x3c 2018-01-19), 8GB RAM (CentOS 7 w/ 4.11 kernel)
2) 2 x E5-2630 v3 @ 2.40GHz (microcode: 0x3c 2018-01-19), 64GB RAM (Fedora 22 w/ 4.11 kernel)
The kernels are custom compiled but otherwise unmodified. The .config for both builds is exactly the same, and neither has KPTI (it was introduced in 4.14, IIRC).
To measure the cost of a syscall, I wrote code like this:
beg = rdtsc_beg();
for (i = 0; i < 1000000; ++i)
{
    syscall(512); // invalid syscall number, returns ENOSYS, so this measures bare entry/exit
}
end = rdtsc_end();
printf("syscall: %lu cycles\n", (end - beg) / 1000000);
I get 99 cycles per syscall on machine 1 and 264 cycles on machine 2.
I cannot understand why there is such a huge difference, given that everything I can think of as relevant is the same.
Any ideas what might be causing it, or any clue where I should look to find the cause?
Edit:
I changed the code to:
volatile int ret = 0;
beg = rdtsc_beg();
for (i = 0; i < 1000000; ++i)
{
    ret++;
}
end = rdtsc_end();
printf("inc: %lu cycles\n", (end - beg) / 1000000);
I compiled it statically; the timed region disassembles to:
400a3f: 0f a2 cpuid
400a41: 0f 31 rdtsc
400a43: 89 d6 mov %edx,%esi
400a45: 89 c7 mov %eax,%edi
400a47: 48 c1 e6 20 shl $0x20,%rsi
400a4b: 89 ff mov %edi,%edi
400a4d: b8 80 96 98 00 mov $0xf4240,%eax
400a52: 48 09 fe or %rdi,%rsi
400a55: 0f 1f 00 nopl (%rax)
400a58: 8b 54 24 0c mov 0xc(%rsp),%edx
400a5c: 83 c2 01 add $0x1,%edx
400a5f: 48 83 e8 01 sub $0x1,%rax
400a63: 89 54 24 0c mov %edx,0xc(%rsp)
400a67: 75 ef jne 400a58 <main+0x28>
400a69: 0f 01 f9 rdtscp
400a6c: 41 89 d0 mov %edx,%r8d
400a6f: 89 c7 mov %eax,%edi
400a71: 0f a2 cpuid
The result is 5 cycles per iteration on machine 1 vs. 14 cycles on machine 2.
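The raw numbers are only comparable if a TSC tick corresponds to the same amount of real time on both machines, so one cross-check is to calibrate the TSC against a wall clock over the same loop. A rough sketch of that cross-check (just illustrative, not part of the measurements above):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* plain RDTSC is good enough here: over a million iterations the lack of
   serialization doesn't matter for measuring the average tick rate */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    volatile int ret = 0;
    struct timespec t0, t1;
    uint64_t beg, end, ns;
    long i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    beg = rdtsc();
    for (i = 0; i < 1000000; ++i)
        ret++;
    end = rdtsc();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    ns = (uint64_t)(t1.tv_sec - t0.tv_sec) * 1000000000ull
       + (uint64_t)(t1.tv_nsec - t0.tv_nsec);

    /* if the effective TSC rate differs between the machines, the raw
       per-iteration "cycle" numbers are not directly comparable */
    printf("tsc ticks: %lu, wall time: %lu ns, effective TSC rate: %.3f GHz\n",
           end - beg, ns, (double)(end - beg) / ns);
    return 0;
}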