I am looking for a rough value to compare context switches between Windows and Linux (same hardware and similar load assumed). I found that Windows seems to have a smaller range of timeslice lengths than Linux (10-120ms vs. 10-200ms), but that information was not authoritative.
I don't see what useful information there is to be gleaned from what you're trying to do, but you can change the clock resolution on Windows machines through the standard Win32 API. Applications that demand higher timer precision (such as multimedia apps) do this all the time. The clock resolution might be anywhere from 0.5 ms up to the common default of 15.6 ms, and beyond. So, at a minimum, make sure your two machines are running at the same clock resolution.
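For illustration, here's a minimal sketch (in C, using the documented multimedia timer API from winmm.lib) of how an application asks for a finer system-wide resolution; the 1 ms value is just an example:

```c
#include <windows.h>
#include <mmsystem.h>  /* timeBeginPeriod / timeEndPeriod; link against winmm.lib */
#include <stdio.h>

int main(void)
{
    /* Request a 1 ms system-wide timer resolution (the same request
       multimedia apps and browsers commonly make). */
    if (timeBeginPeriod(1) != TIMERR_NOERROR) {
        fprintf(stderr, "1 ms timer resolution not supported\n");
        return 1;
    }

    Sleep(50);  /* ...timing-sensitive work would go here... */

    /* Always pair the request with timeEndPeriod so the OS can drop
       back to its default (coarser) resolution. */
    timeEndPeriod(1);
    return 0;
}
```

Note that the request is effectively global: the system runs at the finest resolution any process has asked for, which is why a single application (like Chrome, below) can change the behavior of the whole machine.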
Windows 7 is configured by default to let a thread run for 2 clock intervals before another scheduling decision is made (i.e., do I switch context or not?). Server 2008 R2 is set by default to 12 clock intervals between thread scheduling decisions (this allotment is also known as the thread quantum). The idea is that with a longer quantum, the Server OS has a better chance of starting and completing a client request without being interrupted, i.e., with less context switching. The trade-off is a less "snappy" desktop experience on a Server version of Windows, which in general no one cares about there.
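If you want to verify which quantum policy a given machine is using, the knob behind this (per the Windows Internals books) is the `Win32PrioritySeparation` registry value. Here's a minimal sketch that just reads the raw value; decoding its bit fields is beyond the scope of this answer:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD value = 0;
    DWORD size  = sizeof(value);

    /* Win32PrioritySeparation encodes the quantum length (short/long),
       fixed vs. variable quantums, and the foreground boost. */
    LSTATUS rc = RegGetValueW(
        HKEY_LOCAL_MACHINE,
        L"SYSTEM\\CurrentControlSet\\Control\\PriorityControl",
        L"Win32PrioritySeparation",
        RRF_RT_REG_DWORD, NULL, &value, &size);

    if (rc == ERROR_SUCCESS)
        printf("Win32PrioritySeparation = 0x%08lX\n", (unsigned long)value);
    else
        printf("RegGetValueW failed: %ld\n", (long)rc);
    return 0;
}
```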
Here's an example using my Win7 PC. Google Chrome has actually requested a lower system-wide clock resolution of 1 ms. You can use `clockres.exe` from Sysinternals to see your current and base clock resolution, and `powercfg /energy` to generate a report (energy-report.html) listing which applications have outstanding timer-resolution requests.
My CPU completes 3,501,000,000 cycles per second (3.501 GHz), and the timer fires every 0.001 seconds. 3,501,000,000 × 0.001 = 3,501,000 CPU cycles per clock interval.
1 quantum unit = 1/3 (one third) of a clock interval, therefore 1 quantum unit = 3,501,000 / 3 = 1,167,000 CPU cycles.
At 3.501 GHz, each CPU cycle takes about 285.6 picoseconds, which works out to about 333.3 microseconds per quantum unit (exactly one third of the 1 ms clock interval, as you'd expect). Since my PC is configured for thread quantums of 2 clock intervals, and each clock interval is 3 quantum units, it's making a thread scheduling decision about every 2 milliseconds.
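If you want to redo this arithmetic for your own machine, here's a throwaway sketch; the constants are the figures from my PC above (swap in your own CPU speed, clockres reading, and quantum setting):

```c
#include <stdio.h>

int main(void)
{
    /* Figures from my machine -- substitute your own. */
    const double cpu_hz               = 3501000000.0; /* 3.501 GHz */
    const double clock_interval_s     = 0.001;        /* 1 ms, per clockres.exe */
    const int    units_per_interval   = 3;  /* 1 clock interval = 3 quantum units */
    const int    intervals_per_quantum = 2; /* Win7 client default */

    double cycles_per_interval = cpu_hz * clock_interval_s;                  /* 3,501,000 */
    double cycles_per_unit     = cycles_per_interval / units_per_interval;   /* 1,167,000 */
    double unit_us             = clock_interval_s * 1e6 / units_per_interval;/* 333.3 us  */
    double quantum_ms          = intervals_per_quantum * clock_interval_s * 1e3; /* 2 ms */

    printf("CPU cycles per clock interval : %.0f\n", cycles_per_interval);
    printf("CPU cycles per quantum unit   : %.0f\n", cycles_per_unit);
    printf("Quantum unit length           : %.1f microseconds\n", unit_us);
    printf("Full thread quantum           : %.1f milliseconds\n", quantum_ms);
    return 0;
}
```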
Let's not even get into variable-length thread quantums or preemption (where a thread doesn't get to finish its quantum because a higher-priority thread becomes ready).
So your experiment of wanting to compare context switching on two different operating systems running entirely different sets of code still makes no sense to me, but maybe this helps, at least on the Windows side.