I need to purchase servers and want to understand the right way to specify them.
Based on the answers to this question, there are several parameters to take into account:
- CPU speed + Number of CPU cores
- RAM + Virtual Memory size
- Hard disk size and performance
- Network interface card performance
I'm ignoring cost, size, and power for now. The operating system, if it matters, has to be Windows Server 2008 R2.
Because the final application is IIS-based, I want to set up a series of tests that will help me characterize each of these individually, and then two or more of them in concert.
For example, separately:
CPU: Write a simple program that performs an in-memory mathematical operation. Spawn several threads that run it, and measure system resources (via Perfmon), FLOPS, and the number of threads as a function of time.
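A minimal sketch of what that CPU test could look like (Python is used here purely for illustration; the real test could just as well be C# or native code). One wrinkle: CPython's GIL prevents CPU-bound threads from using multiple cores, so this sketch uses one process per core instead:

```python
# Sketch of a CPU saturation test. Uses multiprocessing rather than
# threads because CPython's GIL serializes CPU-bound threads.
import multiprocessing
import time

OPS_PER_BATCH = 1_000_000  # multiply-add iterations per timing check

def burn(duration_s, result_queue):
    """Run multiply-adds for duration_s seconds; report FLOP count."""
    x = 1.0000001
    ops = 0
    end = time.time() + duration_s
    while time.time() < end:
        for _ in range(OPS_PER_BATCH):
            x = x * 1.0000001 + 1e-9   # 2 floating-point ops
        ops += 2 * OPS_PER_BATCH
    result_queue.put(ops)

if __name__ == "__main__":
    workers = multiprocessing.cpu_count()   # one worker per logical core
    q = multiprocessing.Queue()
    start = time.time()
    procs = [multiprocessing.Process(target=burn, args=(10.0, q))
             for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    elapsed = time.time() - start
    total_ops = sum(q.get() for _ in procs)
    print(f"{workers} workers: {total_ops / elapsed / 1e6:.1f} MFLOPS")
```

Running it with 1, 2, 4, ... workers shows how throughput scales with core count.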
RAM / Virtual Memory: Write a memory leaker (a program that steadily allocates memory without releasing it) and measure system resources as a function of time.
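A sketch of such a leaker, assuming Python is available on the box (the chunk size and sleep interval are arbitrary):

```python
# Deliberate "memory leaker": keep allocating and touching memory until
# allocation fails, logging growth over time. Run alongside Perfmon to
# watch paging behaviour as physical RAM is exhausted.
import time

CHUNK_MB = 64
chunks = []          # hold references so nothing is garbage-collected
allocated_mb = 0
start = time.time()

try:
    while True:
        # bytearray zero-fills, so the pages are actually committed
        chunks.append(bytearray(CHUNK_MB * 1024 * 1024))
        allocated_mb += CHUNK_MB
        print(f"t={time.time() - start:6.1f}s  allocated={allocated_mb} MB")
        time.sleep(0.5)   # slow enough to read the Perfmon counters
except MemoryError:
    print(f"MemoryError after {allocated_mb} MB")
```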
Hard disk speed: Create a large data file on disk. Measure system resources and the access times for random reads within the file as a function of time.
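A sketch of that disk test (the path and file size below are placeholders; the file must comfortably exceed installed RAM, or the OS file cache will hide the disk):

```python
# Random-access disk test: create a file much larger than RAM, then time
# random 4 KiB reads within it.
import os
import random
import time

PATH = r"C:\bench\bigfile.bin"   # hypothetical path; adjust for the server
FILE_SIZE = 32 * 1024**3         # 32 GiB; must exceed installed RAM
BLOCK = 4096                     # read size in bytes

os.makedirs(os.path.dirname(PATH), exist_ok=True)
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        chunk = os.urandom(1024 * 1024)          # 1 MiB of real data;
        for _ in range(FILE_SIZE // len(chunk)):  # a sparse file would
            f.write(chunk)                        # defeat the test

with open(PATH, "rb") as f:
    random.seed(0)
    for i in range(1000):
        offset = random.randrange(0, FILE_SIZE - BLOCK)
        t0 = time.perf_counter()
        f.seek(offset)
        f.read(BLOCK)
        dt = (time.perf_counter() - t0) * 1000
        print(f"read {i}: offset={offset} latency={dt:.2f} ms")
```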
Network speed: Use a network benchmarking tool such as iperf to measure the performance of the network card.
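If a dedicated tool isn't handy, even a crude TCP blaster gives a first-order number. A sketch (the port is a placeholder; run `receive` on one machine and `send <host>` on the other):

```python
# Minimal TCP throughput check between two machines, as a rough stand-in
# for a dedicated tool such as iperf.
import socket
import sys
import time

PORT = 50007
CHUNK = b"x" * 65536

def receive():
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(65536)
        if not data:
            break
        total += len(data)
    secs = time.time() - start
    print(f"received {total / 1e6:.0f} MB in {secs:.1f} s "
          f"({total * 8 / secs / 1e6:.0f} Mbit/s)")

def send(host, seconds=10):
    s = socket.create_connection((host, PORT))
    end = time.time() + seconds
    while time.time() < end:
        s.sendall(CHUNK)
    s.close()

if __name__ == "__main__":
    if sys.argv[1] == "receive":
        receive()
    else:
        send(sys.argv[2])
```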
So far, so good (though improvements appreciated!). One possibility is to just grab the individual "Windows Experience Index" scores.
How do I go about characterizing interactions between these?
For example, the network <==> CPU interaction could be tested by writing a simple web service that just pumps data out as fast as it is requested. Obviously, this also characterizes IIS, but IIS is part of the eventual system anyway.
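A stand-in sketch of that service (the real test would be an IIS/ASP.NET handler; this Python version just shows the shape, with the port and payload size chosen arbitrarily):

```python
# "Pump data as fast as it's asked for" service: each request generates
# its payload on the fly, so both the CPU and the NIC are exercised.
from http.server import BaseHTTPRequestHandler, HTTPServer
import os

PAYLOAD_BYTES = 10 * 1024 * 1024   # 10 MiB per request

class PumpHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(PAYLOAD_BYTES))
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        sent = 0
        while sent < PAYLOAD_BYTES:
            # os.urandom keeps the CPU busy generating the data
            self.wfile.write(os.urandom(64 * 1024))
            sent += 64 * 1024

HTTPServer(("", 8080), PumpHandler).serve_forever()
```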
As another example, the network <==> hard disk interaction could be tested by using IIS to serve large static image files (with any caching turned off).
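The client side of either interaction test could be as simple as the following sketch, which issues concurrent requests for a large file and reports aggregate throughput (the URL and concurrency level are placeholders):

```python
# Hammer the server with concurrent requests for a large static file and
# report aggregate throughput; watch Perfmon on the server meanwhile.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://testserver/images/large.jpg"   # hypothetical test URL
CONCURRENCY = 16
REQUESTS = 200

def fetch(_):
    t0 = time.time()
    with urllib.request.urlopen(URL) as resp:
        n = len(resp.read())
    return n, time.time() - t0

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(fetch, range(REQUESTS)))
total_bytes = sum(n for n, _ in results)
elapsed = time.time() - start
print(f"{REQUESTS} requests, {total_bytes / 1e6:.0f} MB "
      f"in {elapsed:.1f} s ({total_bytes * 8 / elapsed / 1e6:.0f} Mbit/s)")
```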
Does anyone know of a suite of tests that are used to characterize these sorts of performance?
Obviously, the only real way to see how a server will work with the real software is to deploy it. We haven't finished building it yet, so that's not an option (yet!).