I need to purchase servers, and need to understand the right way to specify them.
Based on the answers to this question, there are several parameters to take into account:
- CPU speed + Number of CPU cores
- RAM + Virtual Memory size
- Hard disk size and performance
- Network interface card performance
I'm ignoring cost, size, and power for now. The operating system, if it matters, has to be Windows Server 2008 R2.
Because the final application is IIS based, I want to set up a series of tests that will help me characterize each of these individually, and two or more of them in concert.
For example, separately:
CPU: Write a simple in-memory mathematical workload. Spawn several threads that each run it. Measure system resources (Perfmon), FLOPS achieved, and the number of threads, all as a function of time.
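A minimal sketch of that CPU test (Python here purely for brevity; any language works, though note that CPython needs processes rather than threads to load multiple cores, because of the GIL):

```python
# Minimal CPU-load sketch: spawn one worker per core, each running a
# tight floating-point loop, and report aggregate operations per second.
# Uses multiprocessing rather than threads because CPython's GIL would
# serialize pure-Python math across threads.
import math
import multiprocessing
import time

OPS_PER_WORKER = 5_000_000

def burn(_):
    # A tight loop of floating-point work.
    x = 1.0001
    for _ in range(OPS_PER_WORKER):
        x = math.sqrt(x * x + 1.0)
    return x

if __name__ == "__main__":
    cores = multiprocessing.cpu_count()
    start = time.perf_counter()
    with multiprocessing.Pool(cores) as pool:
        pool.map(burn, range(cores))
    elapsed = time.perf_counter() - start
    total_ops = OPS_PER_WORKER * cores
    print(f"{cores} workers: {total_ops / elapsed / 1e6:.1f} M ops/sec aggregate")
```

Run it with Perfmon capturing `% Processor Time` per core, then repeat with different worker counts to see how throughput scales.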
RAM / Virtual Memory: Write a program that deliberately leaks memory at a steady rate. Measure system resources as a function of time.
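A sketch of such a deliberate leaker, again in Python for illustration:

```python
# Minimal memory-pressure sketch: grab a chunk every second and never
# release it, while watching Perfmon counters (Memory\Available MBytes,
# Pages/sec) as usage climbs and paging eventually kicks in.
import time

CHUNK_MB = 50
held = []

try:
    while True:
        chunk = bytearray(CHUNK_MB * 1024 * 1024)
        # Touch every page so the OS actually commits physical memory
        # rather than just reserving address space.
        for i in range(0, len(chunk), 4096):
            chunk[i] = 1
        held.append(chunk)
        print(f"holding ~{len(held) * CHUNK_MB} MB")
        time.sleep(1)
except MemoryError:
    print("allocation failed; the system is out of memory")
```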
Hard disk speed: Create a large data file on disk. Measure system resources and the access times for random reads within the file as a function of time.
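A sketch of the random-read half of that test (the path and sizes are placeholders; the file must comfortably exceed RAM, or the OS file cache will serve most reads from memory and you'll be benchmarking RAM instead):

```python
# Minimal random-read sketch: create a large file of real data once,
# then time random 4 KB reads scattered across it. Unbuffered open so
# Python doesn't read ahead behind our backs.
import os
import random
import time

PATH = "testfile.bin"      # hypothetical path; put it on the disk under test
FILE_SIZE = 4 * 1024**3    # 4 GB; should exceed installed RAM
BLOCK = 4096
READS = 2000

if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        # Write real data: a sparse/truncated file on NTFS would return
        # zeros without ever touching the disk.
        chunk = os.urandom(1024 * 1024)
        for _ in range(FILE_SIZE // len(chunk)):
            f.write(chunk)

with open(PATH, "rb", buffering=0) as f:
    start = time.perf_counter()
    for _ in range(READS):
        f.seek(random.randrange(0, FILE_SIZE - BLOCK))
        f.read(BLOCK)
    elapsed = time.perf_counter() - start

print(f"{READS} random {BLOCK}-byte reads in {elapsed:.2f}s "
      f"({elapsed / READS * 1000:.2f} ms avg)")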
Network speed: Use a tool like this one to measure the performance of the network card.
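If a canned tool isn't handy, a crude raw-TCP throughput check is easy to sketch; run the server half on one machine and the client half on another, and compare the reported rate against the NIC's nominal speed:

```python
# Minimal TCP throughput sketch. Usage:
#   python nettest.py server            (on the receiver)
#   python nettest.py client <host>     (on the sender)
import socket
import sys
import time

PORT = 50007                 # arbitrary test port
PAYLOAD = b"\x00" * 65536
TOTAL = 1 * 1024**3          # send 1 GB

if sys.argv[1] == "server":
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    received = 0
    start = time.perf_counter()
    while True:
        data = conn.recv(65536)
        if not data:
            break
        received += len(data)
    elapsed = time.perf_counter() - start
    print(f"received {received / 1e6:.0f} MB at "
          f"{received * 8 / elapsed / 1e6:.0f} Mbit/s")
else:
    cli = socket.socket()
    cli.connect((sys.argv[2], PORT))
    sent = 0
    while sent < TOTAL:
        cli.sendall(PAYLOAD)
        sent += len(PAYLOAD)
    cli.close()
```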
So far, so good (though improvements appreciated!). One possibility is to just get the "Windows Experience Index" individual scores.
How do I go about characterizing interactions between these?
For example, the network <==> CPU interaction could be tested by writing a simple web service that just pumps data out as fast as it is asked for. Obviously, this also characterizes IIS, but that's part of the eventual system anyway.
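As a language-agnostic sketch of that "pump data out" service (Python's standard-library HTTP server here, purely for illustration; the real test would be an IIS/ASP.NET handler, but the shape is the same):

```python
# Every GET streams 100 MB of generated data, so the CPU, the TCP
# stack, and the NIC are all exercised together, with the disk idle.
from http.server import BaseHTTPRequestHandler, HTTPServer

RESPONSE_MB = 100
CHUNK = b"\xAB" * 65536

class PumpHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(RESPONSE_MB * 1024 * 1024))
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        for _ in range(RESPONSE_MB * 1024 * 1024 // len(CHUNK)):
            self.wfile.write(CHUNK)

if __name__ == "__main__":
    HTTPServer(("", 8080), PumpHandler).serve_forever()
```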
As another example, the network <==> hard disk speed interaction could be tested by using IIS to serve static large image files (with any caching turned off).
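The client side of that static-file test could be as simple as the sketch below; the URL is hypothetical, and the file should be large enough that turning caching off actually forces disk reads:

```python
# Repeatedly fetch a large static file from the IIS box and report
# per-run throughput. Standard library only, so it runs anywhere.
import time
import urllib.request

URL = "http://server-under-test/images/large.jpg"  # hypothetical
RUNS = 10

for i in range(RUNS):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        size = 0
        while True:
            data = resp.read(65536)
            if not data:
                break
            size += len(data)
    elapsed = time.perf_counter() - start
    print(f"run {i + 1}: {size / 1e6:.1f} MB in {elapsed:.2f}s "
          f"({size * 8 / elapsed / 1e6:.0f} Mbit/s)")
```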
Does anyone know of a suite of tests that are used to characterize these sorts of performance?
Obviously, the only real way to see how a server will work with the real software is to deploy it. We haven't finished building it yet, so that's not an option (yet!).
There are plenty of existing benchmarks to characterise system performance in various ways. The "Windows Experience Index" is one such measure, but there are plenty of others (endless CPU benchmarks; memory throughput measures; disk I/O in both streaming and random-access modes, in any number of permutations; actual useful NIC speed in bps and pps, with and without TCP; and bus speeds in all their infinite complexities).
However, none of this is of any use to you in answering the question, "how will my application perform?", because you don't know what the performance requirements and characteristics of your application are (because it doesn't exist yet). For example, the "Windows Experience Index" is a measure of how well various hardware components work at running Windows. The clever people at Microsoft measured the way that abstract performance benchmarks relate to the proper operation of Windows, and then used that to create their index. You cannot meaningfully take a number like the PassMark score (to take one random example) and turn it into a useful measure without knowing how much CPU time your application takes to process a request.
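To make that concrete, here is the capacity arithmetic that the missing number blocks; every figure in it is invented for illustration:

```python
# The calculation is trivial once you know the per-request cost,
# and impossible without it. All numbers here are made up.
cores = 8
cpu_seconds_per_request = 0.050   # unknowable until the app exists

requests_per_second = cores / cpu_seconds_per_request
print(f"~{requests_per_second:.0f} req/s, CPU-bound")   # ~160 req/s
```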
Now, if you're like 99% of the people who've asked this question before, and gotten this exact same answer, you'll probably experience some sort of "but... but... but... I need it!" reaction. The only possible response to that boils down to "Tough luck". The laws of physics don't care that we want to travel faster-than-light, and the laws of performance analysis don't care that you want to know the unknowable.
(And in case you're wondering, having to repeat this type of answer over and over isn't a lot of fun, which is why we don't like "what server do I need?" questions on Server Fault. Marking them as dupes doesn't work, because then people complain that their questions weren't dupes, since person A wanted to know how many Wordpress blogs he could run while person B wanted to know how many Drupal sites he could run).