I'm interested in modelling various server/network configurations for a web application. I'd like to know in advance which parts of the system will be bottlenecks, and whether those bottlenecks are CPU-, memory-, or network-bound.
One thing I've been considering is taking a single test server and setting up each 'real' server as a virtual machine on it, configured as it would be in the wild. I'm going to try this, but wanted to ask the serverfault community whether anyone has tried this approach before. Is it viable?
I'm not expecting benchmarks or anything like that, of course, but I'm thinking it might be useful for modelling relative performance, highlighting bottlenecks, and providing a sanity check on the architecture.
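To make the plan concrete, here's a rough sketch of what I have in mind: a minimal Python model of a hypothetical three-tier topology (the server names, specs, and host size below are placeholders, not my real setup) that just checks whether a single test host could carry all the VMs at once.

```python
# Rough sketch: declare the production topology as per-VM specs and check
# whether a single test host can hold them without heavy over-commitment.
# All names and numbers are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Spec:
    vcpus: int
    ram_gb: int
    disk_gb: int


# Hypothetical production layout, one VM per 'real' server.
vms = {
    "lb":   Spec(vcpus=2, ram_gb=2,  disk_gb=20),
    "web1": Spec(vcpus=4, ram_gb=8,  disk_gb=40),
    "web2": Spec(vcpus=4, ram_gb=8,  disk_gb=40),
    "db":   Spec(vcpus=8, ram_gb=32, disk_gb=200),
}

# The single test server the VMs would share.
host = Spec(vcpus=16, ram_gb=64, disk_gb=500)

# Total up what the VMs need and compare against what the host has.
totals = Spec(
    vcpus=sum(v.vcpus for v in vms.values()),
    ram_gb=sum(v.ram_gb for v in vms.values()),
    disk_gb=sum(v.disk_gb for v in vms.values()),
)

for field in ("vcpus", "ram_gb", "disk_gb"):
    need, have = getattr(totals, field), getattr(host, field)
    status = "OK" if need <= have else "OVER-COMMITTED"
    print(f"{field:8s} need={need:5d} have={have:5d} {status}")
```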
All that will be good for is sanity-checking your overall architecture and ensuring that all the bits involved can get where they need to go.
Other than that, the hardware and network will be different enough from production that any performance issues or bottlenecks you find may or may not be present in production.
The only way it can be viable and produce worthwhile results is if each virtual machine has an identical structure and resources to the production servers. Whether that's achievable or not only you can say, as we know nothing about your infrastructure.
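For example (purely a sketch, with placeholder specs and image paths; check the virt-install flags against your version), you could generate the VM definitions from the production specs so the CPU and memory caps stay in sync with what production actually has:

```python
# Sketch: emit virt-install commands that cap each VM at the same CPU/RAM
# it has in production, so contention on the shared host is the only
# deliberate difference. Specs and disk image paths are placeholders.

specs = {
    "web1": {"vcpus": 4, "memory_mib": 8192,  "disk_gb": 40},
    "db":   {"vcpus": 8, "memory_mib": 32768, "disk_gb": 200},
}

for name, s in specs.items():
    cmd = (
        f"virt-install --name {name} "
        f"--vcpus {s['vcpus']} --memory {s['memory_mib']} "
        f"--disk path=/var/lib/libvirt/images/{name}.qcow2,size={s['disk_gb']} "
        f"--import --os-variant generic"
    )
    print(cmd)
```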
I've never done this on the scale you're talking about, but I have used virtual machines to replicate production web servers and experiment with configuration changes offline before implementing them on the live system.