My team of developers has suggested a server structure for an upcoming project. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) run on different servers. Some components are more critical than others and will be subjected to more load.
Our proposal was to have one server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers. They're going to use blade servers.
Now, I'm not an expert at all, but my question to them was: if we need, for example, three 2GHz CPU / 2GB RAM machines, and you give me one machine with three 2GHz CPUs and 6GB of RAM, is that the same thing? They told me it is.
Is this accurate? What are the advantages and disadvantages of each solution? What are the generally accepted best practices? Could you point me to some reference URLs dealing with the problem?
EDIT:
Some more info. The (internet/intranet) application is already layered. We have some servers in the DMZ that expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services. One is a DAL that communicates with the database layer, one is our Single Sign On / User Profile application that gets called once per page, and one is a clone of the Internet-facing site to be used on our LAN.
Given that their requirements sound a bit 'woolly' and are actually quite low, I'd be strongly tempted to virtualise this. I'd start with just two blades and some shared storage; then you can create, modify and delete their VMs as required. You'll lose very little performance and gain a huge degree of flexibility, plus you can scale out linearly and with no user impact.
I think the sane way would be to identify where the crucial bottlenecks are or may occur. VMs are great for isolation, and depending on the kind of hypervisor used, they may have little impact on actual performance. Virtual networking would probably work better than physical networking as well.
However, I would recommend having a few physical machines for redundancy. If you have one physical server with a million VMs on it, when that one physical server dies (and it will) it will take a million VMs down with it.
Never put all your eggs in one basket!
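To put rough numbers on the eggs-in-one-basket point, here is a minimal sketch (my own illustration, not from the answer above) of the availability arithmetic; the 99.9% host availability is an assumed example figure:

```python
# Assumed example: each physical host is up 99.9% of the time, and
# failures are independent. These figures are illustrative only.

def all_on_one_host(host_availability: float) -> float:
    """Every VM shares the fate of the single physical host."""
    return host_availability

def at_least_one_host_up(host_availability: float, hosts: int) -> float:
    """Probability that at least one of several hosts stays up,
    i.e. that a spread-out service survives in some form."""
    return 1 - (1 - host_availability) ** hosts

print(all_on_one_host(0.999))          # one basket: 0.999
print(at_least_one_host_up(0.999, 3))  # three baskets: ~0.999999999
```

The point is not the exact numbers but the shape: a single host caps the availability of everything on it, while spreading services across a few hosts makes a total outage exponentially less likely.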
Are they talking about providing you with a single blade chassis? Because if so, that's still lots of individual servers, just contained in a housing unit. If they're literally talking about one beefy server to run the lot, they (probably) wouldn't be talking about blades.
Anyway, ignoring the blade thing, here's my perspective: if your app scales happily across multiple smaller servers, do that. Smaller servers are cheaper to purchase, you can easily scale sideways by adding more servers, and the individual apps tend to work more reliably if each has a server pretty much to itself.
However, there are extremes. It's pretty common to split architecture into at least 2 layers (front-end and database), or 3 layers (front-end, application, database), but unless you're creating an absolute monster of a system, you don't often need to go beyond that.
Can you provide any more info about the system you're developing? What kind of platform you're using, OS, language, user base, development lifecycle?
EDIT: Based on your edit, a further question comes up: what's the limiting factor of your current configuration? Are you running out of RAM, are the disks not reading fast enough, are you having trouble pulling records out of the DB quickly enough, or is the CPU maxed out? The limiting factor in what you have right now should be the main steer for where you go with this.
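A quick sketch of that triage, assuming you have already collected utilization figures for each resource (the metric names, values and the 85% threshold here are my own illustrative assumptions, not measurements from the poster's system):

```python
# Classify which resources are the limiting factor, given utilization
# figures normalized to 0..1. Threshold and metrics are assumed examples.

def limiting_factors(metrics, threshold=0.85):
    """Return resources over the threshold, worst first."""
    return [name
            for name, util in sorted(metrics.items(), key=lambda kv: -kv[1])
            if util > threshold]

# Example: RAM is saturated, so add memory before adding CPU.
print(limiting_factors({"cpu": 0.40, "ram": 0.93,
                        "disk_io": 0.55, "db_conns": 0.30}))
# → ['ram']
```

Whatever shows up at the top of that list is the resource your next hardware decision should be sized around.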
No, it's not the same. Depending on the virtualization platform, there will be overhead (anywhere from 5-30%) from the hypervisor layer.
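Applied to the poster's "three 2GHz machines" example, that overhead range works out roughly as follows (a sketch with the figures from this answer; the exact overhead depends entirely on the hypervisor and workload):

```python
# Effective guest CPU capacity after hypervisor overhead.
# The 5% / 30% figures are the range quoted in the answer above.

def effective_ghz(physical_ghz, overhead):
    """CPU capacity left for guests after the hypervisor's cut."""
    return physical_ghz * (1 - overhead)

total = 3 * 2.0  # three 2GHz CPUs consolidated into one box
print(effective_ghz(total, 0.05))  # best case: ~5.7 GHz usable
print(effective_ghz(total, 0.30))  # worst case: ~4.2 GHz usable
```

So a "3 x 2GHz" box does not deliver 6GHz to the guests; in the worst case you lose the equivalent of almost a whole machine to the hypervisor.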
It's tough without more specifics, but in general consolidation trades its advantages (fewer boxes to buy, power, and manage) against its disadvantages (a single point of failure and resource contention between the consolidated services).
In your particular case, one system might be OK if your service is dead whenever any one of these systems is down; in that case you haven't added any risk. It also sounds like those systems will not need 6GHz of processing power, in which case consolidation will certainly save money, since you wouldn't buy 6GHz just for those three machines. The other thing you need to watch for (and certainly not least in consideration) is I/O requirements and potential I/O contention.
As an alternative, if these web services can co-exist on the same machine, you might also consider making two VMs with the same services and either use clustering or simply keep one on standby should there be an issue.
For URLs, take a look at:
Windows Server Virtualization Guide
The Shortcut Guide to Implementing Virtualization in the Small Environment
A single-server setup can be either beneficial or very bad. You can maintain a single server more easily than three, but when it goes down, everything goes down with it.
With a multiple-server setup you can compartmentalize, but this might be useless if one of your three servers goes down and the other two rely on it to function, as you'll be in the same spot.
They could be wrong; it depends on the details. Architecture is not as simple as math, and we can't advise you further without more information.
Capacity requirements are best determined through load testing and empirical data. Don't rely on predictions or inspired guesswork.
Availability is another matter. Sure, multiple servers are better than one. But there may be other factors that influence the architecture, primarily cost: especially if this is not open source and there are per-server license fees, or you are using third-party components that require royalty payments.
Blade servers do not make good virtual servers. You are severely limited on bandwidth and memory, the two biggest demands in a virtual environment. We tend to run around 4 virtuals per core. Even with the best blade chassis you are limited to 40Gb of network connectivity per blade. If you are running 32 high-performance web virtuals, even with oversubscription you will still be maxing out your connections per virtual. And if you do get 40Gb of connectivity to a blade, you will need to sacrifice storage networking and go with a converged fabric, which takes away from the needed web bandwidth. A Dell 910 or an HP 585 makes a good virtualization platform: one server with 24 cores and 140Gb of network, running 96 virtuals.
Next you want to right-size the virtuals. Never give a virtual more than one core; always scale out. If you add additional cores, CPU scheduling causes artificial load and locks. For a good Apache virtual we run 1 core, 1.8GB of memory and ~4Gb of network. Under load the virtual sticks at about 2GHz, and the server reports a load of 1.28. Our database virtuals run 1 core, 6GB of memory and ~4Gb of network.
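The sizing arithmetic above can be sketched as a back-of-the-envelope calculation (the 256GB of host RAM is my assumed figure; the core count, network capacity and per-VM sizes are the examples from this answer, and treating each resource as an independent cap is a simplification that ignores oversubscription):

```python
# How many single-core VMs fit on a host: the tightest of the three
# constraints (cores, RAM, network) wins. Host RAM is an assumed value.

def vms_per_host(cores, ram_gb, net_gbit,
                 per_vm_ram_gb, per_vm_net_gbit, vms_per_core=4):
    """Return the VM count allowed by the binding constraint."""
    by_cpu = cores * vms_per_core
    by_ram = int(ram_gb // per_vm_ram_gb)
    by_net = int(net_gbit // per_vm_net_gbit)
    return min(by_cpu, by_ram, by_net)

# 24-core host, assumed 256GB RAM, 140Gb network; Apache VMs at 1.8GB / 4Gb:
print(vms_per_host(24, 256, 140, 1.8, 4))
# → 35: network-bound, well short of the 96 the core count would allow
```

This is exactly the point about blades: without oversubscription, bandwidth rather than CPU is what caps the number of high-traffic web virtuals per box.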
Always separate applications; you cannot properly tune a virtual if you keep adding variables. A single virtual should have a single role and should perform one thing very well. As for the DMZ: it should be on a separate physical virtualization platform, for regulatory requirements and protection.