For the 5 (physical) servers in our datacenter, I'm looking for a unified virtualization design. None of the 5 servers has hardware virtualization support (older Xeons). Since we're an NGO on a low budget, price plays a key role here.
The number of virtual machines needed varies with the deployment strategy we choose (JBoss/PostgreSQL clustering, load balancing, etc.). In total, we need about 15, most of them with very low performance needs.
So far, we've had good experiences with Xen (open source), but we don't have any experience with unified management solutions for Xen environments (Ganeti, openQRM, Citrix XenServer, ...).
In case it is important: all servers run on Linux software RAID1.
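As an aside, since the guests will sit on top of md RAID1, it's worth checking the arrays regularly on each host. A minimal sketch (array and device names vary per machine):

```shell
#!/bin/sh
# Quick health check for Linux software RAID (md) arrays.
# /proc/mdstat lists all md arrays and their sync state:
# [UU] means both mirror halves are up, [U_] means one half failed.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
else
    echo "no md arrays found on this host"
fi
```

For per-array detail (spare disks, rebuild progress), `mdadm --detail /dev/md0` gives more than /proc/mdstat does.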
So my questions are:
- Is there a considerable performance loss for para-virtualized Linux guests vs. Linux running directly on the hardware (i.e. with no virtualization)? If so, it should be possible to organize the servers so that the performance-critical applications run on non-virtualized hosts. If I understand openQRM correctly, it can manage such non-virtualized servers as well. What about other solutions?
- Which (free) management solution would you recommend? In particular, would it be possible to manage the whole system using XenServer? If, at a later stage, we can afford new hardware (in particular, a shared storage solution), the chosen solution should support (live?) migration of guests from one host to another.
- I suppose all management solutions need a dedicated, non-virtualized management server? What happens if that server fails? How can we work around this reliability bottleneck?
Thanks a lot for your advice!
Citrix XenServer provides the centralized management you are looking for. With the free version, you are able to move a guest that is powered off from server A to server B. However, the XenServer management server is the part that costs money; it provides high-availability features such as hot fail-over and the like.
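If the two hosts don't share storage, moving a halted guest boils down to an export/import via the `xe` CLI. A hedged sketch only; "myvm" and the paths are placeholders for your setup:

```shell
# Move a powered-off guest between two standalone XenServer hosts.
# "myvm" and /mnt/backup are placeholders; adjust for your environment.

# On server A: halt the VM and export it to a portable .xva archive.
xe vm-shutdown vm=myvm
xe vm-export vm=myvm filename=/mnt/backup/myvm.xva

# Copy the archive to server B (scp, NFS mount, ...), then on server B:
xe vm-import filename=/mnt/backup/myvm.xva
```

With shared storage and a resource pool, the same move is a single click in XenCenter instead.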
You might want to checkout Proxmox Virtual Environment (VE): http://pve.proxmox.com/wiki/Main_Page
It provides support for OpenVZ and KVM instances via a bare-metal Debian install. Since KVM requires hardware virtualization, you wouldn't be able to use it on your CPUs, but you could use OpenVZ.
They provide Proxmox VE Cluster which "enables central management of multiple physical servers". The central management is done via a web-based admin interface.
They also support live migration: "Proxmox VE supports live migration of Virtual Machines via the web interface."
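The same migrations can also be triggered from the command line on a Proxmox node. A rough sketch, assuming a guest with ID 101 and a cluster node named node2 (both placeholders):

```shell
# Live-migrate an OpenVZ container (ID 101) to another cluster node.
# IDs and hostnames are placeholders; the web interface does the same thing.
vzmigrate --online node2 101

# For a KVM guest (not an option on CPUs without hardware virtualization):
qm migrate 101 node2 --online
```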
I believe the entire solution is FOSS.
Cheers
"Best" is always very subjective, as is inexpensive, but for free the best options are Citrix XenServer and Oracle VM.
They're both based on Xen, and both have management GUIs, various tools, etc. In terms of functionality of these commercial-but-free products, I think only Oracle VM supports both live migration and high availability in its free version.
For your other questions: the performance difference between virtualized and "bare-metal" operation is typically around 5-10%, which is not significant compared to the management benefits. The hit is usually in disk I/O rather than CPU usage.
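You can quantify that disk I/O hit yourself by running the same sequential-write test on bare metal and inside a guest and comparing the MB/s figures. A crude sketch (a real benchmark tool or your actual workload gives more meaningful numbers):

```shell
#!/bin/sh
# Crude sequential-write throughput test; run identically on bare metal
# and inside a para-virtualized guest, then compare the reported MB/s.
# conv=fdatasync forces the data to disk so the page cache doesn't
# inflate the result.
dd if=/dev/zero of=ddtest.bin bs=1M count=64 conv=fdatasync 2>&1
rm -f ddtest.bin
```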
You're right that these implementations often have a single non-virtual management server, but generally there's no reason why it can't be virtual itself. It's just that tracking it down in the event of a failure can be tricky: you end up asking "which box has it ended up on now that it failed over?". Assuming you set up the correct failover rules, you'll be able to work it out without a problem, and with only 5 physical hosts it's not a huge issue to track it down anyway.
Proxmox won't work here, since it only supports 64-bit! A 32-bit installation process is only partially documented, and only for an older 1.x version of Proxmox VE, which is no longer comparable with today's versions.
Has a decision been made in the meantime? Which virtualization solution is powering your Xeons now?
Regards.