After a few comments, answers, and some more thinking, I hope I can now add a
TL;DR: If I want full performance (a) and (simple) redundancy against hardware failure, does it make any sense to go for a virtualization solution with more than one guest per hardware box?
(a) -> parallel C++ builds with expected (very) high CPU and disk utilisation
Let me start by saying that I'm a total noob when it comes to server virtualization. That is, I use VMs often during development, but to me they're simple desktop-machine things.
Now to my problem: We have two (physical) build servers, one master and one slave, running Jenkins to do daily tasks and build our release packages (Visual C++ builds) for our software. As such, these machines are critical to our company: we do lots of releases, and without a controlled environment to create them we can't ship fixes. (Currently there is no proper backup of these machines in place, because they don't hold any data as such; it would just be a major pain to set them up again should they go bust. But setting up a backup that I'd know would actually work in case of hardware failure would be even more pain, so we have skipped that until now.)
Therefore (and for scaling purposes) we would like to go virtual with these machines.
Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts.
Each build server (master or slave) is a fully configured Windows Server box (installs, licenses, and, in the case of the master, shares). I would now ideally like to simply convert the two existing physical nodes to VM images and run them, and later add more VM slave instances as clones of the existing ones.
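For the physical-to-virtual step itself, the common hypervisors come with free converter tools. As a minimal sketch, assuming you end up on Hyper-V: the Sysinternals Disk2vhd tool can capture the running box into a VHD image from the command line (the output path below is just a placeholder):

    # capture all volumes of the running build server into a single VHD (hypothetical path)
    disk2vhd * E:\p2v\build-master.vhd

VMware ships a comparable (GUI) converter, vCenter Converter, if you go that route instead.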
And here begin my questions:
- Should I go for one VM per hardware box, or for a setup where a single physical machine hosts multiple VMs?
- The latter would mean a single point of failure hardware-wise and doesn't seem like a good idea ... or does it?
- Since we're doing C++ compilation with Visual Studio, I assume the hardware (processor cores + disk) will be fully utilized during a build (see the sketch after this list), so running more than one build node per physical machine doesn't seem to make much sense, does it?
- With respect to hardware options, does it make any difference which VM software we use (VMware, MS, VirtualBox, ...)? (We're using Windows exclusively for our builds.)
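To make the "fully utilized" assumption from the list above concrete: a Jenkins job for a Visual C++ build typically boils down to an MSBuild invocation, and with project-level parallelism (/m) plus the compiler's multi-processor switch (/MP, enabled as a project setting) it will happily saturate all cores. A rough sketch of the kind of command the build node runs (solution name, configuration and platform are placeholders):

    # /m lets MSBuild build independent projects in parallel, one worker per core;
    # "Multi-processor Compilation" (/MP) in the project settings additionally fans
    # the .cpp files of each project out across cores
    msbuild OurProduct.sln /m /p:Configuration=Release /p:Platform=x64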
As stated above, I'm starting to think it doesn't make sense to go for more than one guest per hardware box, performance-wise. Are there other considerations?
Regarding budget: we have a normal small-company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, then so be it. If it's free, all the better. I strongly prefer solutions without multi-k$ maintenance costs per year.
I can only answer the last part:
If you are going to use only Windows for the virtualized guests, then Hyper-V will be the best VM option available to you, because of its high-performance virtualization of Windows guests.
The same applies to Xen for Linux virtualization.
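To give a feel for what the Hyper-V route looks like in practice, here is a hedged sketch of bringing up a build VM from a converted disk image with the Hyper-V PowerShell module (Server 2012 or later; VM name, RAM, core count and VHD path are all placeholders):

    # create the VM from the converted system disk, give it plenty of cores and RAM
    New-VM -Name "build-master" -MemoryStartupBytes 8GB -VHDPath "D:\VMs\build-master.vhd"
    Set-VMProcessor -VMName "build-master" -Count 8
    Start-VM -Name "build-master"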
I'm not familiar with Hyper-V, but it looks like it has licensing costs. The next version of Proxmox is going to have High Availability, so if you convert your two existing hosts into Proxmox hosts and invest in a little NAS/SAN (e.g. Synology), you can have a pretty decent setup for a modest hardware cost (Proxmox is open source). (But note that this setup doesn't include backup either.)
Note that you'll want to use the virtio-win drivers in your Windows guests to get decent disk and network performance.
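As a rough sketch, assuming the driver ISO (available from the Fedora project) has already been uploaded to the host's "local" storage and the guest has VM ID 101, it can be attached as a CD-ROM so the drivers can be installed from inside Windows:

    # attach the virtio-win driver ISO to the Windows guest as a CD-ROM drive
    qm set 101 --ide2 local:iso/virtio-win.iso,media=cdrom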
Hope this helps.