I've got a dual Xeon E5504 server with [for now] only 8GB of RAM. Storage isn't impressive either: 3x 146GB SAS in RAID 5 + 500GB SATA drives. Currently it works as a development server, but it's over-specced for our needs, and since our development methods have changed over the last 2 years we decided it will work as a production system for some of our applications + we would like to have a separate system for testing/research. Our apps are mainly web apps deployed on Tomcats [plural, as some of the apps require older versions] and connected to Postgres.
I would like to have a production system where only httpd + Tomcat + DB are set up and nothing else runs there. A sterile system. Apart from that, I would like a test system where I can play with different JVM settings, deploy my test apps, play with Tomcat/httpd settings, and restart them without interfering with the production system.
Apart from that, I would like to be able to play with different Linux flavors and newer kernels to test how they work, etc. I know this is not possible with OpenVZ and I would have to choose KVM for that. I am thinking about merging the two: setting up KVM to be able to work with different systems [Linux only, to be frank] + using OpenVZ to set up separate machines for my development needs. I would simply go with that, but reading here and there about the performance impact full virtualization has compared to containers, and looking at the specs of my server, makes me think twice about it. I don't want to lose too much performance, especially given the nature of my apps [a few JVMs running at the same time].
It will be my first time with virtualization, apart from using desktop VirtualBox/VMware Server. Although I am a fast learner, I don't want to mess with the main system so much that it breaks the production apps or makes them crawl. Although they are more or less internal apps and they don't produce much load, they need to be stable.
I've read that a KVM host is a normal Linux installation and allows normal processes to run on it. If that is so, does it allow running OpenVZ as well? I mean... can I have KVM and OpenVZ running on the same system/kernel? Or do I have to set up another system to run OpenVZ containers? How much performance impact can this have for me? Will my hardware suffice?
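As a sanity check on the hardware side, I did at least verify that the CPUs expose the hardware virtualization flag KVM needs (vmx on Intel; the E5504 should report it). A minimal Python sketch of that check:

```python
# Minimal check: KVM needs hardware virtualization support, which the
# kernel advertises as the "vmx" (Intel VT-x) or "svm" (AMD-V) CPU flag.
def kvm_capable():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

print("KVM-capable CPU" if kvm_capable() else "no vmx/svm flag found")
```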
Oh, and one more thing... unfortunately I'm quite limited with funds... I'm looking for a free solution only :/
The difference is the virtualisation approach. KVM is a kernel module (hence the name) that runs as part of the Linux kernel and allows for either native (hardware-assisted) virtualisation or paravirtualisation. You're running a complete system environment.
OpenVZ runs the OS in containers, similar to BSD jails. It's more efficient when all your VMs are running the same OS (say, Linux), and effectively each 'guest' is an instance of an OS that's chrooted inside the main OS.
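To make the "chrooted inside the main OS" point concrete, here's an illustrative Python sketch. This is not how OpenVZ is actually implemented - it also isolates PIDs, networking and resources - but each container's userland really does live in a directory on the host (/vz/private/&lt;CTID&gt; by default), and entering it is conceptually a chroot:

```python
import os

# Illustration only: the filesystem half of container-style isolation.
# The container ID (101) follows OpenVZ's default layout but is just an
# example value. Requires root.
container_root = "/vz/private/101"

os.chroot(container_root)          # this process now sees the container's tree as "/"
os.chdir("/")
os.execv("/bin/sh", ["/bin/sh"])   # run a shell "inside" the guest's filesystem
```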
KVM will give you significantly more flexibility in terms of OS - you can run a Windows or FreeBSD VM inside a KVM host. However, each VM will need its own disk image and memory allocated to it.
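For instance, before a KVM guest can even boot, you pre-create its disk image and decide its RAM up front. A hedged sketch (the path and size are made-up examples) driving qemu-img from Python:

```python
import subprocess

# Sketch: a KVM guest's disk is a real file on the host, created before
# the VM exists; its RAM is reserved when the VM starts. Path and size
# below are example values, not recommendations.
disk_path = "/var/lib/libvirt/images/testvm.qcow2"

subprocess.run(
    ["qemu-img", "create", "-f", "qcow2", disk_path, "20G"],
    check=True,
)
```

(qcow2 images grow on demand, so the 20G isn't consumed immediately, but it is still a fixed ceiling carved out per guest.)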
OpenVZ is better when you want to run a mass of Linux VMs and need to be able to flexibly allocate resources between them. Since the systems all share the same file system, kernel and memory space, a container doesn't need memory reserved for its exclusive use.
It sort of depends on what you are doing. OpenVZ is excellent from a hosting perspective because it's a very efficient use of hardware: all unused resources are available for use. So you can have higher container density, and that can get passed on to customers in the form of lower per-container prices. With KVM, any memory or HD space the customer is not using is wasted. You therefore cannot put as many customers on a host, or you need more RAM/HD space, etc., which gets passed on to the customer as higher prices.
From the user perspective they are getting more bang for the buck with OpenVZ, because no memory is spent on the kernel, kernel modules and hardware utilities. It ends up being quite a bit of memory savings. With the basic server, core, and base groups installed I see about 15MB of memory in use on an OpenVZ container, not including cache/buffers, as opposed to over 100MB on KVM, not including cache/buffers. Both using CentOS 6 64-bit and measured immediately after a reboot. So a 512MB VPS plan from an OpenVZ provider actually gives the customer about 100MB more usable memory than a 512MB KVM plan.
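If you want to reproduce that measurement yourself, the figure I'm quoting is the usual MemTotal - MemFree - Buffers - Cached arithmetic from /proc/meminfo; a small Python sketch (field names as on a CentOS 6-era kernel):

```python
# Memory in use excluding buffers/cache, i.e. the numbers quoted above.
# /proc/meminfo reports its values in kB.
def used_mem_mb():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])
    used_kb = info["MemTotal"] - info["MemFree"] - info["Buffers"] - info["Cached"]
    return used_kb / 1024.0

print("used (excl. buffers/cache): %.0f MB" % used_mem_mb())
```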
The other benefit of OpenVZ is that it has very low overhead - probably not much more than 2 or 3% above running on bare metal. In fact, because the overhead is so low, we usually put dedicated server customers in a single OpenVZ container because it's so much easier to manage. If they want to migrate to a different data center or upgrade the hardware, it is as simple as point and click: just migrate the container to the new hardware and you're done. No re-install or re-configuration required. KVM is not nearly as efficient, although I don't have any comparison numbers; I can just tell from using it and watching load averages. I would say maybe 10% more overhead, so 12-13% overhead vs 2-3%. A lot of people seem to think KVM overhead will improve over time, but I haven't seen much change in the past couple of years. I think at first there were improvements, but it has matured quite a bit and the performance improvements have flattened out.
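Those overhead figures are eyeballed from load averages rather than benchmarked; if you want to log the same thing yourself, Python's os.getloadavg() returns exactly the figures uptime prints:

```python
import os
import time

# Log the 1-, 5- and 15-minute load averages (the same figures `uptime`
# shows) - crude, but enough for a rough before/after comparison of the
# same workload on bare metal, OpenVZ and KVM.
while True:
    one, five, fifteen = os.getloadavg()
    print("load: %.2f %.2f %.2f" % (one, five, fifteen))
    time.sleep(60)
```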
The only major downside of OpenVZ is that the customer cannot access or do anything to the kernel. Some of the things they need can be done by the provider or set up for customer access in control panels, like what SolusVM does with TUN/TAP and PPP. The vast majority of hosting customers do not need access to the kernel.
I've tried Proxmox out and it seems like the best solution for me. I'm very happy after a few days of usage and I hope for the best. Mind you, it's an opinion based on a few days' usage, but I haven't found anything else/easier to serve the purpose.