I have my own email server, and it serves a couple of dozen users. I need to replace it right now, and I want the replacement to be a virtual server image running under a hypervisor.
My plans for the new server system include the following:
Run only free, open-source software.
Run at least three virtual images: email server, HTTP server, and SSH server. I plan to run a webmail system (such as SquirrelMail).
Hypervisor OS will be Debian Stable (which, right now, is Debian 5.0 "lenny"). Guest OSes will also be Debian Stable.
Software RAID using the two hard drives in a mirroring (RAID 1) configuration.
I need to get the hypervisor and the email guest image up and running as soon as possible, because I am worried that my old server may be about to have a hardware failure. (It is rebooting itself about three times a day!)
This is my golden opportunity to set things up right for the future. What is the perfect setup? How should I configure my system?
My major questions:
Should I use KVM? I was planning to use Xen but I have seen, in other ServerFault questions, some people recommending KVM as the best choice for the future. I need something stable and reliable now, and I need to get it working quickly... if Xen is more stable or if KVM is tricky, I can go with Xen for now. (Debian will not soon drop support for Xen!)
Should I use LVM with my hypervisor, or leave that out? I tend to like things to be as simple as possible, and LVM seems like it would add another whole layer of complexity; but on the other hand, I think it is stable and mature by now, and perhaps the flexibility will be valuable if the needs of my virtual server images change.
Is there some GUI or web-based tool I can use to administer KVM/Xen? My current email server doesn't even have X11 on it; I only administer it via SSH.
Any other advice or tips would be gratefully accepted.
In case you want to know about my hardware, here are the important basics:
AMD BE-2300 chip (dual-core; does support AMD-V virtualization instructions)
4 GB RAM
two identical 250 GB Seagate hard drives
Honestly, I don't see what benefit you would get from using any virtualization technology.
In my opinion, virtualization is a cool technique which doesn't fit everywhere, and going virtual just because everyone does isn't a good idea (again, in my opinion).
Since you are running the same OS, Linux, on both the host and the guest VMs, I suggest that you choose between User Mode Linux (UML) and OpenVZ.
UML began life as a modified Linux kernel that could be booted as a user-mode process. It has been widely used by hosting companies and by people who need to run large numbers of VMs on a single server. OpenVZ comes from a more enterprise background and is modeled more on Solaris containerisation. The idea is that you can partition your system and install software in a container that does not affect the rest of the system. To remove the software, simply delete the container.
Have a look at the two websites before you decide. I think OpenVZ is the better fit for you, but a lot depends on your future plans; that is, it is best to choose the one you are most likely to use for work in the future.
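If you do go the OpenVZ route, the day-to-day workflow looks roughly like the sketch below. This is a hedged example, not a recipe: the container ID, template name and IP address are placeholders, and the host needs an OpenVZ-patched kernel before any of it will run.

    apt-get install vzctl vzquota                          # OpenVZ userspace tools (plus an OpenVZ kernel)
    vzctl create 101 --ostemplate debian-5.0-i386-minimal  # build a container from a cached template
    vzctl set 101 --ipadd 192.168.1.101 --hostname mail --save
    vzctl start 101
    vzctl enter 101                                        # root shell inside the container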
Both UML and OpenVZ are quite different from Xen and KVM. In a nutshell, Xen and KVM are full-blown virtualisation hypervisors that can run any operating system supported on x86 hardware, while UML and OpenVZ are an extension of the chroot-jail concept that isolates groups of Linux processes from one another. If you plan to stick with Linux only, then it's best to avoid the complexities of Xen and KVM.
So OpenVZ and UML extend the functionality of a Linux system, while Xen or KVM let the same machine host non-Linux guests such as MS Windows, FreeBSD, OpenSolaris and others.
If you want to run Xen, then you should run a distro that fully supports it, such as OpenSUSE 11.
My old server finally died. I had to bring up the new server in a hurry.
So I went ahead with my original Plan A, and used Xen.
Here is my setup. I don't know if it is "perfect" but this is what I figured out:
The server has two identical hard disks, partitioned identically; matching partitions on the two disks are paired into software RAID 1 (mirror) devices.
The /dev/md1 device is formatted as an LVM physical volume.
GRUB is installed in /boot, which is a plain ext3 partition.
The Dom0 system is installed in /dev/md0, which is also a plain ext3 partition.
A "rescue" system is installed in partition 5, also a plain ext3 partition. This is a complete bootable Debian, and in fact was the first thing I installed; I installed the rest of the disk from that system.
Both disks have GRUB installed and the "rescue" system. It should be possible, in an emergency, to boot some sort of Linux system from one of the two disks, in order to fix a problem and get the server going again.
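To give an idea of how the pieces fit together, here is a rough sketch of building the mirrors and the volume group. The partition numbers and the volume group name are illustrative, not my exact layout:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # mirror for the Dom0 root
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # mirror for guest storage
    pvcreate /dev/md1                  # turn the second mirror into an LVM physical volume
    vgcreate vg0 /dev/md1              # volume group that will hold the DomU disks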
At first I tried to use the "libvirt" tools for Xen, such as "virt-manager". Based on my experience, I must say that "libvirt" is half-baked in Debian 5.0 Lenny, and I do not recommend it.
I then turned to the older tools, the "xen-tools" stuff; in particular, "xen-create-image". Because my users all have a Maildir setup (one file per email) instead of an mbox setup (one file per email folder), I tried to use ReiserFS. xen-create-image created the image just fine, but it wouldn't boot. So I switched to XFS, and that worked.
(I'm not actually sure that XFS is much better than ext3 for a many-small-files setup, but as I said, I did all this in a hurry after my old server died.)
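For the record, the kind of xen-create-image invocation I mean looks roughly like this; the hostname, sizes, volume group and IP address are placeholders, not my real values:

    xen-create-image --hostname=mail.example.com --lvm=vg0 \
      --size=8Gb --memory=1Gb --fs=xfs --dist=lenny --ip=192.168.1.10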
The two main reasons I decided to use LVM for my Xen images:
Performance. I found several web pages that said Xen performs better when its images are on LVM, compared to images in files on a file system.
Ease of resizing. I'm starting my virtual machines with small images, and I can grow them if I need to (see the sketch just below).
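Here is a rough sketch of what growing a guest would look like; the guest name, logical volume path and size are placeholders, and I'm assuming the guest's disk is a bare XFS filesystem on the logical volume, the way xen-create-image set mine up:

    xm shutdown mail-vm                        # stop the guest first
    lvextend -L +4G /dev/vg0/mail-vm-disk      # grow the underlying logical volume
    xm create /etc/xen/mail-vm.cfg             # boot the guest again
    # then, inside the guest (XFS is grown while mounted):
    xfs_growfs /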
The BIOS on my new server has a feature where you can hit F8 during booting, and then pick a boot device. I have used this to test that I can boot with GRUB from either of my two hard disks.
My old server didn't even have X11 installed. I decided to install a GNOME desktop on the new server, hoping I could use cool GUI tools like virt-manager. I discovered that 4 GB is not very big for a modern GNOME install; everything fits but there isn't much free space. If I were starting over, I'd give 10 GB for the Dom0 OS on /dev/md0. If I really get crunched for space, I can probably move /usr/bin into a new volume made under LVM.
The Dom0 is installed on a RAID volume, but not LVM. I read some comments about some kernels having difficulty booting from LVM, so I just kept things simple.
I really recommend putting a tiny "rescue" system at the end of your hard disk. Then, don't even mount that system in your main system, so that berserk processes (like rm -rf /) can't clobber it. There are many problems that can be easily solved by booting a working system, mounting the volume with the damaged system, and then fixing something; a rough sketch of that recovery flow is below.
Thank you to everyone who gave me an answer.
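The recovery flow I have in mind is roughly this, run from the booted rescue system (device names are placeholders):

    mount /dev/md0 /mnt            # mount the damaged root filesystem
    chroot /mnt /bin/bash          # optionally work inside the damaged system
    # ... fix the broken config, reinstall a package, repair fstab, etc. ...
    exit
    umount /mnt
    reboot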
I've done something similar, except I used CentOS.
The other comments about attackers reusing passwords to gain access to your other hosts are something to worry about. My answer: use SSH keys only, and make sure you don't allow SSH agent forwarding from your SSH client. (Agent forwarding would let a hostile root user on a host you log into reuse your forwarded keys to connect to another host as you.) Also, if you have different root passwords per box, you should think about setting your CLI prompt or color scheme per host, to remind you which host you are on so you don't accidentally type a different host's root password.
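A hedged sketch of what that looks like in practice, using the usual OpenSSH config files (adjust to taste):

    # Server side, /etc/ssh/sshd_config:
    PasswordAuthentication no          # keys only
    PermitRootLogin without-password   # root may log in, but only with a key

    # Client side, ~/.ssh/config:
    Host *
        ForwardAgent no                # never expose your agent to remote hosts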
4 GB of RAM is a good amount for the number of hosts you're talking about, but it's so easy to set up additional hosts for miscellaneous uses that I bet you'll bump up against that limit before too long.
We've had a positive experience with KVM managed by virt-manager, connecting to qemu over SSH. It's been very easy to set up, configure, modify, destroy, and just all-around play with guest OSes. This option also works with Xen, so you can use the same commands with either hypervisor.
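Concretely, this means pointing the libvirt tools at a remote URI; the hostname below is a placeholder:

    virt-manager -c qemu+ssh://root@vmhost.example.com/system      # graphical manager over SSH
    virsh -c qemu+ssh://root@vmhost.example.com/system list --all  # the same connection from the CLI
    # for Xen the URI would be xen+ssh://root@vmhost.example.com/ instead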
I used to run KVM on its own with Debian Stable, but after a while I switched to Proxmox VE (web management). I learned a ton doing it myself, but I got lazy. The good thing about Proxmox VE is that it allows you to use both KVM and OpenVZ at the same time, with 802.1Q VLANs and the like (a "vlan" in KVM terminology is unfortunately not the same thing as an 802.1Q VLAN). The biggest difference between the two (OpenVZ and KVM) is that since KVM guests run their own kernels, they have worse performance than OpenVZ containers. Also, you need hardware virtualization support for KVM to work.
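For completeness, a bridge on an 802.1Q VLAN in plain Debian looks roughly like the snippet below in /etc/network/interfaces; the interface names and VLAN ID are placeholders, and it assumes the vlan and bridge-utils packages are installed (Proxmox VE manages similar stanzas for you through its web interface):

    auto vmbr100
    iface vmbr100 inet manual
        bridge_ports eth0.100      # tagged VLAN 100 on eth0
        bridge_stp off
        bridge_fd 0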