We are new to virtualization and are planning to turn our online server into a virtualized one, mainly to improve maintenance, backup and recovery.
Initially we would have only one real virtual system under load, plus 1-3 copies for testing and recovery, and maybe a small centralized syslog virtual machine. If possible, we would like the host machine to include iptables, rsync to back up to other machines, and some other global security measures.
Due to this and the offerings of our hosting provider, we are mainly considering Proxmox for its simplicity (we like the idea of its web admin panel), and as I understand it, the container approach of OpenVZ may fit our setup well resource-wise. The base system comes with Debian, so we can personalise it to our requirements.
The Proxmox installer creates an LVM partition for the VMs by default. Our doubts are about the best partition structure for this, considering that:
- we would like to have a mirror of the root partition we could boot from if required (our provider supports booting the system from another partition via its control panel)
- we would ideally like to have a partition that can be shared among the VMs. We still don't know if this is possible directly with OpenVZ containers; otherwise we are considering sharing it via NFS from the host machine.
- we want to use the backup system available in the Proxmox host administrator to schedule VM backups and then rsync them to another machine.
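That last backup-then-rsync step could be scripted roughly like this (a sketch only; the VMID, the destination host and the paths are placeholders, and `vzdump` options vary between Proxmox versions):

```shell
#!/bin/sh
# Back up container 101 with vzdump in LVM-snapshot mode, compressed,
# into /backups, then push the archives to an off-site machine.
# "backup-host" and /srv/proxmox-backups are example names.
vzdump --snapshot --compress --dumpdir /backups 101 \
  && rsync -av /backups/ root@backup-host:/srv/proxmox-backups/
```

In Proxmox VE this can also be scheduled from the web panel; the rsync step would then go in a cron job of its own.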
With this in mind, based on a Linux RAID of approx. 750 GB, we are considering something like:
ext3_1 / - (20 GB)
ext3_2 /bak_root - (20 GB) mostly unmounted, root partition mirror
LVM_1 /var/lib/vz - (390 GB) partition for virtual images
LVM_2 /shared_data - (30 GB)
LVM_3 /backups - (300 GB) where all backups would be stored
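The LVM part of that layout could be carved out roughly like this (a sketch; `pve` is the volume-group name the Proxmox installer uses by default, and the LV names here are just the ones from the table above):

```shell
# Create the three logical volumes from the layout above inside the
# "pve" volume group, then put ext3 filesystems on them.
# Leaving some extents unallocated in the VG is useful later for
# vzdump's LVM snapshots.
lvcreate -L 390G -n vz pve
lvcreate -L 30G  -n shared_data pve
lvcreate -L 300G -n backups pve
mkfs.ext3 /dev/pve/vz
mkfs.ext3 /dev/pve/shared_data
mkfs.ext3 /dev/pve/backups
```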
Our initial tests with Proxmox run into problems with snapshot backups in a layout like this, perhaps because the snapshot cannot be created on another LVM partition (error: `command 'lvcreate --size 1024M --snapshot --name vzsnap-ns204084.XXX.net-0 /dev/pve/LV' failed with exit code 5`), in which case we might have to use a standard ext3 partition (but we are unsure whether we can do this given the four-primary-partition limit).
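For what it's worth, that `lvcreate --snapshot` failure is often simply a lack of free extents: the snapshot must be allocated from unused space in the same volume group as the LV being snapshotted (1 GB in the command above). A quick way to check is:

```shell
# Show each volume group and how much of it is still unallocated;
# vzdump's snapshot mode needs free extents in the VG it snapshots.
vgs
vgdisplay pve | grep -i free
```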
- Does this make more or less sense?
- Would it be mad, for example, to write the VMs' /var/log to an NFS-mounted partition (on the host system)?
- Are there any other, easier ways to mount host system partitions (or folders) in the VMs?
After a while and some experience... if you are installing Proxmox 1.5 from the CD, you won't be able to do much of this, as the installer just decides for you ;( I ended up accepting that, since I managed to use my motherboard's hardware RAID.
The only alternative seems to be installing your own Debian with whatever partitioning you want and then adding the Proxmox repositories to install from; instructions are on the Proxmox VE wiki. As for shared data, "bind mounts" work well for OpenVZ containers, and as I understand it you can do something similar for KVM machines.
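For the record, an OpenVZ bind mount is done with a per-container mount script that vzctl runs at container start (the VEID 101 and the /shared_data path here are just examples):

```shell
#!/bin/sh
# /etc/vz/conf/101.mount - executed by vzctl when container 101 starts.
# Sourcing these two files gives us $VE_ROOT, the container's root path.
. /etc/vz/vz.conf
. "${VE_CONFFILE}"
# Bind the host's /shared_data into the container's filesystem.
mount -n --bind /shared_data "${VE_ROOT}/shared_data"
```

The target directory (`${VE_ROOT}/shared_data`) has to exist inside the container before this runs.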
The best install for Debian is the Lenny (currently 5.0.6) netinstall. Install the base system with no extra packages. I always install on RAID 1 - Linux software RAID works fine, but hardware RAID is better.
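On top of that bare netinstall, adding the Proxmox VE packages went roughly like this in the 1.x days (the repository line and metapackage name are from memory; check the "Install Proxmox VE on Debian Lenny" wiki page for the current ones):

```shell
# Add the Proxmox VE repository for Lenny, then install the Proxmox
# kernel and management tools on top of the bare Debian base.
# The metapackage name depends on the kernel branch you want.
echo "deb http://download.proxmox.com/debian lenny pve" \
  >> /etc/apt/sources.list
apt-get update
apt-get install proxmox-ve-2.6.32
```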
I always add `trap clear 0` to /etc/profile, which clears the screen on console logout. Ubuntu has this, but Debian doesn't.
I have 22 Proxmox machines on a cluster installed this way and have never had an issue.
If you need a lot of disk space and don't have the budget for an iSCSI SAN, try using open-iscsi on a server (or servers) with lots of drives on a fast RAID. It is a lot faster than NFS and costs much less than FC solutions.
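On the client side, attaching such a target with open-iscsi looks roughly like this (the portal address and target IQN are placeholders for whatever your storage box exports):

```shell
# Discover the targets exported by the storage server, then log in;
# the LUN then shows up as a local block device (e.g. /dev/sdX) that
# you can use like any other disk.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2010-01.net.example:storage.lun1 \
  -p 192.168.1.50 --login
```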
The Proxmox wiki does not detail a recommended Debian install: which CD to use (there are 31 CDs for Debian 5.0.4), which packages to install before getting to the Proxmox part, and what partitioning is recommended and how to do it.