I am trying to find the best way to replicate my VHDs to my offsite data center for DR purposes.
Currently I have a number of Xen stacks set up as follows:
- Dual RAID iSCSI SAN multipathed to at least two Xen servers
Below is an example diagram:
One option I have been testing and researching is:
- Installing CentOS onto the SANs
- Installing Gluster onto the SANs
- Using UCARP to create a VIP for attaching the NFS storage to Xen
- Using Gluster's geo-replication functionality to replicate the data to another Gluster node stored offsite
- Placing all the Xen servers into a single stack in the office to give me a lot more RAM to play with
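For reference, the rough shape of that setup can be sketched as below. All of the hostnames (san1, san2, the offsite node dr1), the volume name vhdstore, and the VIP address are assumptions, and the geo-replication syntax varies between Gluster releases, so treat this as an outline rather than copy-paste commands:

```shell
# On san1: create a two-way replicated volume across both SANs
gluster peer probe san2
gluster volume create vhdstore replica 2 \
    san1:/bricks/vhdstore san2:/bricks/vhdstore
gluster volume start vhdstore

# On each SAN: float a shared VIP with ucarp so Xen always mounts
# one address regardless of which SAN is up (IPs are assumptions)
ucarp --interface=eth0 --srcip=192.168.1.11 --vhid=1 \
      --pass=secret --addr=192.168.1.100 \
      --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh &

# Asynchronous geo-replication of the volume to the offsite node
# (older releases take an ssh:// slave URL; newer ones use host::vol)
gluster volume geo-replication vhdstore ssh://root@dr1:/bricks/vhdstore start
```

Xen would then attach the NFS SR against 192.168.1.100:/vhdstore, and the VIP fails over between the SANs if one goes down.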
I have managed to get this set up and it is working great; however, I am very concerned about disk I/O. Testing a disk created on an NFS share backed by Gluster, I seem to be getting about 10 MB/s write and 5 MB/s read, which seems quite slow. High disk I/O is important because most of the VMs will be running SQL servers.
One important factor is that my dev equipment only has slow disks (7k RPM), so I am aware this might be affecting the speed a little.
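A quick way to sanity-check those numbers is a sequential dd run against the mount. This is a minimal sketch; TESTDIR is an assumption and should point at the Gluster NFS mount (it falls back to a temp directory here so the snippet runs anywhere):

```shell
# Point TESTDIR at the Gluster NFS mount to benchmark it
TESTDIR=${TESTDIR:-$(mktemp -d)}
TESTFILE="$TESTDIR/ddtest.$$"

# Write test: conv=fdatasync makes dd include the final flush in its
# timing, so cached writes do not inflate the MB/s figure
dd if=/dev/zero of="$TESTFILE" bs=1M count=100 conv=fdatasync

# Read test: drop the page cache first (needs root) or the read
# will be served from RAM instead of the network
[ -w /proc/sys/vm/drop_caches ] && sync && echo 3 > /proc/sys/vm/drop_caches
dd if="$TESTFILE" of=/dev/null bs=1M

# Record the size written, then clean up
SIZE=$(stat -c%s "$TESTFILE")
rm -f "$TESTFILE"
```

dd prints the throughput of each pass on stderr, which should line up with the figures above.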
What better methods might there be for achieving a redundant storage system for my VHDs?
UPDATE
Well, I migrated my Gluster cluster onto some decent hardware with some decent disks, and sequential write has increased to 100 MB/s and read to 80 MB/s. I seem to lose about 30-40 MB/s compared with running directly from iSCSI storage, but I guess this is to be expected with NFS.
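Some of that NFS overhead can usually be clawed back with mount options; the VIP address and volume name below are the same assumptions as before, and the FUSE mount is only for benchmarking, since Xen attaches storage via its NFS SR:

```shell
# Larger rsize/wsize plus noatime typically help sequential throughput
mount -t nfs -o vers=3,rsize=65536,wsize=65536,noatime,nolock \
    192.168.1.100:/vhdstore /mnt/vhdstore

# For comparison only: the native GlusterFS (FUSE) client talks to the
# bricks directly and often reads faster than going through NFS
mount -t glusterfs 192.168.1.100:/vhdstore /mnt/vhdstore-fuse
```

Benchmarking both mounts of the same volume shows how much of the 30-40 MB/s gap is the NFS layer itself rather than Gluster.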