What are the technical and practical size limitations of raw disk files used with KVM? Can I create a 2 or 3 TB raw disk file as a data drive for a KVM-based Windows virtual machine (actually using Proxmox as the hypervisor) without problems?
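To make the question concrete, my understanding is that a raw image is just a big (usually sparse) file, so any ceiling would come from the host filesystem's maximum file size rather than from KVM itself. Here is a minimal sketch of what I mean, with an illustrative path and size, not a tested Proxmox recipe:

```python
# Minimal sketch (not a tested Proxmox recipe): create a sparse 3 TiB raw
# image and check how much space it actually consumes. The path below is just
# an example of where Proxmox keeps local VM images.
import os

IMAGE_PATH = "/var/lib/vz/images/100/vm-100-disk-1.raw"  # illustrative path
SIZE_BYTES = 3 * 1024**4  # 3 TiB

with open(IMAGE_PATH, "wb") as img:
    img.truncate(SIZE_BYTES)  # sparse: no blocks are allocated up front

st = os.stat(IMAGE_PATH)
print(f"logical size: {st.st_size / 1024**4:.1f} TiB, "
      f"actually allocated: {st.st_blocks * 512 / 1024**2:.1f} MiB")
```

If I understand correctly, the underlying filesystem matters here (ext3 with 4 KiB blocks tops out around 2 TiB per file, while ext4 and XFS go much higher), and the Windows guest would need a GPT partition table to use more than 2 TiB as a single volume. Please correct me if that is wrong.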
AudioDan's questions
I am planning a Proxmox HA configuration with two Dell R710 machines (dual 6-core processors in each) with enterprise-grade RAID arrays. I would be using DRBD with a quorum disk on a third machine, and would dedicate two gigabit NICs on each server to the DRBD replication traffic. We would have approximately 12 to 14 virtual machines running on this pair of servers. The Proxmox manual recommends creating two DRBD resources: one for the virtual machines that normally run on server A and one for those that normally run on server B. This is because the configuration runs in a Primary/Primary state; if both servers have VMs writing to the same DRBD resource and a split-brain occurs, there is potential for data corruption that must be resolved.
While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way, and has it worked well? It seems to me that would allow more flexibility in moving machines back and forth.
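For clarity, this is the kind of layout I mean: one small DRBD resource per VM rather than two big shared ones. The hostnames, IPs, device minors, ports, and backing LVs below are made up, and this is only a sketch to illustrate the idea, not a tested Proxmox/DRBD configuration:

```python
# Sketch: generate one drbd.conf-style resource stanza per VM so each VM's
# disk can be promoted/demoted independently. All names, IPs, minors, ports,
# and volume paths are hypothetical placeholders.

VMS = ["vm101", "vm102", "vm103"]  # hypothetical VM identifiers
NODES = {  # hypothetical node names and replication-link addresses
    "proxmox-a": "10.0.0.1",
    "proxmox-b": "10.0.0.2",
}

def drbd_resource(name: str, minor: int, port: int) -> str:
    """Render a single per-VM resource stanza."""
    hosts = "\n".join(
        f"    on {host} {{\n"
        f"        device    /dev/drbd{minor};\n"
        f"        disk      /dev/vg0/{name};\n"
        f"        address   {ip}:{port};\n"
        f"        meta-disk internal;\n"
        f"    }}"
        for host, ip in NODES.items()
    )
    return f"resource {name} {{\n{hosts}\n}}\n"

for i, vm in enumerate(VMS):
    print(drbd_resource(vm, minor=i, port=7788 + i))
```

The obvious cost I can see is one device minor and one TCP port per VM plus a lot more stanzas to maintain, but a split-brain would then only affect that one VM's disk. Is there a catch I am missing?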
I have been rereading some sections of "Active Directory Bible" by Curt Simmons in preparation for some machine replacements and changes to our Windows 2000 Active Directory infrastructure. It seems that in any reliable Active Directory network you should have at least two domain controllers so that logons and security processing can continue if one of them is down. However, the book states that logons require a Global Catalog (GC). It also states that in a network with multiple domain controllers, the Infrastructure Master role and the GC role should not be on the same machine unless all of the domain controllers are GCs, and then says that you would never want to implement an Active Directory network with every machine as a GC. To quote the book: "However, unless you have a lot of excessive bandwidth you would like to eat up, you should certainly never implement such a solution."
So if you have a two-domain-controller network and the GC goes down, logon attempts will fail, in which case there is actually no redundancy. So would it really be that bad to make both DCs GCs in a small network (fewer than 35 machines) on a gigabit switch? For all of the multi-domain-controller redundancy that Microsoft claims, there seem to be a lot of single-machine roles whose failure can bring the whole thing crashing down. Am I wrong here?
I have a pair of Windows 2000 domain controllers. The machine that currently hosts the GC is getting tired and the hardware is pretty old, so I want to replace it with a newer machine I have lying around. Ideally I would keep the same name and IP address, though that is not necessarily critical. I also have an unused Windows Server 2003 license, so the new machine will run Server 2003. Any advice on the basic step-by-step procedure?
Does anybody have experience with both Linux and Windows failover clusters? If so, which do you prefer for a file server and/or web server?
A little background: we set up and administered a Microsoft cluster for several years under Windows 2000. The cluster was a pair of web servers with a fairly large (for the time) RAID array for multimedia storage served to the web: hundreds of thousands of MP3 commercials being served to radio stations. We had a number of things we did not like about this setup. First, the Microsoft cluster used a shared storage array. Even though it was hot-swap and RAIDed, all it took was slight corruption of the NTFS file system on the drive and suddenly you were down for several hours while ChkDsk ran.
So on the next build we bought into a product called NeverFail (http://www.neverfailgroup.com/). It replicates data between the primary and secondary server automatically, keeping them synchronized at the block level. This has eliminated the problems we had with shared storage, but it has introduced its own issues. Any restart requires a data resync, in which the system checks everything for synchronization. While the system stays up and available during this sync, on a server with a bit under a terabyte of MP3 files it takes several hours, and a typical Microsoft patch session requires a couple of these resyncs. So it often takes us upwards of two days to patch the two machines. As a result we find ourselves putting off patching and not doing it as frequently as we should, which is not ideal. And the process is touchy and has to be followed exactly.
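To illustrate why these resyncs hurt, here is a toy sketch (in Python, and not anything NeverFail actually does) of a full block-level comparison pass: every block has to be read and checksummed on both sides, so the time scales with the size of the volume rather than with how much actually changed.

```python
# Toy sketch of a full block-level resync pass (not NeverFail's actual
# algorithm): hash every block on both sides and copy the ones that differ.
# In a real replicator typically only the checksums cross the network, but
# the comparison still has to read the whole volume on each side.
import hashlib

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks (illustrative)

def resync(primary_path: str, secondary_path: str) -> int:
    """Copy differing blocks from primary to secondary; return blocks copied."""
    copied = 0
    with open(primary_path, "rb") as src, open(secondary_path, "r+b") as dst:
        offset = 0
        while True:
            a = src.read(BLOCK_SIZE)
            if not a:
                break
            dst.seek(offset)
            b = dst.read(BLOCK_SIZE)
            if hashlib.sha1(a).digest() != hashlib.sha1(b).digest():
                dst.seek(offset)
                dst.write(a)
                copied += 1
            offset += BLOCK_SIZE
    return copied

# Rough arithmetic: reading 1 TB at ~100 MB/s is about 10,000 seconds, or
# just under 3 hours, per pass just to compare, which roughly matches the
# multi-hour resyncs we see.
```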
So we are considering moving the main site with all of this content to a pair of LAMP boxes with Linux HA and DRBD.
So I am curious whether anybody who has administered both Linux and Windows clusters can tell me what they experienced, specifically regarding resync time on restarts and the overall experience of administering such a Linux system.
While we have traditionally been a Windows shop, we now have someone in house who knows Linux, I am learning as well, and we have added a number of Linux boxes to our environment, so we are open to that from an administration point of view.