My rule of thumb for a physical server was fast random access disk (VelociRaptor/SSD) for OS, and large disks for data (e.g. WD Caviar).
How does that look for a Hyper-V that should run two virtual machines (File Server+Intranet, Dynamics CRM)?
- Does it still make sense to put the physical OS on a separate disk?
- How much disk space / RAM should I set aside for the physical OS?
- File Server: Is there a notable difference between a pass-through disk and a VHD? Any preference for one or the other regarding backup, Volume Shadow Copy Service, other stuff?
- Should I split the virtual OS parts (FileServer-OS, FileServer-Data, CRM) onto separate physical disks? Say, with mirror 2x2x1TB, or 2x2TB?
- How do you back up a 'live' VHD? "As usual", from within the server?
I've read the related questions and the system requirements stated by Microsoft; I am looking more for practical input from people who've done it before.
[edit] The specs are still open. I am aiming at an i7-920 quad core, a board such as the Gigabyte EX58-UD5 (open to suggestions), and 8 GB RAM.
I am aiming at a total disk storage of about 2TB.
Idea 1: 80 GB SSD for Hyper-V, 2 x 2 TB WD RE4-GP in a mirror for the two VMs; totals about €850.
Idea 2: 4 x 1 TB WD RE4-GP in two mirrors, resulting in 2 x 1 TB of storage: one pair for Hyper-V and the first machine, the other for the second one. Totals €520, which would allow another 4 GB of RAM that might make a huge difference.
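To make the trade-off concrete, here is a small sketch comparing the two ideas by usable capacity and cost. The prices are the rough EUR totals quoted above; "usable" means the capacity left after mirroring.

```python
# Comparing the two build ideas from the question. Prices (EUR) are the
# rough totals quoted above; "usable" means capacity left after mirroring.
def usable_tb(disk_tb, mirror_pairs):
    """Usable capacity when identical disks are grouped into RAID 1 pairs."""
    return disk_tb * mirror_pairs  # each pair contributes one disk's capacity

ideas = {
    "Idea 1 (80 GB SSD + 2x2 TB mirror)": (usable_tb(2.0, 1), 850),
    "Idea 2 (4x1 TB in two mirrors)": (usable_tb(1.0, 2), 520),
}
for name, (tb, eur) in ideas.items():
    print(f"{name}: {tb:.0f} TB usable, {eur / tb:.0f} EUR per usable TB")
```

Both ideas land at the 2 TB target; Idea 2 is notably cheaper per usable TB, which is what frees up budget for the extra RAM.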
[edit] A commenter asked for the final configuration; here's what I learnt (and what we did).
I decided against a hardware raid, due to bad experiences with various controllers, the low overhead of a software mirror, and the simplicity of transfer to another machine.
We put the busiest network share on pass-through disks. They are "offline" in the HV host and mirrored inside the virtual machine. Performance is adequate for our purposes.
I did add a separate OS disk (a WD Raptor 300 GB), simply to be more flexible about the configuration.
So we have configured one pair of 1 TB disks as pass-through; the other pair is mirrored in the HV host and holds the VHDs for both servers.
Note that pass-through disks disable snapshots in the Hyper-V console (I wish there were an option to just exclude them and proceed with the snapshot). I also learnt the hard way that snapshots were a bad idea anyway, since restoring one breaks Active Directory sync.
Backup is to an external disk attached to the host through eSATA.
If you don't plan on using the host OS to run anything but Hyper-V, I don't think putting VMs on the same partition as the OS is going to matter much. I'm using Hyper-V on a couple workstations with 10k rpm disks with the OS on one and VMs on both and I don't notice a difference in VM performance between them.
You can eat up disk very rapidly with VMs, so it's worth having a big, slower disk for archives and backups (maybe not necessary if you have good network storage and a fast network).
If you are building it yourself and want to stay within a reasonable budget, I'd suggest 4-6x 10k rpm disks in RAID 10 (300 GB disks can be had for ~$200 each on NewEgg). Then maybe 2x 1-2 TB disks in RAID 1 (if you add this, you might as well put the OS on it).
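As a back-of-envelope check on that suggestion: the disk counts and the ~$200 NewEgg price are from the paragraph above; the rest is plain RAID arithmetic (assuming equal-size disks).

```python
# Back-of-envelope sizing for the suggested array. Disk counts and the
# ~$200/disk price come from the suggestion above; the rest is simple
# RAID arithmetic (assumes equal-size disks).
def raid10_usable_gb(n_disks, disk_gb):
    """RAID 10 stripes across mirrored pairs, so half the raw space is usable."""
    assert n_disks % 2 == 0, "RAID 10 needs an even number of disks"
    return n_disks * disk_gb // 2

usable = raid10_usable_gb(6, 300)  # 6 x 300 GB in RAID 10
cost = 6 * 200                     # ~$200 per disk
print(f"{usable} GB usable for about ${cost}")
```

So the 6-disk variant gives roughly 900 GB of fast storage for about $1200, which is why the big, slow RAID 1 pair is still worth adding for archives.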
Using dynamically expanding disks and snapshots both adversely affect performance (for virtualizing a workstation it's fine, for a server maybe not). And for any disk intensive service, I'd use direct access to the service's backing store (e.g., database or file-server). If you move the I/O bottleneck off the virtual OS partition, you can probably snapshot the virtual server OS without worrying about performance.
Finally - you may want more than 8GB (Hyper-V can't share unallocated RAM, and the host needs some, too) - but that depends on how intensely they will be used.
I hope this is useful. And if you do some experimentation and benchmarking, I think many people would be interested to see the results. As you've probably noticed, there is a paucity of performance data in this area.
In a virtual machine, the disk is the biggest bottleneck. When I build a VM host, I use a 1 TB drive with a 60 GB OS partition and use the rest to back up the VMs to. I then use 4 or 6 VelociRaptors in RAID 5 or 10. That gives them the speed that they WILL need, as well as some redundancy.
Using RAID 1 with two slow 2 TB drives is just going to be a headache in the future. Again, the disk is the biggest bottleneck.
PS: with the cost and overhead that Server 2008 brings, I have always used Server 2003 with Virtual Server 2005, and it has worked great.
Putting the OS on a separate physical disk (or disks) is definitely useful if you're running Hyper-V, because it actually runs on top of Windows, so the OS has some overhead (as opposed to, say, ESX/i, which has a really small footprint). A dedicated disk (or array) for OS and pagefile can really help.
Regarding VMs: what kind of workload will they have? Memory? CPU? Disk? If they work a lot with storage, then putting them on separate physical disk(s) will provide a real advantage; if they do very low disk I/O, you can put them all in the same place and there won't be any difference.
If you have two very disk-intensive VMs to run, I'd go with three RAID1 arrays, one (small) for the OS and pagefile and one (large enough) for each VM.
•Does it still make sense to put the physical OS on a separate disk? Yes, this is still a good practice.
•How much disk space / RAM should I set aside for the physical OS? For disk space, it depends on what else you are going to put on the parent partition. For RAM, I go with 2 GB for the parent plus 64 MB per VM that will be hosted.
•File Server: Is there a notable difference between a pass-through disk and a VHD? Any preference for one or the other regarding backup, Volume Shadow Copy Service, other stuff?
You will generally see a barely notable difference between a pass-through disk and a VHD. However, the manageability of pass-throughs can get very tricky. It depends what you care about: if you want all the performance you can get, go with pass-through; if you want it to be as easy as possible to manage, go with VHDs.
•Should I split the virtual OS parts (FileServer-OS, FileServer-Data, CRM) onto separate physical disks? Say, with mirror 2x2x1TB, or 2x2TB?
I have split VM OS and data drives and put them on different VHDs. Generally, the best practices from the physical world map to the virtual world. Whether to put the VHDs on different physical spindles would depend entirely on the workload of the file server, and on budget. Chances are I wouldn't worry too much about it.
•How do you back up a 'live' VHD? "As usual", from within the server?
Unless you have a SAN on the backend, you will typically run your backup software in your VMs just as you would on a physical machine.
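The RAM rule of thumb above (2 GB for the parent plus 64 MB per hosted VM, on top of each VM's own allocation) can be sketched as a quick calculation. The 2 GB and 3 GB guest sizes below are made-up example values for the two VMs in the question, not recommendations.

```python
# Rule of thumb from the answer above: reserve 2 GB of RAM for the
# parent partition plus ~64 MB of overhead per hosted VM, on top of
# each VM's own allocation. Guest sizes here are example values.
def host_ram_mb(guest_ram_mb):
    parent = 2048    # parent partition reservation
    overhead = 64    # per-VM overhead
    return parent + sum(guest_ram_mb) + overhead * len(guest_ram_mb)

# File server with 2 GB and CRM with 3 GB:
print(host_ram_mb([2048, 3072]))  # -> 7296 MB
```

With these example guest sizes, an 8 GB host is already close to the limit, which supports the earlier point that more than 8 GB may be worthwhile.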
No. I use RAID 10 on my main server for the OS and Hyper-V data.
I use a standard 64 GB. Empty, though.
No, not notable. Even dynamically expanding disks are not notably slower; measurably, yes. I use pass-through for the main data, mostly because it guarantees me an IOPS budget, since the disk is singly owned.
Whoa, you're talking about a REALLY low-end server here. OK, I DO split larger servers into different VHD files, because my standard VHD is 64 GB (sysprepped, using a differencing disk to boot the real OS from). Large spaces are separate VHDs.
Just as info: the server currently has 64 GB. 6 x 300 GB VelociRaptors for boot + VHDs, 6 x 300 GB VelociRaptors in RAID 10 for SQL data. Adding another 4 discs in January. The case has 24 slots; I think I need a bigger one soon. I can boot about 50 GB worth of VMs without problems, but when patch day starts I feel the I/O load, as well as when doing database imports. But then, I need some power here.
2 TB discs are SLOW. This means SLOW. Like REALLY slow. The best bang for the buck right now are WD VelociRaptors with a good RAID controller (Adaptec).
Or to an external disk; both ways work. In-server is more flexible for restores. Some stuff I do not back up on a per-server basis.