I have bought a new server built around an ASUS P9X79 WS motherboard (Intel X79, socket LGA 2011, ATX). It will be used for virtualization, preferably with VMware vSphere Hypervisor™ (ESXi), if I can get the RAID on my motherboard working with VMware (it does not detect it).
The motherboard has the Intel® X79 chipset, whose RAID controller reports PCI vendor ID 8086 (Intel) and device ID 2826.
When I boot the ESXi 5.0.0 installation media from my flash drive, I cannot see the drives in the RAID 5 set I created.
Questions:
Is there a VIB file for the RAID controller I can use?
I have found one article at http://www.intel.com/support/motherboards/server/sb/CS-033313.htm on getting RAID to work with some Intel controllers; it lists nine integrated RAID modules it is compatible with, but there is no mention of the X79 chipset.
The RAID on that motherboard is not real hardware RAID, it's "fakeraid" that depends on drivers in the operating system. ESXi doesn't support fakeraid, because it's aimed at enterprise environments (which use real hardware RAID for better performance), not consumer PCs (which use fakeraid because it's cheap). ESXi should recognize drives connected to that controller, but only as standalone drives, not as a RAID array.
You might want to opt for a different virtualization platform, such as Citrix XenServer or Linux KVM. If you really want to use ESXi, you could set up your three drives as separate datastores, give each of your VMs three virtual disks — one from each datastore — and set up software RAID within each VM's operating system.
(The VMs will not be able to see the host's fakeraid controller. Isolating guests from the host's hardware is half the point of virtualization. Guests will only see their virtual disks.)
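If you go the in-guest route, here is a minimal sketch of the software RAID setup, assuming a Linux guest whose three virtual disks show up as /dev/sdb, /dev/sdc and /dev/sdd (the device names and mount point are just examples):

    # Build a RAID 5 array across the three virtual disks
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    # Put a filesystem on the array and mount it
    mkfs.ext4 /dev/md0
    mkdir -p /mnt/data && mount /dev/md0 /mnt/data
    # Record the array so it assembles at boot (config path varies by distro)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Bear in mind that each guest then pays the parity cost itself, and a failed physical drive means a rebuild inside every VM.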
There are unofficial drivers that you can install on ESXi to support additional hardware, including a "dmraid" one for Intel Matrix RAID (your chipset's fakeraid), but you're going out on a limb if you do that.
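For what it's worth, installing such a community VIB on ESXi 5.x generally looks like the following from the host's shell (the file name here is a placeholder, not a real driver package):

    # Third-party VIBs are usually unsigned, so lower the acceptance level
    esxcli software acceptance set --level=CommunitySupported
    # Install the package from a local path, then reboot the host
    esxcli software vib install -v /tmp/example-dmraid-driver.vib

Just remember that an unsupported driver can be broken by any ESXi update, and VMware support won't touch a host running one.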
I think around here you will get many people suggesting an extra drive and RAID 10 rather than RAID 5, as the write performance is much higher under load. Personally, I'd say that if the drives aren't going to see high IOPS (especially random IOPS), then RAID 5 will do.
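To put rough numbers on that: each random write costs RAID 10 two disk I/Os (data plus its mirror) but costs RAID 5 four (read old data, read old parity, write new data, write new parity). So, for example, four 150-IOPS disks deliver roughly (4 × 150) / 2 = 300 random-write IOPS in RAID 10 but only (4 × 150) / 4 = 150 in RAID 5. Reads are largely unaffected, which is why RAID 5 remains fine for read-heavy workloads.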
However, I've noticed you have chosen WD Green SATA drives, which I would consider a very poor choice. Green drives normally have a slower spindle speed (rpm) and are set to spin down during periods of inactivity, regardless of whether the OS has told them to. Really you want 15K SAS drives and a battery-backed caching RAID controller.
If you read the Intel link you provided, you will see that from page 41 onwards the guide describes setting up a Windows Server 2008 machine, installing RAID Web Console 2 on it, and then adding the ESXi host. This should be able to notify you via a pop-up and email when a disk fails.
ESXi on a thumbstick: the thumbstick will probably be slower than booting from a RAID, and there is no redundancy for the ESXi host itself, just the VMs. However, if you have a problem with your ESXi install or the thumbstick, it doesn't take long to set up a new one and then import the VMs. On a production server I would only consider this if, like yours, the USB port is internal.
OK, the previous answers have talked about why it's a bad idea to use fakeraid, why VMware doesn't support it, and to some extent why using a desktop motherboard isn't such a good idea.
If you're really set on using this board as the base for a VMware server, this is what you do:
Buy a Dell PERC 6/i, preferably one that includes a battery. This is an actual hardware RAID controller that is compatible with VMware vSphere. Do not get a 6/iR card. Then look for an SFF-8484 to SFF-8482 cable and buy one; this cable allows you to connect up to four SAS or SATA drives to the 6/i. Enjoy!
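Once the card is installed, you can confirm from the ESXi shell that the host actually sees the controller and the RAID volume you build on it, for example:

    # List storage adapters known to the host (the PERC should appear here)
    esxcli storage core adapter list
    # List logical devices (your RAID volume shows up as a single disk)
    esxcli storage core device list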
No, you can't use Intel ICHR RAID for vSphere/ESXi. The reason is that the RAID doesn't exist as a volume to be exposed through a controller driver (the real explanation for what all the kids call "fakeraid").
All RAID solutions are software-based, but what most people call "hardware RAID" are solutions where the RAID software runs on the controller (as firmware). When you use a driver to let your O/S (ESXi in this case) see the volumes (non-RAID controllers such as SATA, IDE, or Host Bus Adapters aka HBAs expose drives instead of volumes), the O/S has a lot less work to do than with a more typical software RAID solution. ICHR is an interesting hybrid where the southbridge chipset actually does provide firmware for the RAID and a BIOS where you can do some basic configuration. It does not provide a proper INT19 bootstrap loader, which means the RAID volumes it presents can't be booted from and, in effect, don't really exist until the IAStor service starts, uses the ICHR driver to see the volumes, and THEN presents them to the O/S.
Windows can deal with this through its bootloader process, and I imagine VMware could as well, but they never will because, as others have pointed out, ICHR RAID isn't "pro grade" due to its lack of a dedicated parity processor (ICHR uses your x86 CPU, which actually does an outstanding job when set up right) and the inherent dangers in that, such as the fact that your CPU is a general-purpose processor doing many other things, making it FAR more prone to crashing than a dedicated parity processor. The lack of certain caches/buffers and of proper battery backup for uncommitted transactions also makes ICHR high-risk compared to "hardware RAID" solutions.
In the end, ICHR is a value-added solution for people who don't need six nines (99.9999%) of uptime and can risk downtime and minor data loss. If you want to play with a really interesting solution, get a community license for NexentaStor 3.x, install vSphere on any drive/array you can get your hands on, create a VM for NexentaStor and install it, learn how to do RDM (Raw Device Mapping) and expose your drives to your NexentaStor VM through RDM, then expose NFS or iSCSI from THAT VM to the same host and use that SAN solution for your other VMs and other systems on your network. That way you can take advantage of raw disk performance, ZFS (RAIDZ and the like), and learn all kinds of cool things about enterprise SAN use with vSphere and virtualization. It's a project, but done right (throw in a couple of 60 GB SSDs strategically for cache/ZIL) you will learn a lot and end up with extremely flexible, portable, and expandable storage that's hardware-agnostic.
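If you try the NexentaStor route, the RDM step on the ESXi side comes down to creating mapping files with vmkfstools. A rough sketch, with the device identifier and datastore path as placeholders for your own:

    # Create a physical-mode RDM pointer to a local disk on an existing
    # VMFS datastore, then attach the resulting .vmdk to the NexentaStor
    # VM as an existing disk
    vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX \
        /vmfs/volumes/datastore1/nexenta/disk1-rdm.vmdk

Repeat per drive; use -r instead of -z if you want virtual-mode RDMs.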
Unfortunately there is no VIB file for the X79 chipset. If you are set on using software RAID, I recommend a virtualization platform that runs on top of a Linux distribution with better hardware support; for instance, VirtualBox running on Ubuntu Server would work.
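As a sketch of how simple that can be, once VirtualBox is installed on Ubuntu Server you can create and run a headless VM entirely from the command line (the names and sizes here are arbitrary examples):

    # Create and register a VM, give it memory and a disk
    VBoxManage createvm --name testvm --ostype Ubuntu_64 --register
    VBoxManage modifyvm testvm --memory 2048
    VBoxManage createhd --filename ~/testvm.vdi --size 20000
    VBoxManage storagectl testvm --name SATA --add sata
    VBoxManage storageattach testvm --storagectl SATA --port 0 \
        --device 0 --type hdd --medium ~/testvm.vdi
    # Boot it without a GUI
    VBoxManage startvm testvm --type headless

Ubuntu's own mdadm software RAID can then sit underneath the VM disks, which is exactly the arrangement the X79 fakeraid can't give you under ESXi.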