I'm not really familiar with Intel's "Mirrored Channel Mode" for a blade-server setup (your typical moderately heavy MySQL OLTP database running on the bare-metal blade; no virtualization right now).
From the Intel docs I was able to find:
The Intel Xeon Processor 5500 series and Intel Xeon Processor 5600 series support channel mirroring to configure available channels of DDR3 DIMMs in the mirrored configuration. The mirrored configuration is a redundant image of the memory, and can continue to operate despite the presence of sporadic uncorrectable errors. Channel mirroring is a RAS feature in which two identical images of memory data are maintained, thus providing maximum redundancy.
On the Intel Xeon Processor 5500 series and Intel Xeon Processor 5600 series processors based Intel server boards, mirroring is achieved across channels. Active channels hold the primary image and the other channels hold the secondary image of the system memory. The integrated memory controller in the Intel Xeon Processor 5500 series and Intel Xeon Processor 5600 series processors alternates between both channels for read transactions. Write transactions are issued to both channels under normal circumstances.
However, I'm not really pickin' up what they're layin' down here. I lose half my memory capacity, but I gain "redundancy" of memory and possibly some read/write performance benefit? Like RAID 1 for RAM? Anybody have any practical experience with this configuration?
"RAID 1 for RAM" is an accurate description. In my experience, there isn't much performance benefit, but depending on the bus speed vs the speed of the modules, your mileage may vary.
As far as redundancy goes... well, it's not terribly often that a module goes bad.
Personally, I turn off mirroring whenever I see it enabled.
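That said, if it helps to picture what mirroring is actually doing, here's a rough Python toy of the behaviour the Intel doc describes: every write goes to both images, reads alternate between the channels, and an uncorrectable error in one image just means the data gets served from the other. Purely illustrative - the real thing happens inside the memory controller, nothing like this runs in software.

```python
# Toy model of channel mirroring, purely for illustration. The dictionaries
# stand in for the two memory images; nothing here reflects real hardware.
class MirroredChannels:
    def __init__(self):
        self.primary = {}     # image held by the active channel
        self.secondary = {}   # redundant image on the mirrored channel
        self._toggle = 0

    def write(self, addr, value):
        # Writes are issued to both channels, so the images stay identical.
        self.primary[addr] = value
        self.secondary[addr] = value

    def read(self, addr, primary_bad=False):
        # Reads normally alternate between channels; if one image throws an
        # uncorrectable error, the same data is still available from the other.
        if primary_bad:
            return self.secondary[addr]
        self._toggle ^= 1
        return (self.primary if self._toggle else self.secondary)[addr]

mem = MirroredChannels()
mem.write(0x1000, 42)
print(mem.read(0x1000))                    # 42, served from one image
print(mem.read(0x1000, primary_bad=True))  # still 42 despite a "failed" image
```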
Personally, I would sooner use some form of clustering than that level of hardware resilience. It makes sense to double up on cheap components like disks, but mirrored memory is a nice-to-have rather than genuinely useful. I mean, what's more likely to fail: a CPU, your OS, your software, your mobo, your PSUs? I'd sooner put the money towards clustering.
I have read that this kind of thing (you can do it with CPUs as well) is very useful in the huge supercomputer clusters.
Some of these clusters are running so many machines that there will be a machine failure every couple of hours, faster than the jobs can complete, which really messes up the computation. Adding redundancy like this to each node can more than double the time between failures.
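Rough numbers (made up, and assuming failures are independent) to show why this matters at scale, and what doubling the per-node time between failures buys you:

```python
# Back-of-the-envelope only: the node MTBF is a made-up figure and failures
# are assumed independent, but it shows why a big cluster sees a failure
# every few hours and why doubling per-node reliability doubles that gap.
node_mtbf_hours = 3 * 365 * 24              # assume one node dies every ~3 years
for nodes in (1_000, 10_000, 50_000):
    plain = node_mtbf_hours / nodes         # expected hours between failures, cluster-wide
    hardened = 2 * node_mtbf_hours / nodes  # per-node redundancy roughly doubling MTBF
    print(f"{nodes:>6} nodes: a failure every ~{plain:.1f} h, ~{hardened:.1f} h with redundancy")
```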
This memory mode was really designed for situations where you need high availability. You shouldn't see much of a performance difference (the loss of one channel probably isn't noticeable under normal operation), but you do lose a lot of RAM. With mirroring enabled, only one-third of total memory is available for use, because two DIMM slots form the primary channel, two DIMM slots form the backup channel, and two DIMM slots are not used (at least that's how it is on IBMs).
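To make the capacity hit concrete (hypothetical DIMM size, using the slot layout just described):

```python
# Hypothetical numbers that just restate the arithmetic above: six slots per
# CPU with identical DIMMs, but only the primary channel's pair is addressable.
dimm_gb = 8                  # made-up DIMM size
slots = 6                    # 2 primary + 2 backup + 2 unused
usable = 2 * dimm_gb         # only the primary image is visible to the OS
total = slots * dimm_gb
print(f"{total} GB of slot capacity, {usable} GB usable ({usable / total:.0%})")
# -> 48 GB of slot capacity, 16 GB usable (33%)
```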
I typically recommend that it be turned off (if you have an app or OS that likes RAM - and let's face it, is there one that doesn't?), or save up for an upgrade to the eX5 chipset from IBM (HP and others soon to follow with similar offerings), which adds a boatload more QPI.
There is the occasional "this server has to stay up regardless of the number of shots fired at it" case, and this type of redundancy helps there. Additionally, if you've purchased less-than-stellar-quality RAM, this might save you from a blue screen or two.