I recently bought a new server, an HP DL380 G6, and replaced the stock Smart Array P410 controller with an LSI 9211-8i.
My plan is to use ZFS as the underlying storage for Xen, which will run on the same bare metal.
I have been told that you can use SATA disks with the Smart Array controllers, but that because consumer drives lack TLER, CCTL and ERC it's not recommended. Is this the case?
If I use the LSI controller in JBOD (RAID passthrough) mode instead, does the kind of disk I use really have as much of an impact as it would with the Smart Array controller?
I am aware that using a RAID system not backed by a write cache for virtualization is bad for performance. But I was considering adding an SSD for ZFS. Would that make any difference?
The reason I am so obsessed with using ZFS is dedup and compression. I don't think the Smart Array controller can do either of those.
Please don't do this.
If you're going to run ZFS on Linux, do it on bare metal without a virtualization layer. All-in-one virtualization-and-ZFS solutions are cute, but they're not worth the effort in production.
As far as drives are concerned, you can use SATA disks on an HP Smart Array controller as well as the LSI 9211-8i controller. In a ZFS configuration, a failure of the SATA disks may have an adverse effect on the system when running with the LSI controller.
Using consumer disks is just what it is. Go into it knowing the caveats.
Edit:
So you're looking to run a ZFS filesystem to provide storage for local virtual machines?
The HP Smart Array P410 is a good RAID controller. Most importantly, yours likely has a battery-backed or flash-backed write cache. That's important for performance purposes. Achieving the same thing properly on ZFS (using the ZIL) is far more costly and requires more engineering thought. ZFS may not offer you much over a traditional filesystem like XFS for this particular purpose.
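If you want to verify what cache your particular P410 has, HP's CLI utility will report it. A quick check, assuming the `hpacucli` tool is installed (package naming varies by distro):

```
# Show controller details; look for "Cache Board Present" and the
# battery/capacitor fields to confirm BBWC/FBWC is fitted
hpacucli ctrl all show config detail | grep -i -A1 cache
```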
This would be different if you were using ZFS on a dedicated server to provide storage to other hypervisors.
See: ZFS best practices with hardware RAID
Using consumer-grade disks in server-grade hardware is possible, though not recommended if you are going to rely on support from the vendor. They will bitch like hell about you replacing the perfectly supported drives with unsupported ones. Aside from that, there is no problem doing it, and Backblaze proved it (http://www.getoto.net/noise/2013/11/12/how-long-do-disk-drives-last/).
As for drive selection: look for drives that support NCQ and you should be mostly fine.
Using the drives in JBOD mode is asking for trouble: quite possibly the LSI controller will show you just one big disk, and you do not want that. What you need is passthrough mode (basically using the controller as a way to add port count). Check which mode yours is in.
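One way to check, assuming LSI's stock `sas2flash` utility is available: the 9211-8i runs either IR firmware (with RAID functions) or IT firmware (pure passthrough), and in passthrough mode each physical disk appears to the OS individually.

```
# "Firmware Product ID" in the output shows IT (passthrough) vs IR (RAID)
sas2flash -listall

# In passthrough mode every physical disk shows up as its own block device
lsblk -o NAME,SIZE,MODEL
```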
ZFS on Linux: not a stellar idea. It is still not stable enough, though it is usable.

Dedup on ZFS: quite a big no if you are planning to run a serious load on the machine. It tends to eat a lot of RAM (on the order of 2-4 GB for every 200-500 GB of deduped data). It might have improved, but I haven't checked recently.

Compression might be a good fit, though it depends on the data.
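Before committing to dedup you can have ZFS simulate it against your actual data, and compression is cheap enough to just try. A minimal sketch, assuming a hypothetical pool named `tank`:

```
# Simulate dedup and print the projected dedup-table (DDT) size
# without actually enabling it
zdb -S tank

# Compression is low-risk by comparison; lz4 is usually a safe default
zfs set compression=lz4 tank
zfs get compressratio tank
```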
SSD: yes, it will make quite a nice difference. There are several areas (the ZIL was already mentioned above) that improve a lot when placed on a separate disk, and even more so on an SSD.
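For reference, adding the SSD later is a one-liner per device. A minimal sketch, again assuming a hypothetical pool `tank`; substitute your own device names:

```
# Dedicated log device (SLOG): absorbs synchronous writes via the ZIL
zpool add tank log /dev/sdx

# Optional L2ARC read cache on a second SSD
zpool add tank cache /dev/sdy

zpool status tank   # verify the new vdevs
```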
If you are adamant about ZFS, I would suggest using Solaris/Nexenta/OpenSolaris or BSD for the storage host and then exporting the storage to the Xen hosts over iSCSI/ATA-over-Ethernet/etc.
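The usual pattern there is to carve out one zvol per guest and export it as a block device. A rough sketch, assuming an illumos/Solaris-family storage host with COMSTAR and a hypothetical pool `tank` (you would still need to create an iSCSI target with `itadm` on top of this):

```
# Create a 100 GB block volume to back one Xen guest
zfs create -V 100G tank/xen-vm1

# Register it as a COMSTAR logical unit; note the GUID it prints
stmfadm create-lu /dev/zvol/rdsk/tank/xen-vm1

# Make the LU visible to initiators (GUID from the previous command)
stmfadm add-view <LU-GUID>
```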
I strongly suggest at least skimming the Backblaze blog and looking at the ideas they use in the construction of their Pods.