I have a 16-port Areca ARC-1260D PCIe RAID card that I was planning to use as one large array (either RAID 10 or RAID 6). I hadn't considered whether the card could be split into two (or more) separate RAID arrays, but it does support multiple RAID sets.
Is there a large performance drop-off (or any other caveat) in running multiple RAID sets off a single adapter? I've typically used RAID adapters for a single array at a time, so I'm not sure whether multiple RAID sets on one adapter is wise. Initially I was planning to use the entire array for XenServer VMs, but with this option I'm considering one array for the VMs and another for simple file storage.
Edit: This is for SATA, not SAS. Initially I was looking to fill the array with 1.5TB SATA disks, but the price of 2TB disks has fallen dramatically, so I'm now thinking of two RAID arrays on the card: one with 6-8 1.5TB disks and the other with 6-8 2TB disks.
There are a few reasons you might see a minor performance drop from running multiple RAID arrays. First, you're likely to be splitting the controller cache to some degree, though that shouldn't have much impact. You're also more likely to introduce queue/bus contention, but again the difference should be small. What matters far more is that each array will contain fewer disks: a single 16-disk RAID 10 can stripe I/O across all 16 spindles, whereas two 8-disk arrays cap any one workload at 8.
You're not going to see any difference with RAID 10. With RAID 5 and RAID 6 you might, in theory, see an indiscernibly small drop in write throughput when writing heavily to both LUNs at the same time, depending on Areca's implementation. What you're worried about is essentially premature optimization; get the general sizing and layout right first, before turning to vendor-specific implementation quirks.
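If you'd rather measure than guess, a quick concurrent-write test will show whether the two volumes contend on your hardware. Here's a minimal sketch using fio; /dev/sdb and /dev/sdc are placeholders for the two Areca volumes, and writing to the raw devices like this will destroy any data on them:

    # DESTRUCTIVE: writes to the raw block devices. Run each job alone
    # first, then both together, and compare the reported bandwidth.
    fio --name=vol1 --filename=/dev/sdb --rw=write --bs=1M --direct=1 \
        --ioengine=libaio --iodepth=32 --runtime=60 --time_based \
        --group_reporting &
    fio --name=vol2 --filename=/dev/sdc --rw=write --bs=1M --direct=1 \
        --ioengine=libaio --iodepth=32 --runtime=60 --time_based \
        --group_reporting &
    wait

If the per-volume throughput in the combined run roughly matches the solo runs, the shared controller isn't a bottleneck for your workload.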
I've used a number of the older 11xx and 12xx series Areca controllers and currently manage a server with an 1880 series controller.
Volume management on Areca cards works much like LVM on Linux. First you assign drives to RAID Sets, which are analogous to volume groups. Then you create Volumes on a RAID Set, and it's at the Volume level that you specify the RAID level. Hot spares can be global or tied to a particular RAID Set, and you can mix RAID levels within a single RAID Set if you wish. The controllers also support online RAID-level and volume migrations.
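To make the analogy concrete, here is the same two-level model expressed with standard Linux LVM commands (this is LVM itself, not Areca's management interface; the device names and sizes are placeholders):

    # A pool of disks, analogous to an Areca RAID Set.
    pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
    vgcreate pool0 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Independent volumes carved from the pool, analogous to Areca Volumes;
    # on the Areca, each one could even use a different RAID level.
    lvcreate -L 500G -n vms pool0
    lvcreate -L 2T -n filestore pool0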
The performance impact of having multiple volumes on one RAID SET is going to depend on your workload and a number of other factors like system ram, controller cache etc. But keep in mind that most of the Areca controllers use standard memory modules for cache so they are field upgradeable. I just checked and it looks like the 1260 uses so-DIMMS but I didn't look to see what the largest size so-dimm it can accept.
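If you want to see what cache module is currently installed without opening the case, Areca's command-line utility (cli64) can report it. A rough sketch, with the caveat that exact command names can vary across CLI versions:

    # Assumes Areca's cli64 utility is installed and can reach the controller.
    ./cli64 sys info   # controller model, firmware version, installed memory
    ./cli64 hw info    # hardware readouts (temperatures, voltages, fan)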