On our Linux servers, we currently use HP's qla2xxx drivers, because they have multipathing (active/passive) built in.
There are, however, various other options, like Red Hat's device-mapper-multipath with the stock qla2xxx drivers (multibus and failover), and things like SecurePath and PowerPath (both of which can do trunking, IIRC).
Can someone tell me what the merits and demerits of the various options are (if I can ask such a question), besides the obvious fact that the {Secure,Power}Path options cost vast amounts of money? I'm mainly interested in the freely available options, like HP's qla2xxx vs. Red Hat's multipathd and possibly other open-source solutions, but I would also like to hear good reasons to go for the commercial solutions.
UPDATE: I'll be benchmarking the various options over the coming few days: the average of 10 iozone runs for each option, the options being native qla2xxx failover, native qla2xxx multibus, and HP qla2xxx failover. I'll post a summary of the results here for those interested.
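For anyone who wants to reproduce the methodology, a rough driver could look something like the sketch below (Python; the mount point, file size, record size and iozone options are placeholders, and the output parsing assumes the plain '-i 0 -i 1' column layout, so adjust it for other versions or option sets):

    #!/usr/bin/env python3
    # Rough benchmark driver: run iozone N times and average the throughput
    # columns.  The parsing assumes iozone is invoked with exactly
    # '-i 0 -i 1 -s <size> -r <reclen>', whose result line reads
    # '<size-kB> <reclen> write rewrite read reread'; adjust the regex for
    # other option sets or iozone versions.
    import re
    import statistics
    import subprocess

    RUNS = 10
    SIZE_KB = 524288                     # -s 512m worth of test file
    RECLEN_KB = 64                       # -r 64k record size
    TESTFILE = "/mnt/san/iozone.tmp"     # placeholder: a file on the multipathed LUN

    result_re = re.compile(
        rf"^\s*{SIZE_KB}\s+{RECLEN_KB}\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)", re.M)

    def one_run():
        out = subprocess.run(
            ["iozone", "-i", "0", "-i", "1",
             "-s", f"{SIZE_KB}k", "-r", f"{RECLEN_KB}k", "-f", TESTFILE],
            check=True, capture_output=True, text=True).stdout
        return tuple(int(v) for v in result_re.search(out).groups())

    samples = [one_run() for _ in range(RUNS)]
    for name, column in zip(("write", "rewrite", "read", "reread"), zip(*samples)):
        print(f"{name:8s} mean {statistics.mean(column):10.0f} kB/s  "
              f"stdev {statistics.pstdev(column):8.0f}")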
In the past I have used device mapper multipath, the IBM SDD driver, and the RDAC drivers.
I vastly prefer device mapper multipath for its simplicity, tight coupling to the kernel, and reliability.
The IBM SDD was originally an AIX driver ported to Linux. It worked well, but to avoid kernel taint it required specific Linux kernel revisions, which often lagged 3-6 months behind the latest and greatest.
I hate RDAC storage and trying to get multipathing software of any kind working with it. If you have RDAC storage, avoid the RDAC drivers and use dm-multipath. It's more reliable in my experience.
For HBA drivers, I typically stick with whatever comes with the Linux kernel, since that also works with dm-multipath. Some of the biggest frustrations in my career have been trying to get the RDAC or SDD drivers working with the HBA drivers. Often there's a mismatch somewhere, and half the LUNs aren't seen, or they conflict and you see the same ones twice.
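A quick way to sanity-check that is to group the sd devices by their WWIDs; the sketch below does it through sysfs (it assumes a kernel recent enough to expose /sys/block/sd*/device/wwid; on older systems you would shell out to scsi_id instead):

    #!/usr/bin/env python3
    # Group SCSI disks by WWID so LUNs visible down several paths (or LUNs
    # that show up twice without being claimed by dm-multipath) are easy to
    # spot.  Assumes a kernel recent enough to expose
    # /sys/block/sd*/device/wwid; on older systems, shell out to scsi_id.
    import collections
    import glob
    import os

    by_wwid = collections.defaultdict(list)
    for dev in sorted(glob.glob("/sys/block/sd*")):
        try:
            with open(os.path.join(dev, "device", "wwid")) as f:
                wwid = f.read().strip()
        except OSError:
            continue          # no VPD identifier exposed for this device
        by_wwid[wwid].append(os.path.basename(dev))

    for wwid, paths in sorted(by_wwid.items()):
        note = "" if len(paths) > 1 else "   (single path)"
        print(f"{wwid}: {' '.join(paths)}{note}")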
Another vote for DM multipath.
I've suffered at the hands of QLogic's own qla3xxx/qla4xxx drivers and the userland utilities that are meant to control them. Our experience might be slightly different because the cards were OEM'ed by IBM as the only iSCSI HBAs available for their blades, but I suspect it applies equally. The drivers and utilities were a nightmare to use. Additionally, neither IBM nor QLogic was able to provide technical direction for using the cards in their own recommended environments.
In contrast, the upstream kernel drivers work flawlessly. All the interfacing we require is exposed through sysfs. LUNs from different paths arrive as block devices just as you'd expect, ready to be claimed by multipathd. multipathd is relatively easy to configure and does exactly what it says on the tin when trouble strikes. If you have the technical expertise to do without whatever basic support the vendors might provide, then this would be my recommendation.
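For illustration, a minimal /etc/multipath.conf along these lines is usually all it takes (the WWID, alias and local-disk blacklist entry are placeholders; take the real values from the output of multipath -ll):

    # Minimal /etc/multipath.conf sketch -- the WWID, alias and local-disk
    # blacklist entry are placeholders; pull the real values from the
    # output of 'multipath -ll'.
    defaults {
            user_friendly_names yes
            path_grouping_policy failover
    }
    blacklist {
            devnode "^(ram|loop|fd|md|sr|scd|st)[0-9]*"
            wwid    <wwid-of-the-local-boot-disk>
    }
    multipaths {
            multipath {
                    wwid  3600a0b80001234560000abcdef012345
                    alias sanvol01
            }
    }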
PS: If you are looking to boot from a root-on-multipath setup, it is a bit tricky but perfectly achievable. I can provide some notes if required.
Well, so far DM for me too. I've tried both RDAC and DM on a DS4700; neither will do dynamic load balancing on it, just failover. If you enable round-robin balancing, your throughput collapses; something I read somewhere blames this on the DS4700 not being able to serve a LUN actively through both controllers at once.
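For reference, a device stanza along these lines should keep dm-multipath in failover-only mode (the vendor/product strings are my best guess for the 1814 family behind a DS4700, so verify them against multipath -ll before relying on them):

    devices {
            device {
                    # vendor/product are guesses for the 1814 family behind
                    # a DS4700; confirm with 'multipath -ll' or
                    # /sys/block/sd*/device/model before using them
                    vendor                "IBM"
                    product               "1814*"
                    path_grouping_policy  failover
                    failback              immediate
                    no_path_retry         12
            }
    }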
I've seen no performance difference between RDAC and DM, although DM will cause the SANsurfer software to complain, for some reason, about non-preferred controllers being selected.
RDAC was also a nightmare to compile under Debian for me; I wish people would stop thinking that Linux is only RHEL and SuSE!
What about SDD? Does it have any pros over these two?