We have four Dell servers (with plans to add four more) with six NICs each, running VMware ESXi 4.1. We would like to connect all of them to an Openfiler iSCSI SAN via HP ProCurve 1810G switches. Based on the design below, is there anything I should be concerned about, or anything unusual I should look out for, when making the iSCSI network configurations on the servers, switches and Openfiler? Should I bond the connections on the servers or simply set them up for failover? The primary goal is to maximize IOPS. Thanks in advance.
I'm fairly certain bonding won't help you at all in this scenario. You want multipathing to do its thing on the ESX 4.1 hosts. Your multipathing policy options will be limited, as I don't think there's an Openfiler-specific PSP (Path Selection Plugin) or Multipathing Extension Module available for vSphere, but Round Robin should work and distribute load across all available paths to some degree; it should certainly deliver better performance than plain failover would.
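For reference, switching a LUN over to Round Robin from the 4.1 CLI looks roughly like this. It's just a sketch: the naa device identifier is a placeholder you'd replace with your own Openfiler LUN's ID from the device list, and I'm assuming the LUNs get claimed by the generic active/active SATP.

```
# List devices and the path selection policy currently applied to them
esxcli nmp device list

# Set Round Robin on one LUN (substitute the naa ID from the list above)
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# Optionally make Round Robin the default for the generic active/active SATP
# so newly presented Openfiler LUNs pick it up automatically
esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR
```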
VMware do not recommend NIC bonding at all for iSCSI on vSphere 4.1. The iSCSI SAN Configuration Guide now clearly states that you should create multiple independent iSCSI VMkernel ports, map exactly one physical NIC to each of them (no teaming at all, not even for failover), and leave iSCSI failover to the multipathing stack.
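If it helps, the rough CLI sequence for that setup on 4.1 is below. vSwitch1, vmnic2/vmnic3, the IPs, vmk1/vmk2 and vmhba33 are all placeholders for your own uplinks, VMkernel ports and software iSCSI adapter.

```
# Dedicated iSCSI vSwitch with two physical uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# One port group and one VMkernel port per uplink
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vswitch -A iSCSI2 vSwitch1
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 10.0.0.12 -n 255.255.255.0 iSCSI2

# In the vSphere Client, override NIC teaming on each port group so it has
# exactly one Active uplink and the other set to Unused (not Standby)

# Bind each VMkernel port to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
```

After that, a rescan of the software iSCSI adapter should show one path per VMkernel port to each LUN.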
Other than that, your proposed layout looks good. Make sure you have nothing else on those switches if at all possible, and VLAN the iSCSI network if you can't; enable hardware flow control and use jumbo frames if you can. I'm not sure I'd be confident in Openfiler on commodity hardware for this, but if you are, that's your call. Hosting more than a handful of VMs off a single point of failure like that wouldn't allow me to sleep all that easily at night (or party hard when on holiday :) ).
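On the jumbo frames point, note that 4.1 won't let you change the MTU of an existing VMkernel port, so the iSCSI ones need to be created (or deleted and recreated) with it set. A sketch, reusing the placeholder names from above:

```
# Raise the MTU on the iSCSI vSwitch
esxcfg-vswitch -m 9000 vSwitch1

# Recreate each VMkernel port with a 9000-byte MTU
esxcfg-vmknic -d iSCSI1
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 iSCSI1

# Verify end to end with a non-fragmenting ping to the Openfiler target
# (8972 = 9000 minus IP and ICMP headers); 10.0.0.1 is a placeholder IP
vmkping -d -s 8972 10.0.0.1
```

The 1810G side (jumbo frames and flow control) is configured through its web interface, and the MTU has to be raised on the Openfiler NICs as well or the larger frames will be dropped.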