Is it possible to setup a server with a host-based adapter (HBA) directly to a Dell/EMC CX3-40f disk-array enclosure (DAE) which only has fiber connections, taking the storage-processor out of the equation? Would something like Openfiler have this capability? Can it be done without Openfiler?
Not sure how the drives could be provisioned for RAID, but even having direct access to the drives could be useful for OpenStack or similar.
For example, a host with two dual-port fibre-channel (FC) cards could connect to four SPEs, giving that host access to 60 drives (15 per SPE).
You can connect directly to a JBOD ("just a bunch of disks") array. Whether you can connect directly to the disk shelves in an array that has SPs (storage processors) without going through the SPs probably depends on the model and vendor. Even if you can, you may first need to undo any proprietary formatting on the disks. You mention SPEs, so is it a CLARiiON?
So if you have not yet bought the hardware, buy a JBOD. It will be far cheaper and allows direct attachment (or attachment via a switch). But if you already have an enterprise-class array with SPs, then use them and enjoy the vendor's technical support, as well as the software and firmware benefits. You can present each disk as a separate LUN if you wish, and disable read or write caching or whatever else it is you are trying to avoid.
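If the disks do end up visible to the host as individual FC LUNs (or as raw JBOD disks), software RAID on the host is one way to handle the provisioning the question asks about. A minimal sketch using Linux mdadm, with hypothetical device names (`/dev/sdb` through `/dev/sde` — check your own `lsscsi`/`lsblk` output first):

```shell
# Rescan the FC HBAs so newly attached disks appear (host numbers vary)
for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done

# List the devices the HBAs can see
lsscsi

# Build a RAID-5 set across four directly attached disks
# (hypothetical device names -- verify before running)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on the array and mount it
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/storage
```

For OpenStack, you could instead hand the raw devices straight to Cinder's LVM backend rather than building an md array, depending on how much redundancy you want handled on the host.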
Aside from the already-mentioned exporting of single-disk LUNs, I doubt any storage array will work at all with the logic board taken out of the equation. After all, besides the obvious RAID and storage-management functionality, the controller's main purpose is to act as a kind of SATA/SAS controller that makes the disks available over FC. Without it, you can't connect to the disks.
I can't find any documentation to support this, but I recall an engineer once telling me that modular storage shelves designed to slot into a particular storage controller have safeguards built in to prevent them from being used with anything but the storage system they were sold with.