Is there currently a mechanism for constructing a hardware RAID array using storage devices that have a full-size PCIe slot, PCIe M.2, or PCIe U.2 bus interface, such as SSDs?
It is extremely unclear to me how this might work: a normal hardware RAID has the drives plugged directly into the RAID controller via SATA or SAS interfaces, whereas these newer storage devices plug straight into the system PCIe bus without a separate drive/device controller in between.
I can foresee two possible hardware implementations:
- A pure PCIe x16 or x8 RAID controller with no onboard drive interface connectors, which communicates with the separate storage devices over the standard system PCIe bus. Data transfer speeds may be limited by the PCIe lanes available between the RAID controller and each individual PCIe storage device (see the rough lane-budget sketch after this list). The RAID controller also lacks exclusive access to the member drives, which seems like it could be a data integrity/security problem.
- A RAID controller with its own secondary PCIe bus on the card, with a special PCIe bus interface cable extending out to an external PCIe slot cage that might resemble a traditional SCSI/SAS hot-plug backplane. The PCIe storage devices plug into this isolated PCIe bus, can ONLY communicate with the RAID controller, and have no direct path to the system CPU or memory.
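To put rough numbers on the lane concern in the first option, here is a back-of-envelope sketch. The per-lane throughput and per-drive link width are generic PCIe 3.0 assumptions, not figures for any particular controller:

```python
# Rough lane/bandwidth budget for option 1: a RAID controller sharing the
# system PCIe bus with the member drives. Assumed numbers (not from any
# datasheet): PCIe 3.0 gives roughly 0.985 GB/s usable per lane, and each
# U.2 / M.2 NVMe drive wants an x4 link.

PCIE3_GBPS_PER_LANE = 0.985   # approx. usable GB/s per PCIe 3.0 lane
LANES_PER_NVME_DRIVE = 4      # typical U.2 / M.2 NVMe link width
CONTROLLER_LANES = 16         # hypothetical RAID controller in an x16 slot

def lane_budget(num_drives: int) -> None:
    drive_bw = num_drives * LANES_PER_NVME_DRIVE * PCIE3_GBPS_PER_LANE
    uplink_bw = CONTROLLER_LANES * PCIE3_GBPS_PER_LANE
    verdict = "fits" if drive_bw <= uplink_bw else "bottlenecked at the controller"
    print(f"{num_drives} drives: ~{drive_bw:.1f} GB/s aggregate vs "
          f"~{uplink_bw:.1f} GB/s controller uplink ({verdict})")

for n in (2, 4, 8):
    lane_budget(n)
```

With these assumptions, four x4 drives already saturate an x16 uplink, which is why the dedicated-lane approach in the second option looks more attractive for larger arrays.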
(As of writing this, searching Server Fault for "U.2 RAID" or "M.2 RAID" turns up nothing, and I'm creating the tags "U.2" and "M.2". Is no one already doing this?)
I am now trying to accomplish what you are proposing on an aging Dell. I also wondered how this is possible, although I read somewhere that newer mainboards are designed to boot from pairs of PCIe drives configured as RAID. I have not confirmed this.
My solution, though not perfect, was to mount my M.2 drives in enclosures that have the 2.5" SSD form factor and U.2 ports, and then connect them to a high-performance RAID card via SAS. I haven't dug into the actual transfer rates, but I presume they are far better than those of regular SATA SSDs configured for RAID; a rough per-interface comparison is sketched below.
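For a ballpark sense of why this should beat SATA, here is a quick comparison of nominal per-drive link throughput. These are approximate usable rates after encoding overhead, not measurements of any specific drive or controller, and the NVMe figure only applies if a drive actually ends up on a PCIe x4 link rather than a SAS one:

```python
# Ballpark usable per-drive throughput by interface, to sanity-check the
# "far better than SATA" presumption. Approximate figures only.

interface_gbytes_per_sec = {
    "SATA III (6 Gb/s)":          0.6,   # ~600 MB/s usable
    "SAS-3 (12 Gb/s, one lane)":  1.2,   # ~1.2 GB/s usable
    "PCIe 3.0 x4 NVMe (U.2/M.2)": 3.9,   # ~3.9 GB/s usable
}

for name, rate in interface_gbytes_per_sec.items():
    print(f"{name:30s} ~{rate:.1f} GB/s per drive")
```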
I'm not out of the woods yet, however. I'm now looking for cables to connect all of this. But at least you might consider this avenue if you haven't already found another solution that works.
(Responding to my own question, months later. I expect this is going to be an evolving technology and more products will become available over time.)
As of Feb 2020, there are now two search results for "U.2 RAID":
This is a bootable PCIe 3.0 x16 RAID controller (the HighPoint SSD7120) which takes the route of having its own dedicated onboard PCIe 3.0 x4 subchannels on four internal connectors, for up to four U.2 devices. It supports RAID 0, 1, 0/1, and JBOD.
They make a cable, part # 8643-8639-50, described as an "SFF-8643 to U.2 SFF-8639 connector with 15-pin SATA Power Connector, Length: 19" (50 cm)".
This cable allows the U.2 drives to be mounted in a standard internal 2.5"/3.5" drive bay.
This bootable U.2 RAID controller has similar capabilities to the SSD7120, with four dedicated onboard PCIe 3.0 x4 subchannels, but instead combines them into two external 8-lane SFF-8644 connectors on the back of the card. It includes:
For more distance between the external enclosure and the controller, there is an optional 2-meter 8-lane cable, part # 8644-8644-220.
(It will be interesting to see how long it takes for RAID 5/6 to become available for enterprise U.2 device arrays. The throughput of individual drives is already ridiculous... is there a parity processor that can keep up without itself becoming the throughput bottleneck? A rough estimate of the required parity workload is sketched below.)
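As a back-of-envelope illustration of that question, here is a sketch of how much data a RAID 5 parity engine would have to process to keep up with full-stripe sequential writes. The ~3 GB/s per-drive figure is an assumption for a fast PCIe 3.0 x4 NVMe drive, not a number from any specific product:

```python
# Back-of-envelope parity load for RAID 5 across fast U.2 NVMe drives.
# Assumption: ~3 GB/s sustained per drive; numbers are illustrative only.
from functools import reduce

PER_DRIVE_GBPS = 3.0  # assumed sustained throughput per member drive

def raid5_parity_load(data_drives: int) -> float:
    """GB/s the XOR engine must process to keep a full-stripe sequential
    write running across `data_drives` data members."""
    return data_drives * PER_DRIVE_GBPS

for n in (3, 7):
    print(f"RAID 5, {n} data drives + 1 parity: "
          f"XOR engine must handle ~{raid5_parity_load(n):.0f} GB/s")

# The XOR itself, demonstrated on toy 8-byte "stripe chunks":
chunks = [bytes([v] * 8) for v in (0x11, 0x22, 0x44)]
parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
print("parity chunk:", parity.hex())   # 0x11 ^ 0x22 ^ 0x44 = 0x77 per byte
```

Even a modest 8-drive array would need the parity path to sustain on the order of 20 GB/s, which is why this seemed likely to stay in software or dedicated silicon for a while.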