Prefacing this by saying I'm not a storage expert.
I have a storage array that I want to connect to a physical Linux host running Ubuntu 18.04.6. The server is a Dell PowerEdge R640 with a 4-port SAS HBA, and the storage array is a Dell PowerVault ME4024, a dual-controller unit with 4 SAS ports per controller.
Here's a diagram for a quick explanation of how everything is connected:
The reason for the dual connection is that I was advised to set it up this way: storage errors are very difficult to recover from, and redundant paths cover things like a cable getting disconnected or a controller failing.
The array is small right now, but it has a lot of empty drive bays I want to use to expand the storage later. For now, there are two 900 GB drives installed in a RAID1 disk group:
The server appears to be seeing the storage twice though, once per connection, as sdc and sdd:
$ lsblk -I 8
NAME                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                       8:0    0   4.4T  0 disk
├─sda1                    8:1    0   512M  0 part /boot/efi
└─sda2                    8:2    0   4.4T  0 part
  ├─ubuntu--vg-root     253:0    0   4.4T  0 lvm  /
  └─ubuntu--vg-swap_1   253:1    0   976M  0 lvm  [SWAP]
sdb                       8:16   0 223.5G  0 disk
sdc                       8:32   0 837.3G  0 disk
sdd                       8:48   0 837.3G  0 disk
The storage on sdc/sdd has not been partitioned or formatted yet.
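(If it helps to double-check: the commands below, using the device names from my lsblk output, should print the same SCSI WWID for both devices if they really are two paths to the same LUN.)

$ /lib/udev/scsi_id -g -u -d /dev/sdc
$ /lib/udev/scsi_id -g -u -d /dev/sdd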
How do I configure things so that the server knows sdc and sdd are redundant connections to the same RAID1 block of storage on the array? Does it have something to do with the way I configured it on the PowerVault?
You need to configure multipathing for the sdc/sdd devices and then create LVM on top of the resulting multipath device. There are a lot of guides on how to do it; here is one from the Ubuntu documentation: https://ubuntu.com/server/docs/introduction-to-device-mapper-multipathing
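A rough sketch of what that looks like on Ubuntu 18.04 is below; the volume group name me4024vg and logical volume name data are just placeholders, and the multipath device may appear under a different name than mpatha depending on your multipath.conf, so check the output of multipath -ll first.

# Install the multipath tools and start the daemon
$ sudo apt install multipath-tools
$ sudo systemctl enable --now multipathd

# multipathd should coalesce sdc and sdd into a single device under
# /dev/mapper; verify that both paths show as active for the same WWID
$ sudo multipath -ll

# Build LVM on the multipath device, not on sdc/sdd directly
$ sudo pvcreate /dev/mapper/mpatha
$ sudo vgcreate me4024vg /dev/mapper/mpatha
$ sudo lvcreate -l 100%FREE -n data me4024vg
$ sudo mkfs.ext4 /dev/me4024vg/data

With that in place, losing one cable or one controller just drops a path in multipath -ll while I/O continues over the remaining one.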
Please keep in mind that your setup still has a single point of failure: the single SAS HBA in the Linux server.