I'm trying to get my head around how a multi-lane x4 SAS connector works in a DAS system such as the Dell MD1000. I understand that each lane is 3Gbps, and that multiple hard drives can share a lane. What I'm having trouble finding information on is how drives are assigned to a lane, and how that changes when you daisy-chain two additional MD1000s to another MD1000.
That's 15 disks per shelf and a total of 45 disks in a three-shelf configuration. This would be a single-path configuration BTW, meaning one x4 SAS cable.
Edit 1:
All, first, thanks for all the help, but I think you're all heading down a slightly different path than what I'm asking. I get the whole throughput saturation issue; that wasn't my question. I know what the theoretical max is simply based on the fact that the server has a single 12Gb connection per array of shelves. Meaning Server >>>>12Gb>>>>MD1000 #1>>>>>MD1000 #2>>>>>MD1000 #3 is going to be 12Gb, as it all depends on the server's single connection, and each MD1000 is daisy-chained with a 12Gbps link.
My question is simply: if there are 15 drives per shelf and 3 shelves, how do I know which drives go on which lane of a given x4 connector? While it's likely not to matter in the end, I was merely curious.
Also FYI, the enclosure is SAS, but the drives are SATA.
You'll be massively oversubscribed. A single 4-lane SAS link at 3Gbps per lane == 12Gbps total throughput. There's an expander in each MD1000 enclosure, so your 45 (SAS or SATA) disks will easily saturate that link. That's a theoretical max throughput of 1.5 gigabytes/second over that connection (12 gigabits/sec == 1.5 gigabytes/sec).
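A quick sanity check of that arithmetic, as a sketch (lane count and per-lane rate are the values from the thread; encoding overhead is ignored, as it is above):

```python
# Bandwidth of a single-path x4 SAS connection, per the numbers above.
LANES = 4
GBIT_PER_LANE = 3  # each 3Gbps SAS lane

link_gbit = LANES * GBIT_PER_LANE  # 12 Gbit/s aggregate for the x4 link
link_gbyte = link_gbit / 8         # 1.5 GB/s (raw; 8b/10b line coding not counted)

print(f"{link_gbit} Gbit/s == {link_gbyte} GB/s")
```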
I'm going to argue that while the theoretical sequential max throughput of all your disks is greater than the SAS chain can handle, with a backup server you may very well never reach that limit or come close to it.
Let's look at some limits in your system.
Theoretical max throughput of a 7.2k SAS drive: ~1.2Gbit/sec (150 megabytes per second)
Theoretical max throughput of 45 of your SAS drives: 45 * 1.2Gbit/sec = 54Gbit/sec
Theoretical max throughput of your SAS chain: 12Gbit/sec
So we're down to 12Gbit/sec so far.
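The oversubscription ratio those numbers imply, as a quick sketch:

```python
# Oversubscription estimate from the figures above.
DRIVES = 45
GBIT_PER_DRIVE = 1.2  # ~150 MB/s sequential for a 7.2k drive
CHAIN_GBIT = 12       # the x4 SAS chain

total_drive_gbit = DRIVES * GBIT_PER_DRIVE       # 54 Gbit/s of raw disk bandwidth
oversubscription = total_drive_gbit / CHAIN_GBIT # 4.5x more disk than chain
```

So the disks could, in theory, push 4.5x what the chain can carry — but only if all 45 are streaming sequentially at once.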
How is your server connected to the DAS? 3Gb SAS? Ok, you've got 12Gbit again.
Your application is a backup server. Does it really have 12Gbit/sec connectivity to all of its backup clients? If so, can each client saturate the backup network (reading from their own disks) to the point where you would actually get 12Gbit/sec coming into the backup server? Probably not. That is a LOT of throughput! Your network would have to support that traffic. The backup server would have to have enough CPU to process all that traffic. Etc. etc.
My point is simply: if you have a couple of 1Gb NICs on this box and are using it for backup, you very well may never need to worry about the bandwidth of the SAS chain, because you'll max out your network or the throughput capabilities of your backup clients long before you hit that limit.
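To put that in numbers (the NIC count here is an assumption taken from the "couple of 1Gb NICs" wording, not a known fact about the box):

```python
# Back-of-envelope: network ingest vs. SAS chain capacity.
NIC_COUNT = 2   # assumed: "a couple of 1Gb NICs"
NIC_GBIT = 1
CHAIN_GBIT = 12

ingest_gbit = NIC_COUNT * NIC_GBIT  # at most 2 Gbit/s arriving from clients
headroom = CHAIN_GBIT / ingest_gbit # chain has 6x what the network can deliver
```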
That said, if I could design the system myself, I would give you more SAS bandwidth, but my takeaway here is that it may not be a problem in the real world at all.
All,
So here is the answer: the SAS backplane is basically just a huge bus architecture. It works very much like a hub (not a switch). The MD1000s to some extent have no bearing on this. From a generic RAID controller's perspective, it just sees 45 drives. The lanes aren't being broken apart or anything like that, at least not between the chassis.
SAS, by and large, is just one big cable; the easiest way to think about it is Christmas lights. The cable is the path, and the lights are the HDDs.
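One way to picture that shared-bus behavior — a toy fair-share model, not how an expander actually schedules traffic:

```python
# Toy model: N drives behind the expanders share one x4 uplink,
# much like hosts on a hub share a single collision domain.
def per_drive_share(active_drives, chain_gbit=12.0):
    """Rough fair-share bandwidth per active drive on the shared uplink."""
    return chain_gbit / active_drives

# All 45 drives streaming at once: each gets roughly 0.27 Gbit/s.
# Only 8 drives active: each can run at its full ~1.2 Gbit/s sequential rate.
busy = per_drive_share(45)
quiet = per_drive_share(8)
```

Which drive sits in which shelf doesn't change this picture — they all contend for the same uplink.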