I'm unsure about the differences in these storage interfaces. My Dell servers all have SAS RAID controllers in them and they seem to be cross-compatible to an extent.
The Ultra-320 SCSI RAID controllers in my old servers were simple enough: one type of interface (SCA) with special drives and special controllers, humming at 10-15K RPM. But these SAS/SATA drives seem like the drives I have in my desktop, only more expensive. Also, my old SCSI controllers have their own battery backup and DDR buffer - neither of which is present on the SAS controllers. What's up with that?
"Enterprise" SATA drives are compatible with my SAS RAID controller, but I'd like to know what advantage SAS drives have over SATA drives as they seem to have similar specs (but one is a lot cheaper).
Also, how do SSDs fit into this? I remember when RAID controllers required HDDs to spin at the same rate (as if the controller card supplanted the controller in the drive) - so how does that work out now?
And what's the deal with Near-line SATA?
I apologise for the rambling tone of this message; it's 5am and I haven't slept much.
This has been covered here... See the related links on the right pane of this question.
Right now, the market conditions are such that you should try to use SAS disks everywhere you can.
Also see:
SAS or SATA for 3 TB drives?
How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?
"Near-line" is a marketing term for "7.2K RPM drives not designed for 24/7/365 continual usage". Using them in such a role will result in an increased failure rate compared to drives designed to be used flat out for years at a time.
SAS vs SATA: in many cases there is little meaningful difference between the two bus specs, but SAS was designed for massive scale and sophisticated signaling where SATA was not. If all you're looking for is a pile of disks, the difference probably won't matter. There are different on-disk cache-handling protocols though, which can let SAS yield single-percentage-point increases in efficiency at high utilization.
That said, the market seems to have settled on "7.2K RPM is SATA, 10K and 15K RPM are SAS" as another differentiator. There is no reason not to have 15K RPM SATA drives, but no one makes them.
The controllers that drive the SAS and SATA connections are as varied as the SCSI RAID of old. Some have rather complex cache and battery backups (or flash-backed cache with a high-capacity capacitor to commit the cache to flash when power drops). Some are just SAS/SATA connections on a card and don't bother with any kind of caching.
SSDs talk over SAS, SATA, or even something completely different like a PCIe card. RAID cards vary in how well they handle TRIM, and that capability is still evolving. However, the raw throughput SSDs can deliver can rapidly overrun the RAID card's ability to keep up; when that happens the RAID card itself becomes the biggest bottleneck to performance. The PCIe cards are the fastest SSDs around and present to the OS much like an HBA.
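To put rough numbers on that bottleneck point, here's a quick back-of-the-envelope sketch in Python. All the throughput figures are illustrative assumptions, not specs for any particular card or drive:

```python
# Rough check: does the aggregate sequential throughput of the SSDs behind a
# RAID card exceed what the card itself can move? All figures below are
# illustrative assumptions, not specs for any real card or drive.

SSD_COUNT = 8
SSD_SEQ_READ_MBPS = 500       # assumed per-SSD sequential read, MB/s
CONTROLLER_LIMIT_MBPS = 2000  # assumed RAID card ceiling (PCIe link + ASIC), MB/s

aggregate = SSD_COUNT * SSD_SEQ_READ_MBPS
print(f"Aggregate SSD throughput: {aggregate} MB/s")
print(f"Controller ceiling:       {CONTROLLER_LIMIT_MBPS} MB/s")

if aggregate > CONTROLLER_LIMIT_MBPS:
    print("The RAID card, not the disks, is the bottleneck.")
else:
    print("The controller can still keep up with the SSDs.")
```

With those assumed numbers, eight SSDs can push about twice what the card can move, which is exactly the situation where PCIe SSDs that bypass the RAID card start to look attractive.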
RAID systems are beginning to handle things like storage tiering, a feature previously only really available in high-end SAN arrays. Get a pile of 7.2K/10K disks and a few SSDs, and the RAID card will move the most frequently accessed blocks to the SSDs.
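If it helps to picture what a tiering engine does, here's a toy Python sketch of the idea: count how often each block gets touched and promote the hottest ones to the SSD tier. Every name, capacity and threshold here is made up for illustration; real controllers do this in firmware at the block or extent level.

```python
from collections import Counter

# Toy illustration of storage tiering: track block "heat" and promote the
# hottest blocks to a (pretend) SSD tier. Not how any real controller is coded.

SSD_TIER_CAPACITY_BLOCKS = 4   # pretend the SSD tier holds 4 hot blocks

access_counts = Counter()      # block id -> number of recent accesses

def record_access(block_id):
    """Note that a block was read or written."""
    access_counts[block_id] += 1

def blocks_to_promote():
    """Return the hottest blocks, which should live on the SSD tier."""
    return [blk for blk, _ in access_counts.most_common(SSD_TIER_CAPACITY_BLOCKS)]

# Simulate a workload that hammers a handful of blocks.
for block in [1, 1, 1, 2, 2, 7, 7, 7, 7, 9, 42]:
    record_access(block)

print("Promote to SSD tier:", blocks_to_promote())   # -> [7, 1, 2, 9]
```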
The other guys have answered very well, but I have a pet subject that I like to roll out whenever this kind of thing rears its head - what's known as 'duty cycle'.
'Duty cycle' is the workload that the disk manufacturer anticipates the disk will see, and that it is designed to handle most reliably.
For instance, many 'enterprise' disks have a 100% duty cycle - meaning they were designed to be utilised reading and writing every second of every day for their expected lifetime. That takes a lot of engineering and consequently costs a lot. Many other disks, especially cheaper consumer 7.2krpm SATA/ATA/FATA disks, may have a duty cycle as low as 30%, meaning they were designed to be utilised heavily for only around 30% of the day on average. That doesn't mean they can't be utilised for longer than this, but doing so pushes the disks harder than their design specification, and that hurts their MTBF/MTTF - i.e. they break more quickly.
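To put the arithmetic in concrete terms, here's a small Python sketch comparing how busy a disk actually is against an assumed 30% design duty cycle. The rating and workloads are illustrative only:

```python
# Duty cycle is just the fraction of the day the disk spends doing real work.
# The 30% rating below is an assumption for a cheap 7.2K RPM SATA disk.

def duty_cycle(busy_hours_per_day):
    """Fraction of a 24-hour day the disk is actually busy."""
    return busy_hours_per_day / 24.0

RATED = 0.30

workloads = {
    "overnight backup target (8h busy)": 8,
    "24/365 database volume (always busy)": 24,
}

for name, hours in workloads.items():
    print(f"{name}: {duty_cycle(hours):.0%} actual vs {RATED:.0%} rated")
```

The backup target sits right around the design point, while the always-busy volume runs at more than three times its rated duty cycle - which is where the increased failure rates come from.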
I've seen this, painfully, myself a number of times. We once had a particular SAN array with a couple of hundred 1TB FATA drives where an internal array re-organisation process kept the disks busy for days at a time, causing a very significant jump in the number of disks dying - and, ironically, each dead disk restarted the re-org process, creating a failure loop.
Basically, if you expect your server to be busy 24/365 and don't like replacing failed disks, don't use anything but 100% duty-cycle disks. That said, lower duty-cycle disks are great for things like overnight disk-based backups where they're only busy for 8 hours or so.