I'm proposing this to be a canonical question about enterprise-level Storage Area Networks.
What is a Storage Area Network (SAN), and how does it work?
How is it different from a Network Attached Storage (NAS)?
What are the use cases compared to direct-attached storage (DAS)?
In what ways is it better or worse?
Why is it so expensive?
Should I (or my company) use one?
First of all, for a (broad) comparison of DAS, NAS and SAN storage see here.
There are some common misconceptions about the term "SAN", which stands for "Storage Area Network" and as such, strictly speaking, refers only to the communication infrastructure connecting storage devices (disk arrays, tape libraries, etc.) and storage users (servers). However, in common practice the term "SAN" is used to refer to two things:

- the storage network itself: a dedicated fabric (Fibre Channel, or Ethernet for iSCSI) carrying only storage traffic;
- by extension, the whole storage infrastructure attached to that network; it's common to hear a single disk array called "a SAN", even if this is technically incorrect.
A SAN can be composed of very different hardware, but can usually be broken down into these components:

- storage arrays: the disk (or tape) systems that actually hold the data, usually with redundant controllers;
- the fabric: dedicated switches (Fibre Channel, or Ethernet for iSCSI) connecting the arrays to the servers;
- host adapters: FC HBAs or (often dedicated) network interfaces in the servers;
- cabling: optical fibre for FC, copper or fibre for Ethernet.
A SAN provides many additional capabilities over direct-attached (or physically shared) storage:

- centralized storage management and provisioning: space is carved into LUNs and presented to servers as needed;
- shared access to the same storage from multiple servers, a prerequisite for clustering and virtualization platforms;
- redundancy at every level (controllers, paths, switches), typically combined with multipath I/O on the hosts;
- advanced features such as snapshots, thin provisioning, and replication to another array, possibly in a remote site.
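As a toy illustration of the shared-access idea, here is a minimal sketch of LUN masking (the array deciding which hosts may see which LUNs). The host and LUN names, and the dictionary-based model, are invented for illustration; real arrays do this in firmware with WWNs or IQNs:

```python
# Toy model of LUN masking: the array keeps an access list per LUN,
# and a host only "sees" the LUNs it has been granted.
# All names (hosts, LUNs) are hypothetical.

masking = {
    "LUN0": {"esx-host-1", "esx-host-2"},  # shared datastore for a two-node cluster
    "LUN1": {"db-server-1"},               # dedicated database volume
}

def visible_luns(host):
    """Return the set of LUNs the array presents to a given host."""
    return {lun for lun, hosts in masking.items() if host in hosts}

print(visible_luns("esx-host-1"))   # both cluster nodes see LUN0 -> shared storage
print(visible_luns("db-server-1"))  # the database server sees only its own LUN
```

The point of the sketch: two hosts being granted the same LUN is what makes shared storage (and therefore clustering) possible, while masking keeps every other host away from it.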
Based on everything above, the benefits of using SANs are obvious; but what about the costs of buying one, and the complexity of managing one?
SANs are enterprise-grade hardware (although there can be a business case for small SANs even in small/medium companies). They are of course highly customizable, so they can range from "a couple of TBs with 1 Gbit iSCSI and somewhat high reliability" to "several hundred TBs with amazing speed, performance and reliability and full synchronous replication to a DR data center"; costs vary accordingly, but are generally higher (both in total cost and in cost per gigabyte of space) than other solutions. There is no pricing standard, but it's not uncommon for even small SANs to carry price tags in the tens (and even hundreds) of thousands of dollars.
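To make the "cost per gigabyte" point concrete, here is a minimal sketch; the price and capacity figures are purely hypothetical examples, not quotes from any vendor:

```python
# Rough cost-per-TB comparison; all figures are hypothetical examples.

def cost_per_tb(total_cost, capacity_tb):
    """Return cost per usable terabyte."""
    return total_cost / capacity_tb

# Hypothetical entry-level SAN: $40,000 for 20 TB usable.
san = cost_per_tb(40_000, 20)

# Hypothetical DAS: server RAID controller plus disks, $4,000 for 10 TB usable.
das = cost_per_tb(4_000, 10)

print(f"SAN: ${san:,.0f}/TB, DAS: ${das:,.0f}/TB")
```

With these (assumed) numbers the SAN costs several times more per terabyte; what you are paying for is the redundancy, features, and shared access, not the raw disk space.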
Designing and implementing a SAN (even more so a high-end one) requires specific skills, and this kind of job is usually done by highly specialized people. Day-to-day operations, such as managing LUNs, are considerably easier, but in many companies storage management is handled by a dedicated person or team anyway.
Regardless of the above considerations, SANs are the storage solution of choice where high capacity, reliability and performance are required.
Do you need one? It depends. The £ or $ per TB is considerably higher than DAS. Plus, DAS does, I'm afraid, out-perform FC-AL and iSCSI SANs (well, at least in my testing with Oracle and SQL Server DBs). But with DAS you don't get the benefit of being able to share storage (good for clustering and VMware).
A number of storage vendors are migrating away from Fibre Channel for the host-to-storage-controller connections, in favour of iSCSI, which runs on top of Ethernet. It's the old Token Ring vs Ethernet saga all over again: with so much industry-wide research and investment in Ethernet, FC just can't keep up. A 10 Gbps Ethernet switch is far cheaper than an 8 Gbps FC one, plus it can be VLANed or otherwise segmented to carry both storage and non-storage traffic.
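As a back-of-the-envelope check on that bandwidth comparison, here is a sketch using the standard line encodings for each technology (8b/10b for 8G FC, 64b/66b for 10GbE); actual usable throughput also depends on protocol overhead and the implementation:

```python
# Approximate usable throughput after line-encoding overhead.
# 8G FC runs at 8.5 Gbaud with 8b/10b encoding (80% efficient);
# 10GbE runs at 10.3125 Gbaud with 64b/66b encoding (~97% efficient).

def usable_mb_per_s(line_rate_gbaud, efficiency):
    """Convert a line rate in Gbaud to approximate usable MB/s."""
    return line_rate_gbaud * 1e9 * efficiency / 8 / 1e6

fc_8g = usable_mb_per_s(8.5, 8 / 10)        # ~850 MB/s per direction
eth_10g = usable_mb_per_s(10.3125, 64 / 66)  # 1250 MB/s per direction

print(f"8G FC: ~{fc_8g:.0f} MB/s, 10GbE: ~{eth_10g:.0f} MB/s")
```

So before any protocol overhead, a 10GbE link already carries meaningfully more payload than an 8G FC link, which reinforces the cost argument above.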
However, there are some big benefits of SANs:

- you can share storage between servers, which is what makes clustering and virtualization (e.g. VMware) practical;
- you can grow storage independently of servers, and move capacity between them as needs change;
- array-level features such as snapshots and replication come with the platform rather than having to be bolted on per server.
If you're considering dipping your toe in the water of shared storage, look at products like HP's P4000 kit.