We are currently looking into buying a SAN and have some concerns over the speed of the disks we are going to put into the array. We would normally have gone for 15k disks over 10k every time, but one of the vendors has told us we won't notice much difference between the two, and the price difference is significant.
We are looking at either an HP 2040 or 4400 SAN with SAS disks. Will we really not see much difference between the disks, and if not, why?
You generally won't notice the difference between modern 2.5" 10k and 15k enterprise SAS disks on something like an HP MSA 2040 storage array... You'll run into other platform limitations before that becomes an issue. When people are concerned about that latency difference, it almost makes more sense to pursue SSDs (which are supported in the 2040 unit).
With all storage, this comes down to your anticipated access patterns (read-biased/write-biased/mixed?), application performance requirements (database/application/virtualization?), transport (fibre/SAS/iSCSI), and array composition (RAID level, # of disks).
Can you provide more detail on what you plan on doing with the array? I'll be able to clarify this answer.
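To give a rough sense of why the RAID level and number of disks matter at least as much as spindle speed, here's a quick back-of-envelope sketch. The per-disk IOps figures, the 24-disk count, and the 70/30 read/write mix are assumptions for illustration, not MSA 2040 numbers:

```python
# Rough front-end IOps an array can sustain, given the RAID write penalty and
# the workload mix. Per-disk IOps and the 70/30 mix are illustrative assumptions.
disk_iops = {"10k SAS": 140, "15k SAS": 180}            # rough per-spindle figures
raid_write_penalty = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}
disks = 24
read_fraction = 0.70                                     # 70% reads / 30% writes

for disk, iops in disk_iops.items():
    backend = disks * iops                               # raw IOps across all spindles
    for raid, penalty in raid_write_penalty.items():
        # each front-end write turns into `penalty` back-end IOs
        frontend = backend / (read_fraction + (1 - read_fraction) * penalty)
        print(f"{disks}x {disk} in {raid}: ~{frontend:5.0f} front-end IOps")
```

Run it with your own numbers and you'll see the RAID level often moves the result more than the 10k-vs-15k choice does.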
Storage performance can get a bit complex, but the underlying statement is true: 10k and 15k disks have similar performance, for the most part. If you put them side by side doing the type of work that disks are worst at (random small-block IO), you will measure a difference, but the reality is that with most storage controllers, that workload is pretty rare.
These days, with most storage able to put hot spots onto their own tier, the need for a heavy 15k tier is greatly reduced, because most of the read-intensive IO can be placed on a higher SSD tier. In my environment, the only place we really need 15k disks is for enormous databases that stand still 99% of the time, but need screaming performance for quarterly and yearly reports that touch almost all the data.
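As a rough illustration of what tiering does to the 10k-vs-15k question: once the hot working set sits on SSD, only a small slice of the IO ever reaches the spindles. The 90/10 skew and the IOps figure below are assumed for illustration:

```python
# Rough sketch of how an SSD tier shifts load off spindles: assume a skewed
# workload where 90% of the IO hits 10% of the data (figures are illustrative).
total_iops = 20_000          # front-end IOps arriving at the array (assumed)
hot_io_fraction = 0.90       # share of IO hitting the "hot" data
hot_data_on_ssd = True       # auto-tiering has promoted the hot blocks

ssd_iops = total_iops * hot_io_fraction if hot_data_on_ssd else 0
hdd_iops = total_iops - ssd_iops
print(f"SSD tier serves ~{ssd_iops:.0f} IOps, spindles serve ~{hdd_iops:.0f} IOps")

# With only ~2,000 IOps left for the spinning tier, the 30-40% per-spindle gap
# between 10k and 15k disks rarely changes how many disks you need.
```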
"Performance" includes two main metrics for disk access: Bandwidth and IOps.
The rotational speed of the platter disks affects IOps primarily. Faster disks => more IOps.
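As a rough illustration of why spindle speed maps to IOps: a single random IO costs roughly one average seek plus half a platter rotation. The seek times below are ballpark assumptions, not vendor specs:

```python
# Rough single-disk random-IOps estimate: one random IO costs roughly an
# average seek plus half a revolution (rotational latency).
drives = {
    "7.2k NL-SAS": {"rpm": 7200,  "avg_seek_ms": 8.5},
    "10k SAS":     {"rpm": 10000, "avg_seek_ms": 4.5},
    "15k SAS":     {"rpm": 15000, "avg_seek_ms": 3.5},
}

for name, d in drives.items():
    rotational_latency_ms = 0.5 * 60000.0 / d["rpm"]   # half a revolution, in ms
    service_time_ms = d["avg_seek_ms"] + rotational_latency_ms
    iops = 1000.0 / service_time_ms
    print(f"{name:12s} ~{iops:4.0f} IOps  (service time ~{service_time_ms:.1f} ms)")
```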
I have cheap older 5400 RPM disks in a home server that can pull 85 MB/s. A Seagate 300GB Cheetah 15K.7 (a very modern disk) is specified at only 125 MB/s: not that much faster, but 10x the price.
But my drives get very poor IOps, barely into the double digits. You need IOps if you're doing a lot of little reads and writes all over the place. The Cheetah drive gets about 500 IOps on average. So when writing a ton of tiny files or making many small DB updates, the Cheetah will be about 50x faster.
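To put numbers on that, here's a quick back-of-envelope comparison using the rough figures above (85 vs 125 MB/s sequential, ~10 vs ~500 IOps); the 4 KB file size and the one-IO-per-file assumption are just for illustration:

```python
# Back-of-envelope: time to write 100,000 small (4 KB) files, assuming each
# file costs one random IO, versus streaming the same amount of data
# sequentially. The drive figures are the rough ones quoted above.
small_files = 100_000
file_size_kb = 4
total_mb = small_files * file_size_kb / 1024            # ~390 MB of data

drives = {
    "5400 RPM home disk": {"seq_mbps": 85,  "random_iops": 10},
    "15K Cheetah":        {"seq_mbps": 125, "random_iops": 500},
}

for name, d in drives.items():
    sequential_s = total_mb / d["seq_mbps"]             # one big streaming write
    random_s = small_files / d["random_iops"]           # one IO per small file
    print(f"{name:20s} sequential ~{sequential_s:5.1f}s   "
          f"small random files ~{random_s:7.0f}s")
```

The sequential times come out within a couple of seconds of each other, while the small-file case is roughly 10,000s versus 200s, which is where the ~50x figure comes from.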
Currently there are 5400, 7200, 10k, and 15k disks commonly available. Which you need depends on what you'll be doing with them. For archival storage, slow disks are cheap and still get good bandwidth. For OLTP you'd want the highest IOps money can buy. Most people fall somewhere in the middle.
That is totally true as long as it is not a low-end SAN. Caching, especially when a larger SSD buffer is involved, can effectively erase those differences. For example, I now regularly copy files at 600-900 MB/s on a RAID 6 of 5400 RPM disks. Latencies are regularly in the low single digits (milliseconds) despite heavy random, write-heavy workloads. The trick? A 20% SSD write-back buffer.
So, on a "proper" SAN with some heavy buffering you may not see much difference. In fact, I would say you would be wasting a ton of money on 15k disks.
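A crude way to see why the write-back buffer masks spindle speed: if most writes are acknowledged from the SSD layer, the effective latency is dominated by the SSD rather than the disks behind it. The hit rate and latencies below are illustrative assumptions, not measurements from my array:

```python
# Crude effective-latency model for a write-back SSD buffer sitting in front
# of slow spindles. Latencies and hit rate are illustrative assumptions.
ssd_latency_ms = 0.2           # write acknowledged from the SSD buffer
hdd_latency_ms = {"5400 RPM": 13.0, "10k": 7.5, "15k": 5.5}   # rough random-IO service times
buffer_hit_rate = 0.95         # share of writes absorbed by the buffer

for disk, lat in hdd_latency_ms.items():
    effective = buffer_hit_rate * ssd_latency_ms + (1 - buffer_hit_rate) * lat
    print(f"Behind {disk:8s} spindles: effective write latency ~{effective:.2f} ms")
```

Every row lands well under a millisecond or so, which is why the spindle speed behind the buffer barely shows up in day-to-day latency.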