Why would a heavily disk-intensive application run faster on a SAN than on a local physical disk? I would have expected the local disk to be slightly faster, but in fact the process ran 100 times faster when its work drive was set to a partition on the SAN.
Our guess is that the SAN is optimised out of the box to be fast, whereas the local disk's tuning settings are OS (Solaris) related and have not been touched, nor has the OS been patched.
At peak activity the disk was running at 100% utilisation and a single write was taking over 2 seconds to complete, as several processes were writing to the disk at the same time.
(FYI the application involved was Informatica PowerCenter)
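A quick way to get numbers like that independently of the application is to point a handful of concurrent writers at the directory in question and time each synchronous write. Below is a minimal Python sketch of that idea; the 1 MiB block size, 8 workers, and fsync-per-write are illustrative assumptions, not how PowerCenter actually writes, and the work directory is a placeholder you would point at the partition under test.

```python
import os
import time
import tempfile
from concurrent.futures import ThreadPoolExecutor

# Placeholder: point this at the local disk or SAN partition you want to test.
WORK_DIR = tempfile.mkdtemp()
BLOCK = b"x" * (1 << 20)           # 1 MiB per write

def writer(worker_id, writes=100):
    """Write `writes` blocks synchronously and return the worst single-write latency."""
    path = os.path.join(WORK_DIR, f"worker{worker_id}.dat")
    worst = 0.0
    with open(path, "wb") as f:
        for _ in range(writes):
            start = time.perf_counter()
            f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())   # force the block down to the device before timing stops
            worst = max(worst, time.perf_counter() - start)
    return worst

# Several writers hitting the same directory at once, like the concurrent processes described above.
with ThreadPoolExecutor(max_workers=8) as pool:
    worst_latencies = list(pool.map(writer, range(8)))

print("worst single-write latency per worker (s):", [round(w, 3) for w in worst_latencies])
```

Run it once against the local disk and once against the SAN partition and compare the worst-case latencies.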
I'm not at all surprised. SAN arrays typically have a LOT of disks involved. The limiting factor for disk I/O is the speed of the individual disk, and those speeds stack across spindles. 6 drives locally in a RAID10 will perform better than 2, and 80 drives in a SAN will perform better than 10 drives locally. There are variables of course, but that's how it's supposed to work.
Also, if the SAN has any SSDs involved, things get really zippy.
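To put rough numbers on the spindle argument, here is a hedged back-of-envelope calculation. It assumes roughly 180 random IOPS per 15k RPM drive and a write penalty of two for RAID10 mirroring; both figures are ballpark assumptions, not measurements of any particular array.

```python
# Back-of-envelope IOPS comparison: hypothetical numbers, real arrays vary widely.
IOPS_PER_15K_DRIVE = 180          # rough figure for one 15k RPM spindle

def raid10_iops(drives, read_fraction):
    """Approximate random IOPS for a RAID10 set.
    Reads can be served by either mirror side; each logical write costs
    two physical I/Os (one per mirror side)."""
    raw = drives * IOPS_PER_15K_DRIVE
    write_fraction = 1 - read_fraction
    return raw / (read_fraction + 2 * write_fraction)

for drives in (6, 10, 80):
    print(drives, "drives:", round(raid10_iops(drives, read_fraction=0.5)), "IOPS")
```

With a 50/50 read/write mix that works out to roughly 720 IOPS for 6 drives, 1,200 for 10, and 9,600 for 80, which is the kind of gap that turns a saturated local disk into a comfortable SAN workload.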
It's almost certainly due to caching. The DAS probably has minimal caching, whereas most enterprise SANs have multiple gigabytes of cache. I'd guess the app is saturating the DAS's cache, but not the SAN's.
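A toy model shows why cache size matters so much for a bursty writer. The throughput and cache figures below are invented purely for illustration, not taken from any real controller or array.

```python
# Toy illustration of write-back cache saturation for a bursty write workload.
def burst_write_time_s(burst_mb, cache_mb, cache_mb_per_s=1000, disk_mb_per_s=50):
    """Time to absorb a write burst: the cache soaks up what fits at near-memory
    speed, and everything beyond that waits on the spindles."""
    cached = min(burst_mb, cache_mb)
    spilled = burst_mb - cached
    return cached / cache_mb_per_s + spilled / disk_mb_per_s

print("DAS, 256 MB cache :", round(burst_write_time_s(4096, 256), 1), "s")
print("SAN, 8 GB cache   :", round(burst_write_time_s(4096, 8192), 1), "s")
```

With these made-up numbers a 4 GB burst takes about 77 seconds once the small DAS cache spills, but stays around 4 seconds when the whole burst fits in the SAN's cache.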
Conceptually it always feels like serving disk from a SAN should be slower than serving it locally. However, there are plenty of factors which can reverse this and make the SAN the much faster option: the number of spindles behind the LUN, the size of the array's cache, the type of drives involved, the RAID level, and how much other I/O is competing for the local disk.
All of these will affect your performance on both SAN and local disk.
It all comes down to how many spindles are available. The higher the number of spindles, the quicker it is to access any given piece of data. If you are heavily I/O intensive, particularly if you are running a database application, then you can quite easily bury local disk performance with a SAN solution, which can have a far higher number of disk sets for managing core data, indexes and so on.
With the local disk subsystem you are also likely sharing the read/write heads with other operations: reads and writes to swap, local OS and library file access, application access, and so on. While each of those is fast individually, the cumulative time spent moving the heads back and forth between those areas of the disk and your application's data can certainly hurt performance.
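Here's a crude simulation of that head-contention effect. The seek model and data placement are invented purely for illustration (seek cost proportional to the distance the head travels, application files in one region of the disk, swap/OS files in another); it only shows the shape of the problem, not real drive behaviour.

```python
import random

def total_seek_ms(requests, full_stroke_ms=8.0):
    """Sum a crude seek cost: proportional to how far the head moves per request.
    Request positions are normalised to [0, 1] across the platter."""
    head = 0.0
    total = 0.0
    for pos in requests:
        total += abs(pos - head) * full_stroke_ms
        head = pos
    return total

random.seed(1)
app_io = [random.uniform(0.0, 0.2) for _ in range(1000)]   # application files in one region
os_io  = [random.uniform(0.7, 1.0) for _ in range(1000)]   # swap/OS/library files elsewhere

# Dedicated spindle: the head stays within the application's region.
dedicated = total_seek_ms(app_io)

# Shared spindle: application and OS requests interleave, dragging the head back and forth.
mixed = [pos for pair in zip(app_io, os_io) for pos in pair]
shared = total_seek_ms(mixed)

print(f"dedicated spindle seek time: {dedicated:.0f} ms")
print(f"shared spindle seek time:    {shared:.0f} ms")
```

The interleaved case spends an order of magnitude more time seeking, which is exactly the overhead a SAN with many dedicated spindles (or the cache in front of them) hides from you.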