Related to this question: When building a storage area network, do storage administrators generally create the physical disk arrays based on I/O type (sequential versus random access) and provision LUNs from those arrays to hosts needing that particular type of I/O?
It depends™. For standard low-usage VMs, I generally toss most of them onto a few RAID 6 groups and let caching and tiering handle the bursts. Things like a heavily-hit Exchange DB server or SQL Server typically get dedicated LUN(s) designed for their needs.
It really does depend on your storage architecture, though.
I try to match the expected I/O demands to the architecture. For almost everything I've done, the SAN approach of "throw enough disks at it and total I/Ops will exceed all but specialist workloads" ends up working pretty well. If I've got 96 spindles in use, the total I/Ops at my disposal is enough to keep the SQL Server log files fed even with an Exchange server running on the same disks.
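To put rough numbers on the "enough disks" argument, here's a back-of-the-envelope sketch; the per-spindle IOPS figure, read/write mix, and RAID 6 write penalty are generic rule-of-thumb assumptions, not measurements from any particular array:

```python
# Back-of-the-envelope aggregate IOPS for a shelf of shared spindles.
# Per-spindle IOPS and the RAID write penalty are rule-of-thumb assumptions,
# not vendor specs - plug in your own measurements.

SPINDLES = 96
IOPS_PER_15K_SPINDLE = 180      # common rule of thumb for a 15K RPM disk
READ_FRACTION = 0.6             # assumed blended read/write mix
RAID6_WRITE_PENALTY = 6         # back-end I/Os per front-end write on RAID 6

raw_iops = SPINDLES * IOPS_PER_15K_SPINDLE

# Usable front-end IOPS once the RAID 6 write penalty is factored in.
effective_iops = raw_iops / (READ_FRACTION + (1 - READ_FRACTION) * RAID6_WRITE_PENALTY)

print(f"Raw back-end IOPS:     {raw_iops:,.0f}")
print(f"Usable front-end IOPS: {effective_iops:,.0f} at {READ_FRACTION:.0%} reads on RAID 6")
```

Even after the RAID 6 write penalty, that's thousands of usable front-end I/Ops to spread across the hosts, which is why shared spindles cover all but the specialist workloads.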
Where theory (I/O separation is best practice) runs smack into reality is in how some of these big SAN arrays are designed. While the sequential I/Ops of four 15K RPM disks is rather high, a lot of these arrays intentionally randomize block layout, so you're not going to get that performance even if you create a 4-disk Disk Group dedicated solely to one SQL Log volume. You'll probably still see some sequential benefit, since the array's block size is usually larger than the 4K used by the disks themselves, but it won't be the screaming demon you'd expect from a pure-sequential load.
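As a rough illustration of that effect, here's a toy model of a single 15K spindle; the seek, rotational latency, and transfer figures are generic assumptions, and a real array with caching will do better, but the shape of the curve is the point:

```python
# Toy model of why array-level block randomization blunts a sequential workload.
# Every chunk the array scatters pays a seek + rotational latency before it
# transfers; bigger chunks amortize that cost but never eliminate it.
# All figures are illustrative assumptions for one 15K RPM disk, not specs.

SEEK_MS = 3.5                 # assumed average seek time
ROTATIONAL_LATENCY_MS = 2.0   # half a revolution at 15,000 RPM
SEQ_TRANSFER_MB_S = 150.0     # assumed sustained sequential transfer rate

def scattered_throughput_mb_s(chunk_kb: float) -> float:
    """Effective MB/s when chunks of this size land at random on the disk."""
    transfer_ms = (chunk_kb / 1024) / SEQ_TRANSFER_MB_S * 1000
    per_chunk_ms = SEEK_MS + ROTATIONAL_LATENCY_MS + transfer_ms
    return (chunk_kb / 1024) / (per_chunk_ms / 1000)

for chunk_kb in (4, 64, 256, 1024):
    print(f"{chunk_kb:>5} KB chunks: {scattered_throughput_mb_s(chunk_kb):6.1f} MB/s")
print(f"pure sequential: {SEQ_TRANSFER_MB_S:6.1f} MB/s")
```

The larger the array's block size, the more of the sequential speed you claw back, but you never get all the way to the pure-sequential number.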
Where I/O separation does come into play is when I'm building dedicated storage for a database. That assumes the I/O needs of this one service are so great, or its I/O SLA is strict enough, that performance guarantees are a must. In that case I actually will create discrete disk groups for things like the Log, TempDb, and DB volumes. This kind of design doesn't come up very often; very rarely, in fact.
I'd love to say 'yes' to this, as it'd make me look super professional, but to be honest 99% of the time I just buy whatever type of array I need (going through a big 3Par phase at the moment, but I've been through most of them over the years), buy as many shelves as I have space for, fill them up with disks of the right quantity and speed, and then create LUNs that match the RAID level and performance requirements - though I generally don't take into account whether the I/O is random or sequential. I tend to assume everything but backups will be random I/O and base my planning on that; at least that way, if I DO get some caching or sequential benefits, that's just 'gravy'. I know this sounds a little lazy, but I look after a LOT of arrays, and unless you have a very fixed set of requirements there's always going to be some 'wiggle room' involved in loading out arrays - you can't ever get it right first time.
That said, for local DAS I do tend to be a lot more specific, as that's harder to change, but I generally don't use too much of that.
No, because most volumes on a SAN will have multiple types of I/O done to them. If there is a portion of your environment you can predict (like a database server that is 80% random read, or a backup server that is 90% sequential write), separating it would potentially benefit you if you (a) have no wide striping and are still putting volumes on specific RAID groups, and (b) have a controller obsolete enough that you gain any serious amount of performance by using RAID-10.
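To make point (b) concrete, here's a small sketch of the back-end disk load for those two profiles under RAID-10 versus RAID 6, before any write caching gets involved; the front-end IOPS figures and write fractions are made-up examples matching the profiles above:

```python
# Back-end disk I/O generated by the same front-end workload under different
# RAID levels, ignoring write cache. Workload numbers are hypothetical
# examples matching the profiles described above.

WRITE_PENALTY = {"RAID-10": 2, "RAID 6": 6}   # back-end I/Os per front-end write

def backend_iops(frontend_iops: int, write_fraction: float, raid: str) -> float:
    """Total back-end I/Os per second for a given front-end rate and write mix."""
    reads = frontend_iops * (1 - write_fraction)
    writes = frontend_iops * write_fraction * WRITE_PENALTY[raid]
    return reads + writes

workloads = {
    "DB server (80% random read)":    (5000, 0.20),
    "Backup target (90% seq. write)": (2000, 0.90),
}

for name, (iops, write_fraction) in workloads.items():
    print(name)
    for raid in WRITE_PENALTY:
        print(f"  {raid}: {backend_iops(iops, write_fraction, raid):,.0f} back-end IOPS")
```

The write-heavy profile is where RAID-10 pulls ahead on raw back-end load, but a modern controller's write cache and wide striping absorb most of that difference, which is why the separation rarely buys you much today.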
edit: having clicked through, separating database logs and data is a good idea for more reasons than RAID type. Even if you use the same RAID type, you want to separate them so the controller can use different cache optimization algorithms for each volume.
If you have pools, the benefits from separating these workloads out onto their own disk pool are usually not worth the performance cost. With a few exceptions, the best practice is to put all volumes for all OS types into a single large pool containing all the resources.