We have a new Dell 2950 with PERC 6/e and 14 external SAS 15K 73GB drives. An Oracle 11g database job takes 3 hours to run with the drives set as hardware RAID 10 (striped across 7 mirrored pairs). The database size is about 26GB. The same job running on just two drives in RAID 1 takes only 1 hour. OS is Win 2008 R2.
Before we change the RAID level on the production box (with considerable downtime), does anyone know why we're seeing this odd result, and whether there's a better way to fix it?
ADDED INFO
The PERC 6/e should be on the latest firmware, and the cache battery checks out OK.
FINALLY, THE REAL STORY
After speaking with the DBA, my face is red. It turns out the "RAID 1" setup is actually seven two-drive RAID 1 volumes. The data tables and indices were deliberately assigned across those volumes to minimize contention. Apparently a good DBA can get more performance out of 14 drives than a RAID 10 controller striping blindly across them with no regard for file access patterns. Some SANs claim to migrate files intelligently to improve performance, but if there's a bake-off anytime soon, my money's on our DBA!
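For anyone curious what that placement looks like in practice, here's a rough Oracle sketch. The drive letters, tablespace names, and sizes are hypothetical, not our actual layout; each drive letter stands for one two-drive RAID 1 volume:

    -- Hot tables and their indexes live on separate mirrored pairs, so a
    -- full scan and an index probe never fight over the same disk heads.
    CREATE TABLESPACE orders_data
      DATAFILE 'E:\oradata\orders_data01.dbf' SIZE 4G;

    CREATE TABLESPACE orders_idx
      DATAFILE 'F:\oradata\orders_idx01.dbf' SIZE 2G;

    -- Redo logs get a pair to themselves so their writes stay sequential.
    ALTER DATABASE ADD LOGFILE ('G:\oralogs\redo04.log') SIZE 512M;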
I think user71281 is implying that your RAID controller (or driver) is messing up. However you go through the RAID setup on your controller (or driver), a RAID 10 array should never be slower than a simple RAID 1.
Your RAID solution has either allowed you to set up an extremely inefficient RAID 10 array, or you have uncovered a bug. Maybe performance improves with an 8th pair? Or when you reduce the setup to 4 pairs? That last option may mean you have to upgrade to 146GB disks.
But I'd check for firmware updates first, and check how much RAM is on the RAID card. It didn't switch off its caching function because of a dead BBU (battery backup unit), did it?
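If Dell's OpenManage Server Administrator happens to be installed (an assumption on my part; the post doesn't say), the battery and cache policy are quick to check from the command line. A sketch; the controller ID is a placeholder:

    rem List controllers, including the firmware revision.
    omreport storage controller
    rem Battery backup unit state (Ready / Degraded / Failed).
    omreport storage battery controller=0
    rem Virtual disk cache policy -- look for write-through where you
    rem expected write-back; that's the classic dead-BBU symptom.
    omreport storage vdisk controller=0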
Actually, in a case like this seven RAID 1 volumes can be faster than one RAID 10, and a RAID 0 would of course be faster than both. The reason is not parity (RAID 10 has no parity calculation; that overhead belongs to RAID 5/6) but exactly what your DBA stated: less disk contention and more spindle isolation, since each workload owns its own pair of heads.
The downside is a greater chance of running out of disk space on any one RAID 1 pair. The separation is great when you're running many unlike transactions; however, there is also a performance benefit to striping across multiple mirrored pairs, since many spindles then work together in parallel on a single stream.
One thing to consider: NTFS picks its default cluster size from the volume size, so a smaller volume gets smaller clusters. You may want to use DISKPART to set the cluster size explicitly, 8K for your logs and 64K for your data, and not mix logs and data on the same RAID 10. This is most likely how your DBA had it set up, and now you have combined both on the same RAID 10, causing more disk contention.
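A minimal DISKPART sketch of what I mean, run as "diskpart /s script.txt"; the disk number, drive letter, and label are placeholders:

    rem Align the partition and format with 64K clusters for datafiles.
    rem Use unit=8K instead on the volume that holds the redo logs.
    select disk 4
    create partition primary align=1024
    assign letter=E
    format fs=ntfs unit=64K label="ORA_DATA" quick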
For the record: the migration can be done online by using a temporary software RAID 1 mirror for the duration of the migration. That does mean you need one more set of storage. Nowadays one could borrow a bunch of SSDs and attach them as a large iSCSI volume to do the whole thing without downtime. Generally, when migrating storage, the right way is to add redundancy first and only then remove the old part. That makes the riskiest moments the ones before you start and after you're done, because those are the only times you're running on nothing more than the intended redundancy.
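On Windows the temporary mirror can be done with dynamic disks in DISKPART. A rough sketch, assuming the live volume sits on disk 2 and the borrowed storage shows up as disk 5 (both numbers are made up):

    rem Both disks must be dynamic before you can mirror.
    select disk 2
    convert dynamic
    select disk 5
    convert dynamic
    rem Mirror the data volume onto the borrowed disk.
    select volume 3
    add disk=5
    rem Wait for the resync to finish (check with "list volume"), then
    rem split the mirror. The half on the disk named here keeps the
    rem drive letter, so name the new copy to keep it live:
    select volume 3
    break disk=5

The stale copy left on disk 2 can then be deleted and the old array rebuilt; mirror back the same way once it's ready.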