We have two machines with a multipath SAS controller, each with 12 physical disks attached.
While looking into some I/O speed oddness, we noticed that on one machine, where /dev/mpath/mpath*p1 maps to dm-13 through dm-24 in various ways, dm-13 and dm-14 have the "cfq" I/O scheduler set. The other dm devices in that range show "none", and the scheduler apparently cannot be changed.
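For reference, this is roughly how we are enumerating the schedulers (a minimal sysfs loop; the dm-13 through dm-24 range is specific to this box):

for d in /sys/block/dm-{13..24}/queue/scheduler; do
    echo "$d: $(cat "$d")"    # the active scheduler is shown in [brackets]
done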
I believe the other devices are presented as a different set of dm-* mappings, and those DO have cfq set.
On the other machine, none of the dm devices backing /dev/mpath/mpath*p1 have a scheduler set at all.
I'm sort of out of my element here, but since iostat DOES show that some requests are being merged on dm-13 and dm-14 of the first box, and on none of the other devices on either box, I suspect we are paying some price for this.
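For what it's worth, the merge counters I am looking at come from something like this (rrqm/s and wrqm/s are read/write requests merged per second; the device names are from the first box):

iostat -dx dm-13 dm-14 dm-15 5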
Am I digging in the wrong hole, or is this a real issue? If so, how can I fix it? Running

echo cfq > /sys/block/dm-15/queue/scheduler

has no effect when "none" is the only listed option.
I have found the answer to my own question.
Our setup is perhaps slightly unusual: /dev/mapper/mpatha is the whole disk and /dev/mapper/mpathap1 is the first partition on that disk.
Since we built the software RAID array from the /dev/mapper/mpath?p1 devices, those mappings never have a scheduler: they ultimately defer to the underlying whole-disk device, which is the /dev/mapper/mpatha device.
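Two quick ways to see that stacking (plain device-mapper/util-linux tools; the device names are from our setup):

lsblk /dev/mapper/mpatha              # shows mpathap1 nested under mpatha
dmsetup deps /dev/mapper/mpathap1     # lists the device mpathap1 maps onto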
All our /dev/mapper/mpath? devices have a scheduler (which I have now changed to 'deadline'), and none of the /dev/mapper/mpath?p1 devices do. This is also identical to how LVM works: the underlying disks have a scheduler, but the individual logical volume mappings do not.
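For the record, this is a sketch of how I changed the elevator on every whole-disk device, assuming the /dev/mapper/mpath? entries are symlinks to their dm-N nodes (if they are plain device nodes on your box, dmsetup info -c gives the name-to-dm mapping instead):

for d in /dev/mapper/mpath?; do
    dm=$(basename "$(readlink -f "$d")")          # resolve e.g. mpatha to its dm-N node
    echo deadline > "/sys/block/$dm/queue/scheduler"
done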
If this is Red Hat or CentOS, use the tuned-adm utility to switch to the "enterprise-storage" profile:

tuned-adm profile enterprise-storage
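You can confirm the switch took effect with tuned-adm's own subcommands (no extra tooling assumed):

tuned-adm active    # reports the currently active profile
tuned-adm list      # lists the profiles available on the system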
See also: Understanding RedHat's recommended tuned profiles