I have a server with a 3.5TB MySQL database using innodb_file_per_table. Storage is 24 2.5" 10K HDDs in six 4-disk RAID 10 groups, each attached as a 1TB datastore via VMware ESXi, and all six are LVM-striped into a single 6TB ext3 volume.
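For reference, this is roughly how I'd confirm the stripe layout (just a sketch of the commands, output not included; vg1/lv1 are my actual names):

sudo lvs -o +stripes,stripe_size vg1/lv1   # should show 6 stripes across the six PVs
sudo lvdisplay -m /dev/vg1/lv1             # per-segment mapping onto sdb..sdg
sudo dmsetup table vg1-lv1                 # raw device-mapper striped target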
Right now, I'm doing a
sudo e2fsck -f /dev/vg1/lv1
before a
sudo resize2fs /dev/vg1/lv1
and here are the results of iostat -x 5:
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 33.40 0.00 267.20 0.00 8.00 0.17 5.15 5.15 17.20
sdc 0.00 0.00 36.20 0.00 289.60 0.00 8.00 0.14 3.76 3.76 13.60
sdd 0.00 0.00 33.20 0.00 265.60 0.00 8.00 0.14 4.28 4.28 14.20
sde 0.00 0.00 35.80 0.00 286.40 0.00 8.00 0.18 5.14 5.14 18.40
sdf 0.60 0.00 32.80 0.00 267.20 0.00 8.15 0.18 5.37 5.37 17.60
sdg 0.00 0.00 35.60 0.00 284.80 0.00 8.00 0.19 5.22 5.22 18.60
dm-0 0.00 0.00 207.60 0.00 1660.80 0.00 8.00 1.00 4.80 4.80 99.60
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
I've looked through other posts on here and on Google about LVM performance, but none of them give me enough information to tell whether LVM could be bottlenecking the disks' I/O. What stands out above is that dm-0 appears to be maxed out at ~100 %util while the underlying disks are all in their teens.
Is there something I can do to fix this? Should I stripe twice so it's effectively RAID 1000 instead of my current RAID 100? Or does iostat just report %util misleadingly for LVM devices?
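One thing I'm considering trying in the meantime (a sketch, not something I've verified helps): comparing and raising read-ahead on the LV, since the reads are small (4 KB, per the avgrq-sz of 8 sectors above):

sudo blockdev --getra /dev/vg1/lv1         # current read-ahead in 512-byte sectors
sudo blockdev --getra /dev/sdb             # compare with one of the underlying disks
sudo blockdev --setra 16384 /dev/vg1/lv1   # try 8 MiB read-ahead on the LV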