Does anyone know the steps to verify that an XFS filesystem on top of LVM and md RAID is properly aligned on an array of 4096-byte ("Advanced Format") sector disks?
Some references are:
http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.html
http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/
For context, this question concerns Stack Overflow's new NAS: http://blog.serverfault.com/post/798854017/the-theoretical-and-real-performance-of-raid-10
Verification is a tricky one. My first thought was to do a series of direct-IO 4KB reads from the media and watch the blinking lights: if every xth read causes two drives to flash, that's a sign of misalignment (the 4KB read just spanned a RAID stripe boundary). However, you're 3,000-odd miles away from the hardware, so that won't work for you.
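For reference, a single direct-IO 4KB read at an arbitrary offset can be issued with plain dd; the device name and offset here are just placeholders:

```
# one 4KB read, bypassing the page cache, at 4KB-block offset 1000
# /dev/md0 and the offset are placeholders for your setup
dd if=/dev/md0 of=/dev/null bs=4k count=1 skip=1000 iflag=direct
```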
I'm assuming your RAID stripe width is larger than the 4KB sector size. The test I thought of is a stride read/write test, where you read/write every xth 4KB sector. Vary the starting offset and you change where within the RAID stripe you're testing. If certain offsets show consistently worse performance, I'd take that as a sign that a 4KB operation at that offset is spanning a stripe boundary. That would tell you whether XFS is aligning properly on top of the RAID config.
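A quick-and-dirty sketch of that idea, assuming a 64KB stripe unit and using /dev/md0 as a placeholder (adjust both to your array). dd's per-invocation overhead is constant across offsets, so only the relative timings matter:

```
#!/bin/bash
# Hypothetical stride-read test: for each 4KB offset within an
# assumed 64KB stripe unit, time direct-IO 4KB reads at every
# STRIDEth block. A consistently slow offset suggests reads there
# span a boundary.
DEV=/dev/md0     # array under test (placeholder)
STRIDE=16        # 4KB blocks per stripe unit (64KB chunk assumed)
READS=500        # strided reads per offset

for OFFSET in $(seq 0 $((STRIDE - 1))); do
    START=$(date +%s.%N)
    for i in $(seq 0 $((READS - 1))); do
        dd if="$DEV" of=/dev/null bs=4k count=1 \
           skip=$((OFFSET + i * STRIDE)) iflag=direct 2>/dev/null
    done
    END=$(date +%s.%N)
    echo "offset $OFFSET: $(echo "$END - $START" | bc) s"
done
```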
Verifying that the RAID stripes themselves are aligned correctly can be done with the same kind of stride test while keeping an eye on the iostat values for the individual drives. If you get the stride size right, you should only see activity on two drives at any given time; if the same test shows activity on all four drives, you've got proof that something is misaligned.
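While the stride test runs, watch the per-drive numbers in another terminal; the member-drive names here are assumptions:

```
# extended per-device stats at 1-second intervals, md members only
iostat -x 1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```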
I know for sure that the IOZONE storage benchmark can do a stride test, and I'd be very surprised if the more common IOMETER couldn't. The ability to use direct I/O to bypass caching and write-combining is critical to these kinds of tests, though.
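As a sketch, an IOZONE stride-read run with direct I/O might look like this; the file size, record size, stride, and path are assumptions to tune for your array:

```
# -I        use O_DIRECT, bypassing the page cache
# -i 0 -i 5 run the write test (creates the file) plus stride read
# -r 4k     4KB records; -j 16 = stride of 16 records (64KB chunk assumed)
# -s 1g     test file big enough to defeat caching
# -f        a path on the XFS filesystem under test
iozone -I -i 0 -i 5 -r 4k -j 16 -s 1g -f /mnt/nas/iozone.tmp
```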
It's a personal thing, but I think this alignment business is overplayed. I dare say there are low-single-digit performance benefits to be had if you sweat the last details, but given the size of modern caches and the complexity of the disk-to-memory chain, I wouldn't sweat it too much.
But that's me ;)