For reasons too complex to get into here, I'm using a 3-disk RAID1E on some Linux systems. See here for more info on that.
So my understanding is that for optimizing filesystem layout on this volume, you use the following calculation:
    chunk-size = 64kB
    block-size = 4kB
    stride = chunk / block = 16
    stripe-width = stride * (numdisks / 2) = 16 * 1.5 = 24
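For reference, the command producing the warning looks something like this (the device name is just a placeholder):

    mkfs.ext3 -b 4096 -E stride=16,stripe-width=24 /dev/md0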
However, when I use that calculation to set the stripe-width, mkfs.ext3 warns that it is not an even multiple of the stride.
So my question is, am I doing it right? Should I be treating it like a standard four-disk RAID10, since the stripes are of the same size?
Update: it's not a degraded array, it's a fully supported configuration. Read the link from the first paragraph.
Setting your stripe width higher than 64kB will be suboptimal.
Any write larger than 64kB will span at least two chunks, and because each chunk is mirrored on two of the three disks, a two-chunk write results in four disk writes: one each to two of the disks, and two to the third.
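To see where those four writes go, here is a sketch of the usual RAID1E rotated-mirror layout (A, B, C, ... are successive 64kB chunks, each stored on two disks):

    disk1   disk2   disk3
      A       A       B
      B       C       C
      D       D       E
      E       F       F

A write covering chunks A and B hits disk2 and disk3 once each, but disk1 twice.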
Just set your stripe width to 64kB.
I ran some experiments using XFS instead of ext3 on various sizes and levels of MD RAID. It seems like the following formula applies across the board:

    stripe-width = stride * number-of-slices

where number-of-slices is the total number of disks minus the parity count: parity is zero for RAID0/10/1E, one for RAID5, and two for RAID6.
So in the case of my original question, stripe-width should be set to 48 (64kB chunk, 4kB block, 3 slices, zero parity). When I use these settings, mkfs.ext3 no longer warns that the stripe-width is not an even multiple of the stride.
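With those numbers, the invocation becomes something like (again, the device name is a placeholder):

    mkfs.ext3 -b 4096 -E stride=16,stripe-width=48 /dev/md0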
You are starting your optimisation from the wrong place. You need to start from the I/O size you are optimising for (e.g. the block size your application does its reads and writes in), and optimise the whole stack for that. I wrote an article on this exact subject that explains why storage stack and file system alignment is important, which you may find helpful.
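As a rough illustration of working top-down (the device names here are assumptions, not from the question): if your application issues large sequential writes, you might choose the chunk size to match its typical I/O size when creating the array, then derive stride and stripe-width from that choice:

    # On Linux, a 3-disk RAID1E is typically created as MD RAID10
    # with two "near" copies; --chunk is in kibibytes.
    mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=3 \
          --chunk=64 /dev/sda1 /dev/sdb1 /dev/sdc1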