I have a test box (PowerEdge 2950) with a Perc 6/i and 4x 15.5k SAS drives attached (with 512 byte block sizes). These are in a single RAID 5 virtual disk with a 64KB chunk size.
I am creating a single test partition that spans the whole drive. Should it be aligned to the 64KB chunk mark, or the 512 byte block size? If the latter, the partition could start at 2048 bytes into the single virtual disk, meaning it will begin at the 2nd free block on the first drive (I assume)?
Also, I will add another two drives and recreate the RAID virtual disk at a later date for more testing; should the partition then be created at 6x512 bytes, i.e. starting at 3072 bytes?
I have read a couple of similar questions on this, but I couldn't see from those how the chunk size of the RAID volume relates to partition alignment and the drive block size.
If you use a starting sector of 2048 (512 byte) sectors, then your partition will start 1MB into the drive. This value is used by default by most newer installers. This number is evenly divisible by 64k, and by most other common chunk/block sizes.
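That divisibility is easy to verify; a minimal sketch of the arithmetic (the extra sizes beyond 64 KiB are just common chunk/block sizes, added for illustration):

```python
# A 2048-sector start with 512-byte sectors puts the partition 1 MiB in
offset = 2048 * 512
print(offset)  # 1048576 bytes = 1 MiB

# Evenly divisible by the 64 KiB chunk size and other common sizes
for size in (4 * 1024, 64 * 1024, 128 * 1024, 256 * 1024):
    assert offset % size == 0
```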
If you are partitioning with fdisk, make sure to pass the -u flag so it reports values in 512 byte sectors instead of cylinders. Since you are using ext* you can use this calculator to determine the stride and stripe width for the filesystem. It shows that you would want to create your filesystem with these options:
mkfs.ext3 -b 4096 -E stride=16,stripe-width=48
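Those stride and stripe-width values follow directly from the array geometry; a quick sketch of the arithmetic, assuming the 4 KiB filesystem block from -b 4096, the 64 KiB chunk, and 3 data-bearing disks:

```python
chunk_size = 64 * 1024   # RAID chunk size in bytes
block_size = 4 * 1024    # ext3 filesystem block size (-b 4096)
data_disks = 3           # 4-disk RAID5 = 3 data-bearing disks

stride = chunk_size // block_size    # filesystem blocks per RAID chunk
stripe_width = stride * data_disks   # filesystem blocks per full stripe

print(f"-E stride={stride},stripe-width={stripe_width}")
# → -E stride=16,stripe-width=48
```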
You might want to try just creating the filesystem without passing options and seeing what mkfs detects and uses (check with tune2fs -l /dev/sdnn). These days it seems to do a pretty good job of automatically detecting the size/width.

Your math is wrong. In a 4 disk RAID5 array there are (simplistically) 3 data disks and a parity disk, which is why, if you have 4 80GB drives, you get 3*80 or 240GB of usable space on the RAID array. So, by your assumptions, starting a partition at 2048 bytes into the drive would start on the 2nd block of the 2nd drive.
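The capacity arithmetic is simply (number of drives − 1) × drive size; a quick sketch, including the questioner's planned 6-drive rebuild:

```python
def raid5_usable_gb(drives, drive_size_gb):
    # RAID5 dedicates one disk's worth of capacity to parity
    return (drives - 1) * drive_size_gb

print(raid5_usable_gb(4, 80))  # → 240
print(raid5_usable_gb(6, 80))  # → 400
```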
But, in fact, your premise is wrong anyway. If you've ever watched the disk activity lights on a RAID5 array, you'd have seen that they all flash together, except when doing a rebuild. In other words, the RAID5 controller actually caches the disk reads & writes and executes them in parallel across all the drives (obviously, during a rebuild, all but one of the drives operate together while the rebuilding drive is usually on solid). This is so it can guarantee consistency.
Of course, it's reading and writing 64Kb chunks, so if you started your partition at the 192Kb boundary, you might just see a fractional improvement when accessing files right at the start of the partition. But, assuming this disk isn't going to have a few very large files (i.e. sized in multiples of 192Kb) being read sequentially, in normal operation the heads would be moving all over the disk(s), reading/writing files allocated in 4Kb chunks, which would swamp any gain from the alignment of the partition.
In conclusion, since the Perc 6/i is a hardware RAID controller, I'd just let the OS partition the drive as it recommends. The alignment of the partition is not going to have a noticeable effect on disk/file access speed.
Your partitions should be aligned to your stripe width (chunk size * number of data bearing disks). You should be aware, however, that this barely scratches the surface of alignment optimisation: everything from the RAID chunk size, to the file system metadata, to the application I/O size needs to be aligned for optimal performance and to ensure there is no unnecessary read/write amplification. I wrote an article on this subject of optimising file system alignment, which you may find useful.
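For the array in the question, the stripe width and a stripe-aligned starting sector work out as follows; a sketch using the geometry from the question (the choice of 16 stripes for the example start is arbitrary):

```python
chunk = 64 * 1024    # RAID chunk size in bytes
data_disks = 3       # 4-disk RAID5 = 3 data-bearing disks
sector = 512         # drive block size in bytes

stripe_width_bytes = chunk * data_disks            # 196608 = 192 KiB
sectors_per_stripe = stripe_width_bytes // sector  # 384 sectors

# A stripe-aligned partition start must be a multiple of 384 sectors;
# e.g. 16 full stripes in:
start = sectors_per_stripe * 16
assert (start * sector) % stripe_width_bytes == 0
print(stripe_width_bytes, sectors_per_stripe, start)
```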