I'm reconfiguring an ESXi server, and I couldn't find any useful data on the optimal stripe size for a RAID 10 array (4 disks) given ESXi's default 1 MB block size (and I intend to keep that default).
Am I right in thinking that with a 512 KB stripe size, a single block read will address all disks optimally (the 1 MB block being split between the two RAID 1 pairs), or is my understanding of filesystem block size and RAID striping completely wrong?
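To show my reasoning, here is a toy model of how I picture a single request being striped across the two mirror pairs (this assumes the request starts on a stripe boundary, and it may well be where I'm going wrong):

```python
# Toy model: how one request is divided across a 4-disk RAID 10
# (two RAID 1 pairs striped together). Illustrative only.

def chunks_per_pair(io_bytes: int, stripe_kb: int, pairs: int = 2):
    """Count how many stripe-sized chunks of one request land on
    each mirror pair, assuming the request is stripe-aligned."""
    stripe = stripe_kb * 1024
    nchunks = -(-io_bytes // stripe)  # ceiling division
    counts = [0] * pairs
    for i in range(nchunks):
        counts[i % pairs] += 1
    return counts

# A 1 MB request with a 512 KB stripe: one chunk per pair, so both
# pairs would work in parallel.
print(chunks_per_pair(1024 * 1024, 512))  # [1, 1]
# With a 64 KB stripe the same request becomes 16 chunks, 8 per pair.
print(chunks_per_pair(1024 * 1024, 64))   # [8, 8]
```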
There is a general misconception about VMFS here: reads and writes from within the guest are sent directly to the SAN/SCSI block device, so VMFS (and its block size) has no real involvement. There is a sub-block allocation mechanism at play that works in 64 KB chunks regardless of the VMFS block size, but that only affects allocation; AFAIK the block size is irrelevant to the read/write path itself.
What you should do is optimise your stripe size as you would for the target OS/application within the guest: larger stripes for large sequential IO, smaller stripes for more fragmented, smaller IO. Pay more attention to partition alignment (within the guest, and for the VMFS volumes themselves), as any benefit you gain from stripe sizing will be undone if heavily used partitions are not aligned.
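As a quick illustration of the alignment point, a partition is stripe-aligned when its starting byte offset is a multiple of the stripe size. A minimal sketch (the sector numbers below are just the classic examples, not from any real host):

```python
# Sketch: check whether a partition's start offset is aligned to the
# RAID stripe size. Example values only.

SECTOR_BYTES = 512

def is_aligned(start_sector: int, stripe_kb: int) -> bool:
    """True when the partition's byte offset is a multiple of the
    stripe size."""
    return (start_sector * SECTOR_BYTES) % (stripe_kb * 1024) == 0

# Classic MS-DOS default: partition starts at sector 63 -> misaligned,
# so every stripe-sized IO straddles two stripes.
print(is_aligned(63, 64))     # False
# Modern default: sector 2048 (a 1 MiB offset) -> aligned for any
# power-of-two stripe size up to 1 MiB.
print(is_aligned(2048, 64))   # True
print(is_aligned(2048, 512))  # True
```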
There are some good pointers in this VIOPS article; it has a few flaws, but its explanation and advice are reasonably good.
I think it's more important to keep the filesystem block size and the RAID stripe size in sync than to worry about choosing 512 KB versus 1 MB as the stripe size. Generally, larger stripes work better for large files and smaller stripes work better for small files, so if you expect huge files you might want the bigger stripe size. I would go for the smaller one, but you know your workload better than I do.