I know how most of the various RAIDs work. But I stumbled on the recommended raid10,f2 mode while researching Linux software RAID. I don't really understand how it works on 2 or 3 disks. Could someone explain it to me? Or point me to a really good article that explains it?
Actually I think Wikipedia explains it better than the actual docs. Here's the text from the article.
The Linux kernel software RAID driver (called md, for "multiple device") can be used to build a classic RAID 1+0 array, but also (since version 2.6.9) as a single level with some interesting extensions. The standard "near" layout, where each chunk is repeated n times in a k-way stripe array, is equivalent to the standard RAID-10 arrangement, but it does not require that n divide k. For example an n2 layout on 2, 3 and 4 drives would look like:
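The layout diagrams the article refers to aren't reproduced here, but the "near" placement rule is simple enough to sketch in code. This is a toy model of the rule as described above, not the md driver's actual code; `near_layout` is a made-up helper name:

```python
def near_layout(k, n, chunks):
    """'near' layout: the n copies of each chunk occupy consecutive
    slots of a k-way stripe, and slots fill the drives row by row."""
    total_slots = chunks * n
    grid = [["--"] * k for _ in range((total_slots + k - 1) // k)]
    for s in range(total_slots):
        grid[s // k][s % k] = f"A{s // n + 1}"  # slot s holds chunk s // n
    return grid

# n2 layout on 3 drives, first 3 chunks (one column per drive):
for row in near_layout(3, 2, 3):
    print(" ".join(row))
# A1 A1 A2
# A2 A3 A3
```

With k=4 the same rule yields `A1 A1 A2 A2` per row, i.e. the classic RAID-1+0 arrangement, and with k=2 it degenerates to RAID 1, matching the examples in the next paragraph.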
The 4-drive example is identical to a standard RAID-1+0 array, while the 3-drive example is a software implementation of RAID-1E. The 2-drive example is equivalent to RAID 1. The driver also supports a "far" layout where all the drives are divided into f sections. All the chunks are repeated in each section but offset by one device. For example, f2 layouts on 2- and 3-drive arrays would look like:
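Again the diagrams are missing, but the "far" rule can be sketched the same way: section 0 is a plain RAID-0 stripe, and each later section repeats the chunks rotated by one more drive. A toy model of the rule as described, with an invented `far_layout` helper:

```python
def far_layout(k, f, chunks):
    """'far' layout: each drive is split into f sections; section 0 is
    a plain RAID-0 stripe, and each later section repeats the chunks
    rotated right by one more drive."""
    rows = (chunks + k - 1) // k  # rows per section
    sections = []
    for j in range(f):
        grid = [["--"] * k for _ in range(rows)]
        for c in range(chunks):
            grid[c // k][(c + j) % k] = f"A{c + 1}"
        sections.append(grid)
    return sections

# f2 layout on 2 drives, 4 chunks ('..' separates the two sections):
for sec in far_layout(2, 2, 4):
    for row in sec:
        print(" ".join(row))
    print("..")
# A1 A2
# A3 A4
# ..
# A2 A1
# A4 A3
# ..
```

Note how the first section alone is exactly a RAID-0 stripe, which is where the sequential-read striping benefit discussed below comes from.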
This is designed for the striping performance of a mirrored array; sequential reads can be striped, as in RAID-0, random reads are somewhat faster (maybe 10-20% due to using the faster outer sectors of the disks, and smaller average seek times), and sequential and random writes offer about the same performance as other mirrored RAIDs. The layout performs well for systems where reads are more frequent than writes, which is a very common situation on many systems. The first 1/f of each drive is a standard RAID-0 array. Thus you can get striping performance on a mirrored set of only 2 drives. The near and far options can both be used at the same time. The chunks in each section are offset by n device(s). For example, an n2 f2 layout stores 2×2 = 4 copies of each sector, so it requires at least 4 drives:
As of Linux 2.6.18 the driver also supports an offset layout where each stripe is repeated o times. For example, o2 layouts on 2- and 3-drive arrays are laid out as:
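The offset rule can be sketched in the same toy style (an illustrative `offset_layout` helper, not driver code): each full stripe of k chunks is written o times in a row, each repeat rotated right by one more drive.

```python
def offset_layout(k, o, chunks):
    """'offset' layout: each k-chunk stripe is written o times in
    consecutive rows, each repeat rotated right by one more drive.
    Assumes chunks is a multiple of k, for simplicity."""
    grid = []
    for s in range(chunks // k):  # stripe number
        base = [f"A{s * k + i + 1}" for i in range(k)]
        for j in range(o):
            grid.append(base[-j:] + base[:-j] if j else base[:])
    return grid

# o2 layout on 3 drives, 6 chunks:
for row in offset_layout(3, 2, 6):
    print(" ".join(row))
# A1 A2 A3
# A3 A1 A2
# A4 A5 A6
# A6 A4 A5
```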
Note: k is the number of drives, n#, f# and o# are parameters in the mdadm --layout option. Linux can also create other standard RAID configurations using the md driver (0, 1, 4, 5, 6).
From what I read, an f2 RAID10 array keeps at least 2 copies of each block, and they are stored far away from each other.
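For reference, an array like that is typically created with mdadm's `--layout` option. A sketch of the kind of command involved; the device names here are placeholders, and you would only run this against real spare devices:

```shell
# Create a 2-disk RAID10 array with the "far 2" layout
# (/dev/md0, /dev/sda1, /dev/sdb1 are example device names)
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
      /dev/sda1 /dev/sdb1

# Inspect the resulting array and its layout
mdadm --detail /dev/md0
```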
Here are the relevant sections from the man pages.
mdadm(8)
md(4)
That's interesting and well explained. However, plain RAID1 also has the feature, at least on Linux software RAID, of being able to sustain multiple readers in parallel with very good performance.
It looks like RAID10, in its near layout, is more suited to this behaviour (accelerating not single-threaded I/O like RAID0 does, but multi-threaded I/O), with n2 f2 on 4 disks being similar to RAID1 with 4 disks.
The n2 layout with 4 disks will do both: double the read performance for a single thread, and quadruple it for two threads (if the Linux md RAID10 scheduler is well implemented, one thread should read from one pair and the other thread from the other pair).
It all depends on what you need! I haven't done benchmarks yet.
First of all, mdadm RAID10 is a special mode: it is not R0(R1,R1,R1..). f2 means 2 far copies for redundancy.
Both answers are good; I want to make an addition with some benchmark results, which I could not fit in the comments section...
I tested with an Intel X79 C200 series chipset SATA controller (2x 6 Gbps + 4x 3 Gbps ports), 64 GB RAM, and a Xeon 2680.
Using fio benchmark line:
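The actual fio command line did not survive in this post. As a stand-in, a sequential-read test of the general sort described might look like the following; every parameter here is an assumption for illustration, not the original poster's values:

```shell
# Hypothetical fio read test against the md array
# (filename, sizes, depths etc. are illustrative, not the original command)
fio --name=readtest --filename=/dev/md0 --rw=read --bs=1M --size=4G \
    --ioengine=libaio --iodepth=16 --direct=1 \
    --runtime=60 --time_based --numjobs=4 --group_reporting
```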
Replace read with write and you'll have a write test...