I keep seeing articles describing the RAID IOPS write penalty for RAID 1 (and RAID 10) as 2. RAID 0 would have a penalty of 1, of course, since every write is simply written to disk. RAID 1 is described as "requiring two writes", thus a penalty of 2.
But shouldn't it be 1, since data is written simultaneously?
From the viewpoint of the application or server using the disk, a RAID 1 array should appear as a single unit that writes to both disks simultaneously. One disk may lag slightly behind the other, but an actual hardware RAID controller should be able to start both writes at the same time and report the operation as complete once the slower disk has finished, so the latency should be only marginally higher than in RAID 0, if at all. So the IOPS penalty should be 1 for RAID 1, or 1.2 at the most.
I understand there are two write operations, so there are two "IOPS", but they are internal to the RAID controller.
Am I missing something here?
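To illustrate what I mean, here's a toy model (just Python threads with made-up latencies, not real disk I/O): both writes are issued at once, and the operation completes when the slower one finishes, so the latency is max(d1, d2) rather than d1 + d2.

```python
# Toy model of my assumption: the controller issues both mirror writes
# at once and reports completion when the slower disk finishes.
# The latencies below are made-up numbers, not measurements.
import time
from concurrent.futures import ThreadPoolExecutor

def write_to_disk(latency_s: float) -> float:
    """Pretend to write one block to a disk that takes latency_s seconds."""
    time.sleep(latency_s)
    return latency_s

disk_latencies = (0.005, 0.006)  # assume disk B is slightly slower

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    # Issue both mirror writes "simultaneously".
    list(pool.map(write_to_disk, disk_latencies))
elapsed = time.perf_counter() - start

print(f"slowest disk: {max(disk_latencies) * 1000:.1f} ms, "
      f"mirrored write took: {elapsed * 1000:.1f} ms")
```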
If RAID 1 were just hotwiring a cable, the performance impact would be nil (a factor of 1.0), but RAID 1 mirroring is more than just hotwiring a cable - actual work needs to be done to write the data to two drives and handle the result of that write from each drive.
That extra work is the factor they're talking about in the performance impact. Whether the I/O happens somewhere in the OS (software RAID) or in a dedicated co-processor/controller (hardware RAID), two writes still need to be issued for every piece of data, and the results of those writes (success, failure, or on_fire) need to be "handled".
In the worst case you're likely to encounter (software RAID-1 implemented in the OS) that means the kernel is doing two writes, and having two conversations with the disk controller.
That's a write penalty of 2x since we're doing twice as much work almost all the way through the stack.
(Really it's probably closer to 1.9 - after all we're not issuing two write() calls to the filesystem - but let's just round it off for the sake of pessimism.)
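To make the "two writes, two conversations" point concrete, here's a minimal sketch of what software mirroring boils down to - plain Python standing in for the kernel, with made-up file paths. Every logical write turns into one write plus one flush/acknowledgement per mirror member.

```python
# Minimal sketch of software RAID-1's write path: one logical write
# becomes one write + one flush/ack per mirror member, and success is
# only reported once every member has completed.
# The file paths are illustrative stand-ins for real block devices.
import os

def mirrored_write(paths, offset, data):
    """Write the same block to every mirror member; fail if any member fails."""
    for path in paths:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            os.pwrite(fd, data, offset)   # one write per mirror -> 2x the I/O
            os.fsync(fd)                  # and one flush/ack per mirror
        finally:
            os.close(fd)

mirrored_write(["/tmp/mirror_a.img", "/tmp/mirror_b.img"], 0, b"x" * 4096)
```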
In the best case (hardware RAID 1, implemented with a dedicated controller) the kernel is having one conversation with the controller, but the controller is still having 2 conversations (one with each disk) as it needs to ensure both drives receive the command, write the data out, and acknowledge that the data was written (or handle any error conditions the drives report).
That's probably about a 1.2x penalty for the controller's extra work as you surmised in your question - you're just saving yourself the extra in-kernel work (which is far more expensive than what the controller is doing).
Now, because we're sysadmins and we're paid to be a pessimistic lot, we're obviously going to take the worst-case performance, just like when we rounded up the performance factor for software RAID - so if anyone asks, we're going to tell them there's a 2x write penalty, even for their fancy hardware controller, and let them be happy when the system performs with only a 1.5x penalty on average :-)
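If you want to put numbers on that pessimism, the usual back-of-the-envelope sizing looks like this (the per-disk IOPS and read/write mix below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope sizing using the write penalty: every front-end
# write costs `penalty` back-end I/Os, reads cost one each.
def usable_iops(raw_iops: float, write_fraction: float, write_penalty: float) -> float:
    """Front-end IOPS a mirror can deliver given its raw back-end IOPS."""
    return raw_iops / ((1 - write_fraction) + write_fraction * write_penalty)

raw = 2 * 150          # two disks at ~150 IOPS each (assumed)
mix = 0.30             # 30% writes (assumed)

print(round(usable_iops(raw, mix, 2.0)))   # pessimistic 2x penalty: ~231 IOPS
print(round(usable_iops(raw, mix, 1.5)))   # the pleasant surprise:  ~261 IOPS
```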
Each write only gets half of the array's total performance, though. In a two-disk RAID 0, each disk only has to write half of what was sent to the RAID controller; in RAID 1, each disk has to write all of it.
This puts RAID 0 in the ballpark of twice as fast as RAID 1 for writes (on a 2-disk RAID group), while they're equal in theoretical read speed.
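As rough arithmetic (assuming ~150 MB/s per disk, a purely illustrative figure):

```python
# Rough throughput comparison for a 2-disk group; 150 MB/s per disk is
# an assumed number for illustration, not a benchmark result.
disk_mb_s = 150
n_disks = 2

raid0_write = n_disks * disk_mb_s   # striping: each disk writes half the data
raid1_write = 1 * disk_mb_s         # mirroring: each disk must write all of it
array_read  = n_disks * disk_mb_s   # both levels can read from both disks

print(f"RAID 0 write ~{raid0_write} MB/s, RAID 1 write ~{raid1_write} MB/s, "
      f"reads ~{array_read} MB/s either way")
```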