RAID 1 and RAID 5 (and their nested variants 10 and 50) achieve redundancy through mirroring and parity, respectively. This lets a RAID array keep serving data when a sector on a disk (or a whole disk) becomes unreadable. RAID 6 (or 60) adds a second, independent parity block to tolerate double faults.
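To make the parity mechanism concrete, here is a minimal sketch (Python, with a made-up two-byte stripe on a hypothetical 3-disk RAID-5 set) of how a lost block is rebuilt by XOR-ing the survivors:

```python
from functools import reduce

def xor_blocks(*blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Hypothetical stripe: two data blocks plus one parity block.
d0 = bytes([0x12, 0x34])
d1 = bytes([0xAB, 0xCD])
parity = xor_blocks(d0, d1)

# If the disk holding d1 fails, XOR-ing the surviving data
# block with the parity block reconstructs the lost data.
recovered = xor_blocks(d0, parity)
assert recovered == d1
```

This is why a single unreadable block per stripe is recoverable: XOR parity always lets you rebuild exactly one missing block from the rest.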
But how can a RAID array deal with data that is not unreadable outright, but simply inconsistent?
If some error occurs such that, e.g., data in a stripe is changed on one disk but the change is not propagated to the other(s), the whole stripe becomes inconsistent. If in a mirrored set one disk says "this bit is 0" while the other says "this bit is 1", how can a RAID controller know which one is right? The same reasoning applies to a RAID-5 stripe, with the added complexity that you can't easily tell which sector in the stripe is actually wrong. And does RAID 6 mitigate this issue with its double checks, or can it still have trouble recovering from data corruption when the data is readable but wrong somewhere, especially since RAID 6 arrays tend to have many disks?
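To illustrate why single parity can detect but not locate this kind of corruption, here is a toy sketch (Python, hypothetical single-byte blocks): flipping the same bit in any one of the three blocks produces the exact same parity syndrome, so the controller can see that the stripe is bad but not which block to distrust:

```python
def parity_ok(blocks):
    """A RAID-5 stripe is consistent iff all its blocks XOR to zero."""
    acc = 0
    for block in blocks:
        acc ^= block
    return acc == 0

# Hypothetical stripe of single-byte blocks: d0, d1, and their parity.
d0, d1 = 0x12, 0xAB
stripe = [d0, d1, d0 ^ d1]
assert parity_ok(stripe)

# Corrupt any single block with the same bit flip: the syndrome
# (XOR of all blocks) is 0x01 in every case, so parity alone
# cannot say which of the three blocks is the wrong one.
syndromes = set()
for i in range(3):
    bad = stripe.copy()
    bad[i] ^= 0x01
    assert not parity_ok(bad)
    syndromes.add(bad[0] ^ bad[1] ^ bad[2])
assert syndromes == {0x01}
```

Detection without localization is exactly the mirrored-set dilemma generalized: one extra equation can tell you *that* something is wrong, but not *where*.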
This could theoretically be solved with checksums, which would establish which copy of the data (or parity) is the correct one; but does any RAID controller actually implement such checksums (which would of course take up additional space)? Or does this need to be handled at the OS level, where some filesystems (e.g. ZFS and Btrfs) checksum their contents? And if so, how can they tell the RAID controller "the data in sector X on disk Y in stripe Z is wrong", when the whole point of a RAID controller is to abstract the underlying storage layer away from the OS as much as possible?
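For what it's worth, the checksum idea can be sketched as follows (simplified Python, loosely modeled on how a checksumming filesystem like ZFS handles a mirror; the data and variable names are made up): by storing a checksum out-of-band from both copies, the arbiter can tell which side of the mirror is wrong and repair it from the good one:

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Hypothetical mirrored block plus an out-of-band checksum
# (ZFS, for instance, keeps the checksum in the parent block
# pointer rather than next to the data it covers).
original = b"important payload"
copy_a = original
copy_b = b"important pay1oad"   # silent corruption on the second disk
stored_checksum = sha(original)

# The checksum arbitrates between the divergent copies: the one
# that matches is trusted, and the bad copy can be rewritten from it.
good = copy_a if sha(copy_a) == stored_checksum else copy_b
assert good == original
```

Note this only works because the filesystem sees both copies individually; a hardware controller that returns one "winning" copy hides exactly the information needed for this arbitration.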