I recently had an HDD failure on a software RAID1 system (Debian 6.0). What appears to have happened is that the active HDD developed some bad blocks, which somehow propagated to the other HDD; that disk was still healthy but was marked as a spare and couldn't synchronize. This is only my assumption, as I cannot say for sure.
I was wondering if any of you knows whether it is possible for errors on a failing HDD to propagate to the other HDD, and if so, whether there is any setting to prevent this from happening?
Any insights on this matter would be greatly appreciated. Thank you.
If Linux software RAID knows it is reading corrupted data, it will not mirror it. However, if your disk is failing and silently returning incorrect data, there is no setting that can recover from that: when the blocks on the two disks differ, md simply has no way of knowing which copy to trust.
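What md does offer is a periodic consistency check that can at least tell you that the mirrors have silently diverged, even though it cannot tell which side is correct. A minimal sketch, assuming your array is `/dev/md0` (adjust the name to your setup); these commands need root and a real md array, so run them only on the affected machine:

```shell
# Ask the md driver to read and compare both halves of the mirror
echo check > /sys/block/md0/md/sync_action

# Watch progress until the check finishes
cat /proc/mdstat

# After the check: number of sectors that differed between the mirrors.
# A non-zero value means silent divergence (md cannot say which copy is good).
cat /sys/block/md0/md/mismatch_cnt
```

Debian normally schedules such a check monthly via the `checkarray` script shipped with the mdadm package, so a non-zero mismatch count may also show up in its mail reports.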
However, you mention it did identify the blocks as being 'bad'. In that event md will mark the disk as faulty and kick it out of the array, and you will have to start the array degraded manually using the good disk. It will refuse to re-sync from that faulty disk unless you force it.
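The usual recovery sequence can be sketched as follows. The device names (`/dev/md0`, `/dev/sda1` as the good member, `/dev/sdb1` as the failed one) are assumptions for illustration; check `mdadm --detail` and `/proc/mdstat` for your actual layout before running anything:

```shell
# Inspect the array state and see which member is marked faulty
mdadm --detail /dev/md0

# Explicitly fail and remove the bad disk (if md hasn't already kicked it)
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# If the array is stopped, assemble and start it degraded
# using only the known-good member (--run allows a degraded start)
mdadm --assemble --run /dev/md0 /dev/sda1

# Later, after replacing the bad drive, add the new disk and let it re-sync
mdadm /dev/md0 --add /dev/sdc1
```

The key point is that the re-sync always copies from the active member to the newly added one, so make sure the disk you start the degraded array with is the one whose data you trust.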
The best approach to preventing silent data corruption is to use file-system-level mirroring, as ZFS and btrfs offer. These can withstand some corruption at the physical level because they checksum all data, so on a read mismatch they know which copy is good and can repair from the other mirror. This may be slower in some cases, though.