Apologies for the horribly vague question...
What are the error rates for reading from and writing to storage devices, and do our file systems and OSes account for them?
I've had bad sectors on my laptop's 320 GB HDD for years now, and it seems to keep chugging along. I keep backups of everything because I'm anticipating a major failure any time now.
What happens when an HDD has a bad sector? Are bad sectors common? Will operating systems work around bad sectors?
What about the occasional HDD read/write error? Seemingly these errors could cause catastrophic damage. Do file systems have a mechanism for detecting and correcting these problems (e.g. checksums)?
Now guess what RAID was invented for ;) Seriously.
Read error.
There are tons around.
Well, the disk will not use those sectors anymore and will map them to spare sectors it keeps in reserve for exactly that purpose. The OS will see nothing. The data on those sectors is gone.
Read up on RAID, which exists precisely to handle issues with storage device reliability. Never trust a single disk. What do you do if the disk just dies tomorrow? Yes, that happens. Quite frequently.
It varies a lot, and depends on the manufacturing processes surrounding the drive in question. There are a couple of different types of storage errors that can impact data and data access:
The non-recoverable read error rate is something that shows up on hard-drive spec sheets. It'll be listed as something like "1 per 10^14 bits read". This is what limits how large a RAID5 array you can safely build with drives of that type: a rebuild has to read every surviving drive end to end, and a single unrecoverable read error during that pass can sink the rebuild.
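As a rough back-of-the-envelope illustration of why that matters, here is a small sketch. The drive size, drive count, and error rate below are assumptions for the example, not numbers from your setup:

```python
# Rough estimate: probability of hitting at least one unrecoverable read
# error (URE) while rebuilding a RAID5 array. During a rebuild, every bit
# of every surviving drive must be read back successfully.

URE_RATE = 1e-14      # assumed spec: 1 unrecoverable error per 10^14 bits read
DRIVE_BYTES = 2e12    # assumed 2 TB drives (example value only)
DRIVES_TOTAL = 6      # assumed 6-drive RAID5 array (example value only)

surviving_drives = DRIVES_TOTAL - 1
bits_to_read = surviving_drives * DRIVE_BYTES * 8

# Treat errors as independent events at the quoted rate.
p_clean_rebuild = (1 - URE_RATE) ** bits_to_read

print(f"Bits read during rebuild:   {bits_to_read:.2e}")
print(f"Chance of at least one URE: {1 - p_clean_rebuild:.1%}")
```

With those example numbers the chance of at least one unrecoverable error during a rebuild comes out around 55%, which is why large RAID5 arrays built from big consumer drives make people nervous.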
Bad sector error rate can't be predicted, at least with rotational media. They just appear. This is why drives ship with a small amount of sector-relocation space. When the OS attempts to write to a sector that has gone bad, the drive writes that sector to the relocation area instead and increments the Reallocated Sectors count in its SMART data. Problems start when the drive runs out of relocation blocks, at which point the bad sectors become visible to the OS. How they are handled from there varies by OS and file system.
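You can watch that counter yourself. A minimal sketch, assuming smartmontools is installed, you have root privileges, and the drive is at /dev/sda (the device path and the exact output layout are assumptions; adjust for your system):

```python
# Read the drive's reallocated-sector count from its SMART attribute table
# by parsing the output of `smartctl -A` (part of smartmontools).
import subprocess

def reallocated_sectors(device="/dev/sda"):
    # smartctl may use a nonzero exit status to flag drive health issues,
    # so we only look at its stdout rather than checking the return code.
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        # SMART attribute 5 is Reallocated_Sector_Ct; the raw count is the last column.
        if fields and fields[0] == "5" and "Reallocated_Sector_Ct" in line:
            return int(fields[-1])
    return None

if __name__ == "__main__":
    print(f"Reallocated sectors: {reallocated_sectors()}")
```

A count that is nonzero but stable is common; a count that keeps climbing is the drive eating through its spare area, which is your cue to replace it.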
The newest filesystems on Linux support checksumming: btrfs checksums both data and metadata, and ext4 can checksum its journal. This allows the filesystem to catch cases where the media returns bad data without reporting an error (such as when the reallocated-sector pool is fully consumed). Checksumming is a somewhat expensive operation, as these things go, but hardware is fast enough these days that it's not much of a problem for most workloads.
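The idea itself is simple: store a checksum alongside each block at write time and verify it at read time, so silent corruption turns into a detectable error. A minimal sketch of the concept (btrfs actually uses crc32c per block; plain CRC32 is used here purely for illustration):

```python
# Illustration of block-level checksumming: a checksum is stored with each
# block when it is written, and verified before the data is handed back.
import zlib

BLOCK_SIZE = 4096

def write_block(data: bytes):
    """Return the block together with its checksum, as a checksumming filesystem would store them."""
    return data, zlib.crc32(data)

def read_block(data: bytes, stored_checksum: int) -> bytes:
    """Verify the stored checksum before returning the data; fail loudly on a mismatch."""
    if zlib.crc32(data) != stored_checksum:
        raise IOError("checksum mismatch: silent corruption detected")
    return data

# Simulate a single flipped bit on the platter after the block was written.
block, csum = write_block(b"\x00" * BLOCK_SIZE)
corrupted = b"\x01" + block[1:]

read_block(block, csum)  # passes verification
try:
    read_block(corrupted, csum)
except IOError as err:
    print(err)  # the corruption is caught instead of being served to the application
```

Without the checksum, that flipped bit would simply be returned to the application as if it were valid data.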
Windows NTFS does not support journal or data checksumming, and probably won't for several years. Bad-sector detection happens after the fact, when a read of some sectors or clusters fails. Checksumming is a very new idea (relative to the pace of file-system development) that hasn't fully percolated out to every corner of the market yet. The most nimble operating systems (Linux, probably the BSDs, and until recently the OpenSolaris ZFS builds) get it first; the rest will get it eventually.