I have a NAS appliance that is just over a month old. It is configured to email me alerts generated from the hard drives' SMART data. After one day, one of the hard drives reported that a sector had gone bad and been reallocated. Over the first week, that number climbed to six total sectors for the hard drive in question. After a month, the number stands at nine reallocated sectors. The rate definitely seems to be decelerating.
The NAS is configured with six 1.5 TB drives in a RAID-5 configuration. With such high-capacity drives, I would expect a sector to fail from time to time, so I was not concerned when the first few sectors were reallocated. It bothers me, though, that none of the other disks are reporting any problems.
At what rate of reallocations, or total number of reallocations, should I start to get worried for the drive's health? Might this vary based on the capacity of the drive?
Re-reading Google's paper on the subject, "Failure Trends in a Large Disk Drive Population", I think I can safely say that Adam's answer is incorrect. In their analysis of a very large population of drives, roughly 9% had non-zero reallocation counts. The telling quote is this:
It's even more interesting when dealing with "offline reallocations", which are reallocations discovered during background scrubbing of the drive rather than during actual requested I/O operations. Their conclusion:
My policy from now on will be that drives with non-zero reallocation counts are to be scheduled for replacement.
Drives, like most components, have a bathtub curve failure rate. They fail a lot in the beginning, have a relatively low failure rate in the middle, and then fail a lot as they reach the end of their life.
Just as the whole drive follows this curve, particular areas of the disk will also follow this curve. You'll see a lot of sector reallocations early in the drive's life, but this should taper off. When the drive starts to fail at the end of its life, it'll start losing more and more sectors.
You don't need to worry about six reallocations (depending on the drive - consult the manufacturer), but you do need to watch the frequency of new reallocations. If the deterioration accelerates or stays the same, worry. Otherwise, the drive should be fine after the initial break-in period.
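The "watch the frequency" advice can be sketched as a small script. This is only an illustrative example, not anything a NAS actually runs; the sample dates and counts below are hypothetical stand-ins for values collected from the daily SMART alert emails.

```python
from datetime import date

# Hypothetical log of (sample date, Reallocated_Sector_Ct raw value),
# e.g. gathered from the NAS's SMART email alerts over time.
samples = [
    (date(2009, 6, 1), 1),
    (date(2009, 6, 8), 6),
    (date(2009, 7, 1), 9),
]

def reallocation_rates(samples):
    """Reallocations per day between consecutive samples."""
    rates = []
    for (d0, c0), (d1, c1) in zip(samples, samples[1:]):
        days = (d1 - d0).days
        rates.append((c1 - c0) / days)
    return rates

rates = reallocation_rates(samples)

# Worry if the rate is flat or accelerating; relax if it tapers off.
accelerating = any(later >= earlier
                   for earlier, later in zip(rates, rates[1:]))
```

With the sample numbers above the per-day rate drops between the two intervals, so `accelerating` comes out `False` - the tapering-off pattern Adam describes as normal break-in behavior.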
-Adam
Different drives probably have different parameters. On the last drive I checked, a 1 TB enterprise-series disk from one vendor, there were 2048 sectors reserved for reallocation.
You can estimate the number of reserved sectors by looking at the S.M.A.R.T. report of a drive that has a nonzero number of reallocated sectors. Consider the report on a failed drive below.
Here 95% of the reserved capacity has been used, which corresponds to 1955 sectors. Therefore the initial capacity was about 2057 sectors (1955 / 0.95). In fact it is 2048; the difference is due to rounding error.
S.M.A.R.T. flags the drive as failing when the number of reallocated sectors reaches a certain threshold. For the drive in question this threshold is set at 64% of the reserved capacity, which is roughly 1310 remapped sectors.
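The arithmetic above can be checked in a couple of lines, using the figures from the failed drive's report discussed here:

```python
# Estimate the initial reserved-sector pool from the normalized
# S.M.A.R.T. value: 1955 remapped sectors correspond to 95% used.
used_sectors = 1955
used_fraction = 0.95
estimated_total = int(used_sectors / used_fraction)  # about 2057; the real pool is 2048

# Failure threshold for this particular drive: 64% of the reserved capacity.
actual_total = 2048
threshold_sectors = int(actual_total * 0.64)         # roughly 1310 remapped sectors
```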
However, the reserved sectors do not lie in one contiguous span. Instead, they are split into several groups, each used for remapping sectors from a specific part of the disk. This keeps remapped data physically close to its original area on the disk.
The downside of this locality is that the disk may still have many reserved sectors overall while one area has already run out of reserved capacity. In that case the behavior depends on the firmware: on one drive we observed, it went into a FAILED state and blocked when an error occurred in a part of the disk that was no longer protected.
You might want to run a S.M.A.R.T. long self-test, if the drive supports it. This may give you more information about the status of the drive. If your NAS cannot do this, and if you can pull the drive out or power down the NAS for a few hours, then you can do the long self-test with the hard disk plugged into another machine.
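If you do run the self-test from another machine, the usual tool is `smartctl` from the smartmontools package. The sketch below just builds the relevant command lines; the device path `/dev/sdb` is a placeholder assumption - substitute whatever device node your drive appears as.

```python
def smartctl_cmd(action, device):
    """Build a smartctl command line for a given action.

    Assumes smartmontools is installed; the device path is supplied
    by the caller (e.g. "/dev/sdb" is just a placeholder).
    """
    actions = {
        "start_long_test": ["-t", "long"],      # kick off the long self-test
        "selftest_log":    ["-l", "selftest"],  # read back self-test results
        "full_report":     ["-a"],              # full SMART report, incl. attributes
    }
    return ["smartctl"] + actions[action] + [device]

# e.g. smartctl_cmd("start_long_test", "/dev/sdb")
# builds ["smartctl", "-t", "long", "/dev/sdb"]
```

The long test runs in the background on the drive itself, so after starting it you wait (often a few hours on large drives) and then read the result from the self-test log.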
When a drive this new behaves like this, it's not to be trusted at all!
Send it back as soon as possible, and get a replacement drive.
Different manufacturers have different "acceptable loss" numbers (same idea as with monitors and bad pixels). Check with the drive manufacturer to find out what their standard is.
It does look like a bad trend though...
Western Digital is particularly proud of a technology that recovers a bad sector within an acceptable time instead of letting a disk in a RAID array freeze. It's called TLER, Time-Limited Error Recovery (http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery). The time limit is typically 5 to 7 seconds.
From what I found on the web, some WD drives ship with this option disabled, but some people have enabled the feature on cheap WD Green drives and then placed them in RAID arrays.
The WDTLER utility has been removed from the WD support site, but it can still be found easily via Google.
P.S. I use this utility only for reading the TLER status, and I'm not using RAID at the moment. :)