I recently deployed a server with 5x 1TB drives (I won't mention their brand, but it was one of the big two). I was initially warned against getting large-capacity drives, as a friend advised me that they have a very low MTBF and I would be better off getting more, smaller-capacity drives, as they are not 'being pushed to the limit' in terms of what the technology can handle.
Since then, three of the five disks have failed. Thankfully I was able to replace and rebuild the array before the next disk failed, but it's got me very very worried.
What are your thoughts? Did I just get them in a bad batch? Or are newer/higher capacity disks more likely to fail than tried and tested disks?
You probably got a bad batch. I am nervous about deploying arrays built from disks from the same batch for that reason -- they are likely to have a similar life-span, which makes getting replacements potentially very exciting when one fails.
It isn't impossible that there is some design defect with the drives; that has definitely happened before. However, if there is really something wrong with a drive, the Internet is usually full of complaints about it, as opposed to the usual background noise you'll find about anything.
This is a hard question to answer unless you have the resources of a large organization. See Google's research into hard disk failures.
When making a significant purchase of disks, I will determine the rough disk size with the lowest cost per byte, which is generally one generation older than the latest. It stands to reason that the manufacturers will have improved that generation's reliability by then.
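As a rough illustration (the prices below are made-up numbers, not real quotes), finding that sweet spot is just a cost-per-GB comparison:

```
# Hypothetical street prices in dollars, keyed by capacity in GB --
# substitute current quotes from your own supplier.
prices = {500: 55.0, 750: 70.0, 1000: 120.0}

for gb, price in sorted(prices.items()):
    print(f"{gb:>4} GB: ${price / gb:.3f}/GB")

best = min(prices, key=lambda gb: prices[gb] / gb)
print(f"Sweet spot at these prices: {best} GB")
```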
More platters + more heads equals higher chance of failure.
Take two common WD hard drives:
640GB = two platters
1TB = three platters
That extra platter = more noise, more power usage, more heat, slower drive ready time, more susceptible to shock damage, and more vibration.
If they made the same drive design with only one platter, it would have even better specs. In this case these are consumer-grade drives, but they are high-end consumer-grade drives with double the cache and a 5-year warranty. You'll see similar math if you closely inspect the documentation for any brand or style of traditional hard drive (spinning platters). It's purely a matter of physics that more platters make a drive less reliable.
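A back-of-the-envelope way to see this, assuming (purely for illustration) that each platter independently has the same small annual chance of causing a failure:

```
# Toy model: if each platter/head assembly independently has a small
# annual failure probability p, the drive survives the year only if
# every one of them does.
def drive_failure_probability(p_per_platter, platters):
    return 1 - (1 - p_per_platter) ** platters

p = 0.02  # assumed 2% per platter per year -- a made-up number
for n in (1, 2, 3):
    print(f"{n} platter(s): {drive_failure_probability(p, n):.2%} chance of failure per year")
```

The exact numbers are meaningless, but the direction is the point: every extra platter compounds the odds.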
Jeff Hengesbach was also right in what he said about the trade-offs of bigger drives, and add in a small dose of Graeme Perrow's point about sector counts, and you get:
More platters = bad
More storage space is a mixed bag; the pros and cons of that are numerous.
More sectors really is more chance for errors. Not necessarily linear in scale but definitely a factor.
Unless you need space more than reliability, I would suggest sticking to single-platter or dual-platter drives. It takes research, and in some cases luck, to know what you'll get when ordering drives, as some manufacturers not only avoid publishing the number of platters, but may actually sell more than one drive design under the same part number.
Take for example the WD3200AAKS: there is a single-platter 320GB version and a dual-platter 320GB version (160GB x 2). On top of that, there are multiple labels and drive housings in use, so you can't easily look at the drive and know how many platters are inside. The only way to know is to search online: the extended model numbers WD3200AAKS-00B3A0 and WD3200AAKS-75VYA0 tell you which is which, but no retailer will tell you which one you'll get.
I believe a higher-than-normal failure rate comes with any new technology. I've always been told never to buy the first model year of a car; wait until they work the bugs out. I'd say the same thing probably holds true for many other things, including hard drives.
I'm not sure it's fair to say whether 'large' disks have a worse MTBF or not. I have a big-name system with a handful of 750GB drives, and in the past 2+ years none have failed (750GB was "big" 2 years ago). But I also know a big-name system that was built when 250GB was big, and that array has fallen over a few times. The MTBF debate is something of a holy war.
The primary concern with 'big' drives is the rebuild time when a failure occurs. The larger the drive, the longer the rebuild, and the larger the window for an additional drive failure and potential loss of the array. With 'big' drives, the business value of availability should determine a level of acceptable risk (array loss), which will drive your RAID level selection and drive count (more drives = more chances of drive failure).
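To put a very rough number on that rebuild window (the rebuild speed and failure rate below are assumed figures; real rebuilds are rarely this clean):

```
import math

# Toy estimate of the chance that another drive fails while a degraded
# array is rebuilding. Assumes a constant failure rate (exponential
# lifetime model) and a rebuild time proportional to drive capacity.
def p_failure_during_rebuild(drive_gb, rebuild_mb_per_s, surviving_drives, annual_failure_rate):
    rebuild_hours = (drive_gb * 1024) / rebuild_mb_per_s / 3600
    hourly_rate = -math.log(1 - annual_failure_rate) / 8760  # per surviving drive
    return 1 - math.exp(-hourly_rate * surviving_drives * rebuild_hours)

for gb in (250, 500, 1000):
    risk = p_failure_during_rebuild(gb, rebuild_mb_per_s=80, surviving_drives=4, annual_failure_rate=0.03)
    print(f"{gb:>4} GB drives: {risk:.4%} chance of a second failure during rebuild")
```

The absolute numbers depend entirely on the assumptions, but the risk scales linearly with capacity, which is the point about 'big' drives.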
Business SATA / RAID has come a long way in the past handful of years. I don't think the big names would offer it if they knew it would be a major support issue or a source of customer letdown. I'd be curious to know your reliability going forward now that you've replaced some of the original batch.
Always use smaller-capacity hard drives for production use. I've never checked the physics behind it, but smaller disks just tend to break down less often. That's what everybody has always told me.
Are they all on the same computer or disk controller? You did say you had to rebuild the array. If that is the case, then maybe something is faulty with the controller, power supply, or memory. If not, I would also guess a faulty batch of drives. There might also be a compatibility issue between the particular drives you are using and that particular controller.
Also, when people say that larger disks have a worse MTBF, I wonder how that figure is calculated. Let's say you have 2x250GB disks and 1x500GB disk. Maybe this is naive, but wouldn't the drive that holds twice as much data have more data it could fail with? I guess I don't know whether MTBF includes any misread or miswrite, or whether it only means the disk becomes mechanically broken. Does anyone know if there is a strict industry standard and definition of MTBF for hard disks?
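The closest I can get to something tangible is the usual (simplistic) conversion from a quoted MTBF in hours to an annualized failure rate, assuming an exponential lifetime:

```
import math

def annualized_failure_rate(mtbf_hours):
    # Exponential-lifetime approximation: AFR = 1 - exp(-hours_per_year / MTBF)
    return 1 - math.exp(-8760 / mtbf_hours)

# Vendors often quote figures on the order of 1,000,000 hours
print(f"{annualized_failure_rate(1_000_000):.2%} expected failures per drive-year")
```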
Here are a few things I would check:
1) Are the serial numbers on the drives pretty close together? If so, you might have a faulty batch.
2) How is the environment your server lives in? Have you had issues with other hardware failing recently?
3) Do the drives happen to be Seagate Barracudas? There are known issues with those drives; see this Computerworld article on it.
4) Did these drives come as part of a system, or did you buy them yourself? If you bought OEM drives, there is no way to ensure that they were handled with care before you purchased them.
I've personally had incredible luck with hard drives. I've only had two drives fail on me. Only one of those failures was on a drive I was actually using. However, all around me I've seen lots of people lose data on hard drives.
The higher failure rate of large drives could just be a function of their size. A drive with fifty million sectors has ten times the chance of having a bad sector compared to a drive with five million sectors. I'm assuming the failure rate per sector is the same for large and small drives here, which is probably not a good assumption; as someone else said, since terabyte drives are still relatively new, they probably have a higher failure rate to begin with.
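To make that scaling concrete (the per-sector probability here is a made-up number, purely for illustration):

```
# If each sector independently has some tiny chance of going bad over a
# given period, the chance of seeing at least one bad sector grows with
# the sector count.
p_bad_sector = 1e-9  # assumed per-sector probability -- illustrative only

for sectors in (5_000_000, 50_000_000):
    p_at_least_one = 1 - (1 - p_bad_sector) ** sectors
    print(f"{sectors:>10,} sectors: {p_at_least_one:.3%} chance of at least one bad sector")
```

At probabilities this small the two results differ by almost exactly a factor of ten, which matches the intuition above.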
In your case, it just sounds like a bad batch of drives.
If you bought all the drives at the same time from the same place it is possible that they all come from a single iffy batch.
When putting together a RAID array I generally recommend mixing drives a little, i.e. a mix of manufacturers or at least drives from different suppliers (to reduce the risk of all the drives being from one bad batch).
Another recommendation I would make is to use smaller drives if possible (i.e. you have physical space for the drives and controller ports to hang them off), so instead of a RAID 1 volume of two 1TB drives, have a RAID 10 of four 500GB units. This way, when a drive goes bad you are only rebuilding a smaller array which is part of a larger array instead of rebuilding the whole array (reducing the length of time during which the array is not complete), and it also offers a little more redundancy (in four of the six "two drives fail at once" scenarios, a 4-drive RAID 10 array will live). You can do the same by combining smaller R5 arrays into an R50 array, if supported by your RAID controller/software.
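If you want to sanity-check the "four of the six" figure, it's a quick enumeration (the drive labels A/B and C/D here are just hypothetical mirrored pairs):

```
from itertools import combinations

# A 4-drive RAID 10: two mirrored pairs (A+B and C+D), striped together.
mirrors = [("A", "B"), ("C", "D")]
drives = [d for pair in mirrors for d in pair]

two_drive_failures = list(combinations(drives, 2))
# The array survives as long as no mirror loses both of its members.
survivable = [f for f in two_drive_failures
              if not any(set(pair) <= set(f) for pair in mirrors)]

print(f"{len(survivable)} of {len(two_drive_failures)} two-drive failure combinations are survivable")
```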
Maybe I'm overly paranoid, but I would be wary of trusting 1TB of data to one single drive, even if that drive is part of a redundant array.
Obviously there are physical constraints at play which may make the technique impractical for you, and power draw constraints too, so YMMV. As a "for instance" of when an array of arrays isn't practical: I'd rather have four drives as R10 in one of our servers here in place of the larger drives in an R1 array, but it doesn't physically have the room, buying/building an external array was out of budget, and we could not use space on an existing array as the data had to be kept physically separate from all other data due to data protection requirements.