
It's getting painfully obvious that we need to do a review of basic RAID usage to get at least up to 2009 standards.

 

http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162

http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805

http://www.smbitjournal.com/2012/05/when-no-redundancy-is-more-reliable

http://www.smbitjournal.com/2012/07/hot-spare-or-a-hot-mess

http://www.smbitjournal.com/2012/08/nearly-as-good-is-not-better

http://www.smbitjournal.com/2012/11/choosing-a-raid-level-by-drive-count

http://www.smbitjournal.com/2012/12/the-history-of-array-splitting

http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-server-storage

http://www.smbitjournal.com/2012/11/hardware-and-software-raid

http://mit.miracleas.dk/baarf.aspx

 

Basically, RAID 5 on HDD is right out. I've been told multiple times that "if a rebuild encounters an error, it will just skip that sector and move on." The thing is, that's not the case. I've seen it many times over the years: the rebuild doesn't just skip an error like that. Add to that the fact that we're talking 4K sectors today versus 512 bytes, and you can lose a lot more data in a single bad sector.
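For anyone who wants to sanity-check the math behind those articles, here's a back-of-the-envelope sketch. The drive counts, sizes, and URE figures are example numbers (not from this thread), and it assumes read errors are independent, which is optimistic:

# Odds of a URE-free RAID 5 rebuild: every surviving drive has to
# read back cleanly, end to end.
def rebuild_success(drives: int, capacity_tb: float, ure_bits: float = 1e14) -> float:
    bits_to_read = (drives - 1) * capacity_tb * 1e12 * 8  # full read of survivors
    return (1 - 1 / ure_bits) ** bits_to_read

print(f"4 x 4 TB @ 1 per 1e14 bits: {rebuild_success(4, 4):.1%}")        # ~38%
print(f"4 x 4 TB @ 1 per 1e15 bits: {rebuild_success(4, 4, 1e15):.1%}")  # ~91%

That's roughly a coin flip for a clean rebuild on cheap 4 TB drives, which is the articles' whole point.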


Benjamin Webb At least you're using raidz2, which means you should be OK for now. It's when the array becomes degraded that I'd be a little worried.

 

Also, keep in mind that those articles were written years ago now. Drive sizes are orders of magnitude larger than they used to be. The number of drives doesn't matter much; it's the total size of the array that's the sticking point, as the numbers below show.
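To see why drive count washes out, here's the same sketch as above run at a fixed total size. The 32 TB array and the 1-per-10^14 URE rating are illustrative assumptions, nothing more:

# Same total raw capacity, different spindle counts.
def rebuild_success(drives: int, capacity_tb: float, ure_bits: float = 1e14) -> float:
    bits = (drives - 1) * capacity_tb * 1e12 * 8
    return (1 - 1 / ure_bits) ** bits

for n in (4, 8, 16):
    print(f"{n} x {32 / n:g} TB (32 TB raw): {rebuild_success(n, 32 / n):.1%}")
# ~15%, ~11%, ~9% -- all bad, all within a few points of each other

The total array size sets the outcome; how you slice it into drives barely moves the needle.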

 

Plus, URE ratings have gone back down to 1 in 10^14 recently for many drives. It's one reason those 8 TB and larger drives are more expensive per GB: a 10 TB drive isn't terribly useful if you can't read it all back successfully, so most of them are rated at 1 in 10^15.
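Rough odds of reading an entire drive back with zero UREs at those two ratings (the 10 TB size is an example, and real drives recover many errors internally, so treat this as a worst case):

# Full-drive read at the two common URE ratings.
def full_read_success(capacity_tb: float, ure_bits: float) -> float:
    bits = capacity_tb * 1e12 * 8
    return (1 - 1 / ure_bits) ** bits

for ure in (1e14, 1e15):
    print(f"10 TB @ 1 per {ure:.0e} bits: {full_read_success(10, ure):.1%}")
# ~45% at 1e14 vs ~92% at 1e15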


Travis Hershberger I see many of these articles, but they are usually followed by ones from Backblaze, where they record as few as 2 failures per 10,000 drives for Hitachi. So most of them are full of it in reality. It'd be like being struck by lightning to lose more than one drive at the same time.

 

Your main points of failure are not your hard drives; they're your power supply and your drive controller, since you only have one of each, lol. You can easily lose the lot that way. That's why you're an idiot if you don't back up, as I stated before.

 

I think there was a question up today about a NAS in trouble because its UPS messed up and power-cycled the unit like crazy.


Travis Hershberger True, RAID is not backup, but we're at a point where the drives people use are the most reliable in practice I've ever seen, and I still keep seeing all these sky-is-falling articles from people who haven't worked in a server room recently.

 

I would put a 36-disk RAID 5 array of the largest drives up against any two Seagate drives from the nineties in RAID 1, and I bet the RAID 5 would win.


Benjamin Webb Performance-wise, of course. Reliability-wise? Really? I just replaced a friend's hard drive that was only 6 months old. The problem is recovering the array, and I can guarantee a RAID 5 with 36 large drives WILL FAIL to rebuild: at, say, 10 TB per drive and a 1-in-10^14 URE rating, the rebuild has to read the other 35 drives end to end, about 2.8x10^15 bits, which works out to an expected ~28 read errors, so the odds of a clean pass are effectively zero. At least the old drives would stand a chance of a successful rebuild.

 

Hard drives fail all the time; otherwise we wouldn't constantly be talking about RAID arrays and backups.


Travis Hershberger

 

backblaze.com - Hard Drive Failure Rates: The Results from 68,813 Hard Drives

 

According to their numbers, they average about a 2% chance of failure each year across 40,000+ drives, which works out to roughly 10% of drives failing in five years. If you picked Hitachi, even lower. So about 1 in 10 drives would die, meaning roughly 3 dead out of that 36-drive array in five years, and the odds of them all hitting at once are low.
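For anyone checking that arithmetic, here's the compounding, assuming a constant, independent 2% annual failure rate (real failure rates follow a bathtub curve, so this is only a rough check):

# 2% AFR compounded over five years, then scaled to a 36-drive pool.
afr = 0.02
five_year = 1 - (1 - afr) ** 5
print(f"5-year failure odds per drive: {five_year:.1%}")        # ~9.6%, about 1 in 10
print(f"Expected failures in 36 drives: {36 * five_year:.1f}")  # ~3.5 drives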

 

Those Seagates I was referring to were low-capacity drives that had something like a 90% failure rate in 3 years. I think HP even sued them over it.

 

We have gotten better at building stuff and nobody seems to account for that.

