Discussion in 'Storage & Backup' started by Yehat, Jul 14, 2020.
capacity-based upgrade, almost always before the warranty has expired.
When I ran my unRAID server, drives weren't replaced till they failed, and I think all unRAID users and most NAS users would do the same, except for those running low on space. In a PC, probably when nearing the end of warranty for important data; for throwaway stuff like a movie drive, at the first bad sector etc.
My PCs tend to last 3-5 years. So: new PC, new storage drives with newer, faster technology. The old drives get put in the cupboard as a permanent backup.
I really slowed down in collecting data; as a consequence I didn't replace a pair of old Hitachis until last year, which makes them about 9 years old. That in itself probably wasn't necessary, but the price was definitely right - and still would be if you could buy comparable drives at that price!
I'm still running a 2008 NAS with 4x 1TB WD Greens. It stores movies, TV shows etc. I back up family photos and home movies to the cloud, but ultimately if anything else gets lost, so be it.
Obviously I wouldn’t be so cavalier if I was talking business data.
My first real storage setup was back in 2009: 8x 1TB drives in RAID 5 residing in my main PC. That served me well till 2015, when I built a dedicated NAS with 5x 5TB in a RAID-Z1 array. It still has about 30% free space and won't be replaced for years to come.
I eventually grew out of my hardcore downloading habit, which occupied a lot of free time from the early 2000s until about 6-7 years ago. Now with Netflix, YouTube, FTA catch-up apps etc., and less free time than ever with 3 kids, I lack both the time and the desire.
Hopefully I get 15-20 years out of this NAS. It's only on a few hours a day, which probably helps prolong its life.
I never retire a hard drive just because it's old. If you have a good backup regime going, getting back up and running after a dead drive only takes an hour or two. Plus I would much rather trust an old drive that's proven to be reliable than a completely new drive in its first few hundred hours of service.
I retire my drives when they drop an error. I got an alert on this WD Red 2TB drive yesterday after almost 6 years of spinning 24/7. The other 5 identical drives are showing 0 read errors so far, but now that one has gone, new storage is in my immediate future, and with the SMR stuff, the new ones most likely won't be WD. The cost per year over 6 years is low, so buying 6 new drives isn't too painful.
# smartctl -l selftest /dev/da2
smartctl 7.0 2018-12-30 r4883 [FreeBSD 11.3-RELEASE-p7 amd64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       90%      49947        1973788747
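For anyone who wants to run the same check: that was a standard short self-test, which you can kick off manually and then re-read the log once it finishes. Substitute your own device node for /dev/da2:

# smartctl -t short /dev/da2
# smartctl -l selftest /dev/da2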
Yep, I'm another one who only changes them out when they start throwing errors.
Sometimes it takes me a while to notice (I had changed my email address and forgot to update it for notifications on my backup NAS).
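If you'd rather not rely on remembering, smartd can run the self-tests and mail the alerts for you. A minimal sketch of a /usr/local/etc/smartd.conf entry on FreeBSD - the device, address and schedule here are placeholders, not anyone's actual config:

# Monitor all attributes (-a), email alerts (-m), send a test mail on
# startup (-M test), short self-test daily at 2am and long self-test
# Saturdays at 3am (-s).
/dev/da2 -a -m you@example.com -M test -s (S/../.././02|L/../../6/03)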
You turn yours off? For drive longevity? I never bother turning mine off; these NAS-specific drives should last a long time anyway.
General consensus is that the spin up / spin down cycle is the highest risk for mechanical failure of a drive. That not only puts strain on the spinning mechanism, but also the temperature change results in metal expansion/contraction which can cause issues.
I suspect this is less of an issue for SSDs, and theoretically they should last longer if powered off when not in use? I'm not entirely sure though, and I suspect it'll take a decade or so for large-scale enterprise users to report in on their findings and let us know (similar to the above statement, which is made off the back of reports from folks like Google and Backblaze over the years).
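If you want to see how hard your own drives are being cycled, the relevant counters are plain SMART attributes. A quick check (attribute names vary slightly between vendors, so adjust the pattern to taste):

# smartctl -A /dev/da2 | egrep 'Power_Cycle_Count|Start_Stop_Count|Load_Cycle_Count'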
I replace them every 3 years. I used to hoard data but did a huge purge, so it's not really a big deal. I basically have 4 copies: PC, offline local, offline remote and cloud.
I’ve started using YouTube as a private cloud for some shows I like. Have had copyright related warnings & the odd auto deletion but no “strikes” as such.
I recently replaced five 2TB HGST Ultrastar drives that had 30,000 hours on them. The old drives had no errors, but the eBay ones were too tempting to pass up at $110 each for 3TB. They were OEM drives with only a 1-year seller warranty, which is one reason they were cheap, but after buying one drive and seeing that it had zero hours on it and was a genuine HGST drive, I bought four more. Another reason they were cheap was that my old drives came up as SCSI in Device Manager, meaning they were NAS drives with a SATA connection, while the new ones showed as SATA - but that was no big deal. Normal retail HGST drives have a 2.0-million-hour MTBF specification and a 5-year warranty. Mine were OEM rather than retail drives, but as far as I can tell they're the same drives as the retail ones.
$110 for 3TB Ultrastar OEM drives is still a bit too much to spend, especially considering they are OEM overstock spares.
$110 for the 3TB version seemed like a good deal to me considering that the 4TB version (with a 5 year warranty) sells for $365 from Umart.
Hmm, same drives I was selling a month ago for $100 a pop. Didn't realise they still sell these drives for that much.
Modern, or even semi-modern, HDDs have insanely high MTBF figures, so failures should be pretty rare before they are naturally replaced by larger drives.
My old NAS (2010) is still running 7 of its 8 original 2TB drives (and even then, they were just Seagate green drives I think, not NAS units). One threw a SMART error at about 4 years of 24/7 spinning and was replaced with some other random 2TB drive. That machine was semi-retired about then to be the next line of backup. It only powers up once a week, does an incremental and versioned backup of the important files from the main NAS, then shuts down again. It's been doing that for many years without any further issues.
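That sort of weekly versioned pull needs nothing fancier than rsync and hard links. A minimal sketch, assuming a cron-started shell script - the hostname and paths are made up for illustration, not what my box actually runs:

#!/bin/sh
# Weekly versioned backup: files unchanged since the previous snapshot are
# hard-linked rather than copied, so each run only stores what changed.
SRC="mainnas:/volume1/important/"
DEST="/backup/snapshots"
NEW="$DEST/$(date +%Y-%m-%d)"

# On the very first run 'latest' won't exist yet; rsync just warns and
# makes a full copy instead.
rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$NEW"
ln -sfn "$NEW" "$DEST/latest"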
The current NAS was purchased in 2014 along with 8x 4TB HGST drives, two of which had SMART errors in the first month and were replaced. 2,085 days and counting on the originals.
The NAS itself was upgraded to an i7-3770S and 16GB RAM, so it's more than powerful enough, even by today's standards, to do everything it needs to, although I've migrated most of the VMs and services to other devices (a Raspberry Pi, a dedicated SFF machine), so it's not as necessary anymore.
A full replacement is in the $3-4k range for a new NAS, and at least 50% of that is the drives themselves, although I can now get much higher density with a smaller number of disks.
Indeed, and besides mechanical wear, a good percentage of SMART issues are triggered by prolonged use at high temperatures, which in most cases I've seen causes the drive to throw all sorts of logical mapping errors flagged by SMART: bad sectors, uncorrectable sectors, random disconnections etc.