Next gen filesystems (ZFS, BtrFS, ReFS, APFS, etc)

Discussion in 'Storage & Backup' started by elvis, May 20, 2016.

  1. Smokin Whale

    Smokin Whale Member

    Joined:
    Nov 29, 2006
    Messages:
    5,183
    Location:
    Pacific Ocean off SC
    You can't really. But it's a pretty safe bet with the big guys. I doubt it's anything to worry about.

Backblaze documented how they handle their filesystems and their strategies against data corruption here. Pretty interesting. Someone asked about ZFS, since Backblaze actually use EXT4, and there's some good info there.

    FYI: For anyone who doesn't know, Backblaze are well known for using consumer drives rather than enterprise stuff. They're able to do that using the unique parity checking systems they've implemented. Pretty clever, and probably saves them quite a wad of cash.
     
    Last edited: May 22, 2016
  2. Diode

    Diode Member

    Joined:
    Jun 17, 2011
    Messages:
    1,736
    Location:
    Melbourne
At the end of the day nothing is foolproof, which just goes to show the importance of having good, proper backups. ;)

In the end, having multiple copies of the data floating about is what saved my skin. It's a problem I would not like to face again, so I'll be looking for ways to improve. I've also been wanting to make sure my photos are safe from localised disasters like fire and theft. So it's all about finding the right balance on where I want to spend my dollars to improve my overall protection.

Sorry if I'm dragging things a bit off topic.
     
  3. OP
    OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,505
    Location:
    Brisbane
    Looks like they use standard erasure coding over bricks. Same method as Gluster's "disperse volume" setup.

    They could still use BtrFS underneath for extra protection. I recently converted our 2x 300TB Gluster volumes to BtrFS backed erasure coding, and it works quite well.
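For anyone unfamiliar with Gluster's dispersed volumes, a minimal sketch of the setup being described (volume name, hostnames and brick paths are all made up for illustration):

```shell
# Erasure-coded "disperse" volume over 6 bricks: data survives
# the loss of any 2 bricks (redundancy 2). The shell expands
# node{1..6} into node1 ... node6 before gluster sees it.
gluster volume create dispvol disperse 6 redundancy 2 \
    node{1..6}:/bricks/dispvol
gluster volume start dispvol
```

Each brick could itself sit on a BtrFS filesystem for the extra checksumming layer, which is the combination being discussed above.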
     
  4. Smokin Whale

    Smokin Whale Member

    Joined:
    Nov 29, 2006
    Messages:
    5,183
    Location:
    Pacific Ocean off SC
    That's what I was thinking. I'm guessing these systems were designed and implemented around 2013, which is just when Btrfs was considered stable. The article is a year old, maybe leave a blog comment? Who knows, they might be using it now.
     
  5. Smokin Whale

    Smokin Whale Member

    Joined:
    Nov 29, 2006
    Messages:
    5,183
    Location:
    Pacific Ocean off SC
    Ok, couple more questions.

    1. You mention you have an old AMD box at home. Have you considered running ECC RAM on it? AMD chipsets happily accept ECC RAM.

(On a side note: this is one of the reasons I'm getting excited for Zen. I hope they continue this trend; low-cost, decently performing, power-efficient, ECC-compatible hardware platforms would be awesome.)

2. What are some of the considerations for using something like Btrfs over an interface like USB 3.0? Does it retain its benefits? With the increase of streaming over home networks, I've been looking at adding ultra-cheap, ultra-low-power network storage to homes, using cheap $120 Cherry Trail mini PCs with a couple of cheap 2TB external drives in some sort of mirror configuration. From a bit of reading, I'm unsure if it's worth the effort, as the performance figures aren't massively promising.
     
  6. fad

    fad Member

    Joined:
    Jun 26, 2001
    Messages:
    2,460
    Location:
    City, Canberra, Australia
I have heard of people using Intel NUCs with an M.2-to-PCIe ribbon, then running LSI HBAs on it at x2 bandwidth.

    Here is the write up of a second NIC.
    http://www.virten.net/2015/09/adding-a-second-nic-to-a-5th-gen-intel-nuc-or-other-pcie-cards/
    This one is a HBA addon.
    https://forums.servethehome.com/index.php?threads/intel-nuc-based-64tb-home-server.6846/
     
  7. OP
    OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,505
    Location:
    Brisbane
    According to Google, my ancient Gigabyte GA-MA74GM-S2 doesn't support ECC.

    But if it ever gives up the ghost, I'll be getting an ECC capable system and RAM for its replacement.

    I think with die shrinks and multi-core going the way it is, PCI-E storage and RAM scaling into the terabyte ranges, ECC RAM is soon going to be mandatory (like next-gen filesystems are mandatory as we approach petabyte scale storage).

    As long as the OS can see each drive individually (i.e.: not hidden behind some sort of RAID/JBOD system hidden by firmware), then the transport mechanism to the drive doesn't matter.

    FWIW, I've formatted all of the USB backup drives in my house to BtrFS. My wife's business laptop uses 2 of these in backup rotation, and it's been going well.
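A rough sketch of that sort of USB backup drive setup (device node, label and mount point are placeholders; check `lsblk` first, as `mkfs` destroys whatever is on the target):

```shell
# Format a USB backup drive as BtrFS and mount it.
mkfs.btrfs -L backup1 /dev/sdX
mount /dev/sdX /mnt/backup

# Periodically verify every block's checksum on the drive:
btrfs scrub start /mnt/backup
btrfs scrub status /mnt/backup
```

Even on a single drive with no redundancy, the scrub will at least *detect* silent corruption in the backups, which plain EXT4/NTFS won't.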
     
    Last edited: May 22, 2016
  8. digizone

    digizone Member

    Joined:
    Jun 3, 2003
    Messages:
    339
    Location:
    Voyger1 is chasing me
Elvis, thank you. RockStor looks very interesting. I can't find any comparisons to Unraid. Are you saying that RockStor provides this bare metal access that Unraid is missing?

I guess I will have a go with this over the next year or so. I'm keen to understand if I can throw any larger or smaller HDD at it in the same way as Unraid.

RockStor seems new; I guess once it develops a bit more in regards to Docker and VMs it will be great. At about $100 cheaper than Unraid, and with potentially superior foundations, it could steal Unraid's customer base.
     
  9. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    18,254
    Location:
    Canberra
    Unraid is a software layer that inserts itself between the filesystem and direct access to the drives.

Without direct access you're relying on Unraid to report faults and fault predictions to BTRFS, when BTRFS is more than capable of handling all of that itself (and is particularly anally retentive about it, as is ZFS): reporting drive failures and the like.

The end result is an inconsistent and unpredictable experience. Back when ZFS was the only major filesystem pioneering this, we saw everything from healthy drives reporting failures through to failed drives reporting as healthy: excessive scrub repairs, resilvering that didn't need to happen, etc.

I personally don't understand how/why Unraid became popular. It's a solution to problems that have already been solved, be it in hardware or software. But whatever.
     
  10. digizone

    digizone Member

    Joined:
    Jun 3, 2003
    Messages:
    339
    Location:
    Voyger1 is chasing me
Thanks. I think the attraction of Unraid, particularly for me, is the ability to throw any size drive at it, and also remove any size drive.

Is this something that RockStor is capable of? The docs don't seem too clear, at least to me.

If the answer is yes, then I'm pretty sure there will be a big swing from Unraid to RockStor over the next few years.
     
  11. OP
    OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,505
    Location:
    Brisbane
    BtrFS is a great file server right now for NAS type needs. As a backing store for VMs, it still has some work to do.

    ZFS is certainly a much better option for that right now. If you're hosting VMs on any platform, I recommend ZFS over BtrFS right now.

    BtrFS devs are working on the features that will make exporting block devices and large VM images better. ZFS's "zvol" features are largely missing from BtrFS right now, and that's what makes it a better choice in that specific regard. (Worth noting that this is the same reason why swap-on-BtrFS doesn't work very well at the moment).
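For reference, the zvol feature being described is a ZFS dataset exposed as a raw block device, which is what makes it handy as VM backing storage. A minimal sketch (pool name "tank" and dataset path are placeholders):

```shell
# Create a 100 GiB zvol; it appears as a block device rather
# than a mounted filesystem.
zfs create -V 100G tank/vmdisk0

# The block device shows up under /dev/zvol/ and can be handed
# straight to a hypervisor or exported over iSCSI:
ls -l /dev/zvol/tank/vmdisk0
```

BtrFS has no direct equivalent; the closest workaround is a raw file on the filesystem with copy-on-write disabled, which is exactly the gap being discussed.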

    There are project plan ideas in the BtrFS wiki about this, but it's not something we'll see quickly I don't think.

    Me either. FreeNAS is $0, and quite honestly craps all over Unraid for features, performance, and data reliability.

If I had to guess, I think it would be the fact that Unraid allows people to utilise lower end hardware and upgrade in an ad-hoc fashion, compared to ZFS, which requires additional disks to be added in larger numbers and doesn't like mixing and matching odd disk sizes/speeds as much. If anything, that makes it more appealing to really low end users (SOHO type setups, where even a few hundred bucks on extra disks is hard to scrape together).
     
  12. Alfred14

    Alfred14 Member

    Joined:
    Apr 14, 2013
    Messages:
    21
The thing that attracted me to Unraid was that I could take the mismatch of drives I had collected data on over the years (and my collection was ever expanding), give it some form of protection with the addition of one parity drive, and also easily expand the size of the array in the future, without initially buying a lot more space than I currently need just to plan for the future.

I always found that there was SO MUCH information to digest around FreeNAS, and in some ways that made it confusing. I'm still not sure what people using FreeNAS do when they need more storage space?
     
  13. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    18,254
    Location:
    Canberra
    Spawn more vdev.
     
  14. Alfred14

    Alfred14 Member

    Joined:
    Apr 14, 2013
    Messages:
    21
    But do you then have to make new shares that connect to the new vdev?
     
  15. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    18,254
    Location:
    Canberra
    Last edited: May 27, 2016
  16. OP
    OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,505
    Location:
    Brisbane
    Comparing it to legacy technologies:
    * Have 4 disks, make them a RAID5 set (first vdev)
    * Buy 4 more disks, make them another RAID5 set (second vdev)
    * Extend file system over both RAID5 sets transparently.

    The downside is that you have a minimum number of disks you need to buy in order to get effective use of your new space. Again, mostly aimed at larger companies, where buying disks in bunches is easy enough to do.
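The steps above can be sketched with the actual commands (pool name and device names are placeholders):

```shell
# Existing pool "tank" built from one 4-disk raidz (RAID5-like) vdev.
# Grow it by adding a second 4-disk raidz vdev; the pool stripes
# across both vdevs transparently, no reformat needed.
zpool add tank raidz sde sdf sdg sdh

# Confirm both vdevs are now part of the pool:
zpool status tank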

BtrFS is improved in this regard. If you want to add a single disk to a RAID5/6 set, you can. Just add the disk and run a "rebalance" on the file system, and it will spread over to the new disk. The downside is that the disk must be the same size or larger than the existing drives, and if it's larger you won't utilise the extra space.
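The single-disk BtrFS expansion looks something like this (device node and mount point are placeholders):

```shell
# Add one new disk to an existing multi-device BtrFS filesystem.
btrfs device add /dev/sdX /mnt/pool

# Rebalance so existing data and the RAID5/6 stripes spread
# across the new disk. This can take a long time on large arrays.
btrfs balance start /mnt/pool
btrfs balance status /mnt/pool
```

This is the flexibility advantage over ZFS being described: one disk at a time, no new vdev required.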

    BtrFS's "single" and "raid1" profiles will use all existing space on mis-matched drives, but the "raid1" system obviously means you get N/2 space out of your disks (less efficient than stripe+parity models).

In that respect, Unraid is superior when it comes to utilising the maximum amount of space from a bunch of mismatched disks. However, it doesn't have the realtime disk checksum/verify/scrub options that ZFS and BtrFS have, which puts your data at risk (particularly on new multi-TB drives).
     
  17. Alfred14

    Alfred14 Member

    Joined:
    Apr 14, 2013
    Messages:
    21
    Does all that stuff matter if you don't have ECC memory?

I'm sitting on the fence atm on whether I should continue down the Unraid path or head over to FreeNAS.
     
    Last edited: May 27, 2016
  18. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    18,254
    Location:
    Canberra
    Yes.

Because the actual drives themselves will silently corrupt your data over time. ZFS scrubs and BTRFS's equivalent will confirm your data is as it was written, and repair data that fails that check.
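For the curious, kicking off those checks is a one-liner on either filesystem (pool name and mount point are placeholders):

```shell
# ZFS: read every block, verify its checksum, and repair any
# bad copies from redundancy (mirror/raidz).
zpool scrub tank
zpool status tank          # shows scrub progress and repaired data

# BtrFS equivalent:
btrfs scrub start /mnt/pool
btrfs scrub status /mnt/pool
```

Running one of these on a schedule (e.g. monthly via cron) is what actually catches the silent corruption before it spreads into your backups.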
     
  19. OP
    OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,505
    Location:
    Brisbane
    In short, my data trustworthiness scale goes:

    Single disk < legacy RAID < next-gen FS without ECC < next-gen FS with ECC
     
  20. Smokin Whale

    Smokin Whale Member

    Joined:
    Nov 29, 2006
    Messages:
    5,183
    Location:
    Pacific Ocean off SC
The problem with FreeNAS is that it's a bitch if you have to fiddle around with VMs all the time. Yeah, you could get another machine to do the virtual machine stuff, but unless you have 10GbE you'd be sucking up valuable network bandwidth just spinning up a handful of VMs.

Hence why I'm tempted to go the Ubuntu route for ZFS once I get my act together and ditch Win10 for storage (Storage Spaces is just too slow unless I go to Server 2012 R2 with SSD tiering). Otherwise I might just save my time and grab a couple of 1TB SSDs in a JBOD.
     
