Overclockers Australia Forums

Old 29th May 2016, 3:51 PM   #61
NSanity
Member
 
Join Date: Mar 2002
Location: Canberra
Posts: 15,813

Quote:
Originally Posted by Smokin Whale View Post
The problem with FreeNAS is that it's a bitch if you have to fiddle around with VMs all the time. Yeah, you could get another machine to do the virtual machine stuff, but unless you have 10GbE you'd be sucking up valuable network bandwidth just spinning up a handful of VMs.
FreeNAS 10 has a hypervisor in it iirc.

Correction - 9.10 has it (which is based on FreeBSD 10, whatever).
NSanity is offline   Reply With Quote

Old 29th May 2016, 4:01 PM   #62
Smokin Whale
Member
 
Join Date: Nov 2006
Location: Pacific Ocean off SC
Posts: 5,139

Quote:
Originally Posted by NSanity View Post
FreeNAS 10 has a hypervisor in it iirc.

Correction - 9.10 has it (which is based on FreeBSD 10, whatever).
Ah. Didn't know that was a thing now.

http://www.freenas.org/blog/freenas-910-released/

Nice. Whilst I've used FreeNAS with relative ease in the past, BSD isn't really my forte. bhyve is pretty new though, isn't it?
Smokin Whale is offline   Reply With Quote
Old 29th May 2016, 4:02 PM   #63
NSanity
Member
 
Join Date: Mar 2002
Location: Canberra
Posts: 15,813

Quote:
Originally Posted by Smokin Whale View Post
Ah. Didn't know that was a thing now.

http://www.freenas.org/blog/freenas-910-released/

Nice. Whilst I've used FreeNAS with relative ease in the past, BSD isn't really my forte. bhyve is pretty new though, isn't it?
As far as I could see - yeah (2011). I know people were clamouring for *some* form of virtualisation inside FreeNAS for some time. I presume the biggest reason it's not KVM/oVirt is BSD vs GPL licensing bullshit.
NSanity is offline   Reply With Quote
Old 29th May 2016, 4:10 PM   #64
elvis Thread Starter
Old school old fool
 
Join Date: Jun 2001
Location: Brisbane
Posts: 28,122

BSD has had jails since forever.

In fact, that's what Linux's OpenVZ/LXC/Docker were all based on over a decade later.

KVM is specifically built for the Linux kernel. You can run QEMU without KVM acceleration, but it's slow. I have no idea what the equivalent is for BSD.
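
To make the KVM point concrete, here's a rough sketch (nothing FreeNAS-specific, and the disk image name is made up): on Linux, QEMU gets its speed from the /dev/kvm device, and without it falls back to pure software emulation (TCG), which is the slow path I mean.

Code:
import os
import subprocess

# Launch a VM with KVM hardware acceleration if the host kernel offers it;
# without it, QEMU falls back to its (much slower) TCG software emulation.
cmd = [
    "qemu-system-x86_64",
    "-m", "2048",                              # 2 GB of guest RAM
    "-drive", "file=guest.img,format=qcow2",   # made-up disk image
    "-nographic",
]
if os.path.exists("/dev/kvm"):
    cmd.insert(1, "-enable-kvm")

subprocess.run(cmd)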
elvis is offline   Reply With Quote
Old 29th May 2016, 4:27 PM   #65
NSanity
Member
 
Join Date: Mar 2002
Location: Canberra
Posts: 15,813

Quote:
Originally Posted by elvis View Post
BSD has had jails since forever.
Jails != virtuals though.

It's much harder to escape from a hypervisor guest than from a jail. Just ask Apple.
NSanity is offline   Reply With Quote
Old 29th May 2016, 4:31 PM   #66
elvis Thread Starter
Old school old fool
 
Join Date: Jun 2001
Location: Brisbane
Posts: 28,122

Quote:
Originally Posted by NSanity View Post
Jails != virtuals though.

It's much harder to escape from a hypervisor guest than from a jail. Just ask Apple.
Free/OpenBSD have a pretty good track record.

Apple give no fucks about security.
elvis is offline   Reply With Quote
Old 29th May 2016, 4:35 PM   #67
NSanity
Member
 
Join Date: Mar 2002
Location: Canberra
Posts: 15,813

Quote:
Originally Posted by elvis View Post
Free/OpenBSD have a pretty good track record.

Apple give no fucks about security.
OpenBSD is run by a literal fanatic though ;P Makes Linus look tame by all accounts.

Apple indeed gives no fucks about security. Or enterprise.

It does stand to reason though. My understanding is that jails are used for app isolation, but not full user isolation. Sidenote: I dislike that, just as with virtualisation (particularly early virtualisation), there seems to be a shitload of people using jails for the wrong reasons - or "just because".
NSanity is offline   Reply With Quote
Old 29th May 2016, 4:39 PM   #68
elvis Thread Starter
Old school old fool
 
Join Date: Jun 2001
Location: Brisbane
Posts: 28,122

Quote:
Originally Posted by NSanity View Post
OpenBSD is run by a literal fanatic though ;P Makes Linus look tame by all accounts.
A friend of mine works on OpenBSD and travels to Canada once a year to go hiking and climbing with de Raadt.

What you see on mailing lists and in the media isn't a true representation of the guy. And again, his track record speaks for itself. We all rely on stuff he's written to do our jobs every day.
elvis is offline   Reply With Quote
Old 31st May 2016, 10:54 AM   #69
Onthax
Member
 
Join Date: Nov 2003
Posts: 418

Actually, dedup in 2012 R2/2016 is done at the NTFS filesystem layer, not the Storage Spaces layer. It's also not inline, and it has its own set of limitations:

64TB max volume size
slow space release on delete (scheduled process)
max supported file size of 1TB

In the article you mentioned you can see you enable it on the volume.

It doesn't support ReFS at this time either.

Great for backups though.
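
For reference, turning it on really is a one-liner - these are the real Enable-DedupVolume/Get-DedupVolume cmdlets, just wrapped in Python here, and the D: volume is a made-up example:

Code:
import subprocess

# Enable Windows Server's post-process dedup on a volume, then check the
# savings. Enable-DedupVolume / Get-DedupVolume are the real cmdlets; the
# D: drive letter is only an example.
subprocess.run([
    "powershell", "-NoProfile", "-Command",
    'Enable-DedupVolume -Volume "D:" -UsageType Default',
])
subprocess.run([
    "powershell", "-NoProfile", "-Command",
    'Get-DedupVolume -Volume "D:" | Select-Object SavedSpace, SavingsRate',
])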



Quote:
Originally Posted by fad View Post
Dedupe is in the Storage Spaces layer, not the file system.

The only thing is, it isn't validated for production loads for anything except VDI.
__________________
Steam: Onthax
Diablo 3 : Onthax #6943
Onthax is offline   Reply With Quote
Old 4th June 2016, 10:22 AM   #70
elvis Thread Starter
Old school old fool
 
Join Date: Jun 2001
Location: Brisbane
Posts: 28,122

Quote:
Originally Posted by Onthax View Post
It's also not inline, and it has its own set of limitations
When it comes to inline/online dedup versus periodic/offline dedup, I much prefer the latter.

Dedup takes a lot of RAM to manage inline, and invariably misses a lot of dedupe opportunities if the system has been restarted recently - if the block hash table lives in RAM, it starts out empty again after a reboot.

Scheduling a crawl over random parts of the file system for a given time is a feature I'd like to see in new storage solutions. Where I work, we get a pretty good window of downtime from about Sunday evening through to Monday 9:00am. I'd love to be able to schedule some sort of random crawl+dedup for that many hours every week, and then it stops running when production ramps up.

The "duperemove" project that BtrFS recommends is an offline/scheduled approach to deduplication, and can use an SQLite3 database instead of RAM if you've got a big data store and not much RAM (point it at a spare SSD that's not in the same storage pool and it's not a problem).

Once blocks have been deduped, the benefits stick around - you get more cache efficiency (no need to cache identical blocks twice), the space is saved, and you're not wasting tonnes of RAM that could be speedy cache as a store for your block hash map.

On dedicated ZFS arrays, if I'm copying mass data to them for the first time (and I don't have to have the storage online quickly, which is rare because nobody plans anything), I'll turn on dedup, copy the data across, and then turn off dedup and reboot the unit to allow ARC to pick up the RAM as cache, but still keep the benefits of the initial dedup.

Those benefits obviously erode over time, as subsequent access and copies aren't deduped. ZFS not offering an offline dedup is one of its downsides.
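
The offline approach is simple enough to sketch. Something like this toy version of the hash-to-SQLite idea (nothing like duperemove's actual code - the block size, paths and schema are all invented; for scale, the oft-quoted ZFS rule of thumb is around 5GB of RAM per TB of deduped pool data, which is exactly the table this pushes out to disk):

Code:
import hashlib
import os
import sqlite3

# Toy sketch of offline dedup detection: crawl a tree, hash fixed-size
# blocks, and keep the block hash map in an on-disk SQLite database instead
# of RAM. A real tool would then issue clone/dedup ioctls for the matches;
# this one just totals them up.
BLOCK_SIZE = 128 * 1024  # 128 KiB, a typical dedup chunk size

db = sqlite3.connect("/mnt/spare-ssd/dedup-hashes.db")  # keep this off the pool being crawled
db.execute("CREATE TABLE IF NOT EXISTS blocks (hash BLOB PRIMARY KEY, path TEXT, offset INTEGER)")

def crawl(root):
    duplicate_bytes = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                f = open(path, "rb")
            except OSError:
                continue  # unreadable file, skip it
            with f:
                offset = 0
                while True:
                    block = f.read(BLOCK_SIZE)
                    if not block:
                        break
                    digest = hashlib.sha256(block).digest()
                    if db.execute("SELECT 1 FROM blocks WHERE hash = ?", (digest,)).fetchone():
                        duplicate_bytes += len(block)  # candidate for a dedup/clone ioctl
                    else:
                        db.execute("INSERT INTO blocks VALUES (?, ?, ?)", (digest, path, offset))
                    offset += len(block)
    db.commit()
    return duplicate_bytes

print("dedupable bytes:", crawl("/tank/data"))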

Last edited by elvis; 4th June 2016 at 10:25 AM.
elvis is offline   Reply With Quote
Old 15th June 2016, 4:32 PM   #71
elvis Thread Starter
Old school old fool
 
Join Date: Jun 2001
Location: Brisbane
Posts: 28,122

Quote:
Originally Posted by elvis View Post
MacOSX users, HFS+ is simply the second worst file system in existence today:
https://blog.barthe.ph/2014/06/10/hfs-plus-bit-rot/
Apple have announced at this year's WWDC that APFS will replace the horrid HFS+, and bring with it many next-gen filesystem benefits:

Apple's developer docs:
https://developer.apple.com/library/...roduction.html

Good write up at ArsTechnica:
http://arstechnica.com/apple/2016/06...w-file-system/

* Copy on Write (which they call "Crash Protection", which is a bit silly, but whatever)
* Nanosecond timestamps (mandatory for modern computers, whereas HFS+ can only do 1-second accuracy timestamps)
* Block level snapshots
* Filesystem level encryption (currently FileVault on HFS+ uses loopback files to do this, which is clunky).
* TRIM, IO coalescing and queue depth optimisations for better SSD performance
* "Copy" operations will use built in reflink/clone operations to be faster and not waste space
* Container/volume management built in
* Quotas and thin provisioning (which Apple call "space sharing")
* 64bit inodes
* Sparse file support
* RAID0 (stripe), RAID1 (mirror) and JBOD (span) modes available. No details yet on the specifics (if they can be layered, or if RAID1 is always 2 copies like BtrFS, or up to N disks like other systems).

The big missing feature for me is block-level checksumming, which hasn't been mentioned anywhere. I'm not sure at this stage if it's not part of the design, or just remains unmentioned, but that needs to be implemented at minimum, in my opinion. Compression is also missing, which is less of a concern, but is very nice to have.

The current beta code doesn't yet support case-insensitive operations (not a bad thing if you ask me, but I'm oldschool POSIX), and can't be shared via AFP (again, not a bad thing). That also means it can't be used for Time Machine backups over network. It also can't be installed on the main/boot partition.

This is available in macOS 10.12 Sierra and up only. No backward compatibility for 10.11 El Capitan and older announced.

Many folks are asking why they didn't use ZFS, BtrFS, HAMMER or other open source file systems. Cynically, I think Apple suffer from Not Invented Here Syndrome too frequently. But worth noting that they are targeting this at iOS, tvOS and watchOS as well (which to be fair are just marketing names for the same bits of core software). I'd dare say the goal is to keep memory requirements and IO way down, which may mean sacrificing some of the features ZFS/BtrFS will offer.
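
Back on the feature list for a second: the clone/reflink "copy" is the same trick BtrFS and XFS already expose on Linux, and it's worth a quick illustration. A sketch using Linux's FICLONE ioctl (APFS will have its own interface for this - nothing below is Apple code, and the filenames are made up):

Code:
import fcntl

# A reflink "copy": the new file shares all of its data blocks with the
# source until either side is modified, so the copy completes instantly
# and consumes no extra space. FICLONE is _IOW(0x94, 9, int) from
# <linux/fs.h>; both files must live on the same reflink-capable filesystem.
FICLONE = 0x40049409

with open("original.bin", "rb") as src, open("clone.bin", "wb") as dst:
    fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())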
elvis is offline   Reply With Quote
Old 21st June 2016, 12:06 PM   #72
elvis Thread Starter
Old school old fool
 
Join Date: Jun 2001
Location: Brisbane
Posts: 28,122

So far, the APFS design decisions are disappointing.

From Adam Leventhal's (ex Sun developer, DTrace co-author, and all around clever dude) blog: http://dtrace.org/blogs/ahl/2016/06/...rt5/#apfs-data

Quote:
Explicitly not checksumming user data is a little more interesting. The APFS engineers I talked to cited strong ECC protection within Apple storage devices. Both flash SSDs and magnetic media HDDs use redundant data to detect and correct errors. The engineers contend that Apple devices basically don’t return bogus data. NAND uses extra data, e.g. 128 bytes per 4KB page, so that errors can be corrected and detected. (For reference, ZFS uses a fixed size 32 byte checksum for blocks ranging from 512 bytes to megabytes. That’s small by comparison, but bear in mind that the SSD’s ECC is required for the expected analog variances within the media.) The devices have a bit error rate that’s low enough to expect no errors over the device’s lifetime. In addition there are other sources of device errors where a file system’s redundant check could be invaluable. SSDs have a multitude of components, and in volume consumer products they rarely contain end-to-end ECC protection leaving the possibility of data being corrupted in transit. Further, their complex firmware can (does) contain bugs that can result in data loss.
It appears that Apple are, at this point, specifically not implementing checksums in APFS. IMHO that disqualifies APFS as a true "next gen filesystem", if it relies on third party hardware/firmware to implement data integrity checks.

Adam goes on further to note:

Quote:
The Apple folks were quite interested in my experience with regard to bit rot (aging data silently losing integrity) and other device errors. I’ve seen many instances where devices raised no error but ZFS (correctly) detected corrupted data. Apple has some of the most stringent device qualification tests for its vendors; I trust that they really do procure the best components. Apple engineers I spoke with claimed that bit rot was not a problem for users of their devices, but if your software can’t detect errors then you have no idea how your devices really perform in the field. ZFS has found data corruption on multi-million dollar storage arrays; I would be surprised if it didn’t find errors coming from TLC (i.e. the cheapest) NAND chips in some of Apple’s devices. Recall the (fairly) recent brouhaha regarding storage problems in the high capacity iPhone 6. At least some of Apple’s devices have been imperfect.

As someone who has data he cares about on a Mac, who has seen data lost from HFS, and who knows that even expensive, enterprise-grade equipment can lose data, I would gladly sacrifice 16 bytes per 4KB–less than 1% of my device’s size.
I couldn't agree more. A very disappointing viewpoint from Apple engineers so far. I sincerely hope they come around to understanding exactly why this matters.
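
For anyone wondering what's actually at stake here, end-to-end checksumming is this simple. A toy sketch of the ZFS-style idea (checksum stored out-of-band and verified on every read, so a lying device gets caught; the layout and block size are invented for the example):

Code:
import hashlib
import json
import os

# Toy sketch of ZFS-style end-to-end checksumming: store a 32-byte hash
# out-of-band on every write, verify it on every read, and raise if the
# device quietly returned different bytes to what was written.
BLOCK_SIZE = 4096

def _load_sums(sum_file):
    if os.path.exists(sum_file):
        with open(sum_file) as f:
            return json.load(f)
    return {}

def write_block(data_file, sum_file, index, block):
    assert len(block) == BLOCK_SIZE
    mode = "r+b" if os.path.exists(data_file) else "w+b"
    with open(data_file, mode) as f:
        f.seek(index * BLOCK_SIZE)
        f.write(block)
    sums = _load_sums(sum_file)
    sums[str(index)] = hashlib.sha256(block).hexdigest()
    with open(sum_file, "w") as f:
        json.dump(sums, f)

def read_block(data_file, sum_file, index):
    with open(data_file, "rb") as f:
        f.seek(index * BLOCK_SIZE)
        block = f.read(BLOCK_SIZE)
    if hashlib.sha256(block).hexdigest() != _load_sums(sum_file)[str(index)]:
        raise IOError("bit rot: block %d failed its checksum" % index)
    return block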

Last edited by elvis; 21st June 2016 at 12:10 PM.
elvis is offline   Reply With Quote
Old 21st June 2016, 1:14 PM   #73
Aetherone
Member
 
Join Date: Jan 2002
Location: Adelaide, SA
Posts: 8,357

Quote:
Originally Posted by elvis View Post
A very disappointing viewpoint from Apple engineers.
Apple. If we didn't steal invent it, it's crap. If we did, it's perfection incarnate.
<fingers_in_ears>LALALALALALALALA</fingers_in_ears>
Aetherone is offline   Reply With Quote
Old 21st June 2016, 2:14 PM   #74
shadowman
Member
 
Join Date: Aug 2003
Location: Perth
Posts: 2,715

I love this bit:

Quote:
The engineers contend that Apple devices basically don't return bogus data
Lol, alright then. They are saying Apple devices never produce erroneous data? Bullcrap.
__________________
"Icecream is gonna save the day" - Muscles
shadowman is offline   Reply With Quote
Old 21st June 2016, 2:36 PM   #75
theSeekerr
Member
 
Join Date: Jan 2010
Location: Prospect SA
Posts: 2,385

Quote:
Originally Posted by shadowman View Post
Lol, alright then. They are saying Apple devices never produce erroneous data? Bullcrap.
This perfection of the hardware, of course, is why HFS+ has always behaved completely perfectly and doesn't need replacement... oh wait
__________________
Lucis+Umbra Blog - Photography by c.j. kerr
theSeekerr is offline   Reply With Quote