TBH I've no idea, zero care level for Apple anything. I just saw it listed as 'new' on the list. Maybe it's not really new, or there's another codebase option for ZFS on Mac.
I last tested ZFS on Mac some time around 2017, I think? With case sensitivity turned off in the ZFS options it worked pretty well. Performance was all there. I did some very basic load testing on it, and things worked well. Tried the usual trashing of devices and recovering, all good. Passed it over to production, and they just freaked out. "Too hard, what are these command line options, ZOMG it's not HFS+ we can't even, how do we expect unpaid film interns to deal with this???" Fuck yeah, gotta love the media industry.
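(For anyone curious, the knob I mean is the per-dataset 'casesensitivity' property, and it can only be set when the dataset is created. Rough sketch below; the pool/dataset names are just made up for the example.)

    # create a dataset that ignores case, so Mac apps behave like they would on HFS+
    zfs create -o casesensitivity=insensitive tank/macshare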
Fucken, why would anyone choose HFS over anything? I'd rather use ReiserFS with an infinite supply of wives to kill instead.
"But this is the way we've always done it." To which I reply with: "Yes, that's why when my kids get sick, I bleed them with leeches instead of sending them to a doctor".
<insert best Samuel L. Jackson voice> Suck my what, ZFS!!! Am I doing this right, or did I just get pwned by ZFS? After thinking about it for more than a second, it's the equivalent of RAID6, not RAID5, so that works out now, doesn't it. It might be time to drag my flu-ridden ass out of bed and get coffee, I think.
Yeah, Z2 is double parity, similar to (but not) RAID6, and Z3 is triple parity. For only 4 drives, you either go Z1 single parity (RAID5-style) for less protection/more space, or a mirror setup, which gives you the same usable space as Z2 with similar protection, but better performance.

You can alternatively (or in addition to the above) add extra protection on a per-volume basis. Say you pick Z1 for the extra space, but split your volumes up with the 'copies' tuneable set to your required level of resilience:

/pool/important_stuff (copies=2)
/pool/linux_isos (copies=1, the default)

The important stuff will then have two copies of everything saved on disk, so you get extra protection, while not having to double up on all the re-downloadable stuff. ZFS is smart enough to put the extra copies on different physical disks where it can.
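A minimal sketch of what that looks like on the command line, assuming a pool called 'tank' and made-up dataset/drive names:

    # 4-drive single-parity pool
    zpool create tank raidz1 sda sdb sdc sdd

    # keep two copies of everything important
    zfs create tank/important_stuff
    zfs set copies=2 tank/important_stuff

    # re-downloadable stuff stays at the default copies=1
    zfs create tank/linux_isos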
Record size is a tuneable per volume as well (not per pool, as that calculator implies). It depends on what you're storing: lots of tiny files = smaller record size (less wasted space); mostly large files = larger record size (more performance, less overhead/metadata storage). I'll typically run a 1MB record size for video/audio pools and 128KB for general file/document storage. And if you're going to be running a volume with, say, VM disk image files, pick a record size that matches the image file's sector/cluster/page size.
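For reference, it's a one-liner per dataset; the names here are just examples, and note that a changed recordsize only applies to files written after the change:

    # large records for big sequential media files
    zfs set recordsize=1M tank/media

    # the 128K default is fine for general documents
    zfs set recordsize=128K tank/documents

    # match the guest filesystem's cluster size for VM images, e.g. 64K
    zfs set recordsize=64K tank/vm_images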
That's lots of good FS info, not to mention the management strategies - cheers. I've lived a fairly sheltered life behind hardware & mdadm RAID.
Yeah, modern file systems aren't set-and-forget like FAT and NTFS. They can be, but you miss out on the best bits. The ZFS defaults all work just fine. You also want to set the 'ashift' tuneable at pool creation (it's immutable once the pool is created) to properly align sectors, which is important for SSDs and 4Kn HDDs.
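Something like this at creation time (the pool layout and drive names are just an example); ashift=12 means 4KiB sectors:

    # set once at pool creation, can't be changed afterwards
    zpool create -o ashift=12 tank mirror sda sdb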
Bcachefs is still being developed, slowly. It's a one-man band currently. It started as a cache to sit in front of other file systems, but the developer's dissatisfaction with that approach has led it to become a full file system all of its own. It's about five years in now, but progressing. This week the dev posted updates about his approach to Reed-Solomon encoding (similar in nature to how RAID5/6 works, but slightly more flexible). What's exciting about his approach is that it avoids the RAID5 write hole problem (which ZFS does already, and BtrFS does not), yet it doesn't fragment data on an updated stripe, which ZFS does. This leads to less file system fragmentation long term, especially on large, busy volumes.

The dev mostly posts updates via Patreon, and the file system isn't anywhere near production ready yet. But it is on a nice trajectory towards merging the best ZFS and BtrFS features with fewer of the limitations of either. https://www.patreon.com/posts/erasure-coding-22703995

I often hear people ask what the point of caching or N-tier read/write caching in modern filesystems is any more, with flash on target to match spindle for dollars per TB. But it's worth remembering that there will *always* be performance tiers in anything, and even now we're seeing the rise of "large but slow" versus "small but fast" flash (and I'm not talking SATA vs NVMe, but rather the flash cells themselves). ZFS's ZIL/SLOG and bcachefs' native tiered caching will still have a lot to offer in years to come.
BTRFS users - what are you using for monitoring/notifications and maintenance? I'm rebuilding my home server, which I was previously monitoring with some cobbled together scripts to pull errors from btrfs device stats and dmesg, but I'm wondering if I'm missing a more comprehensive/robust solution. Surely pretty much everyone running BTRFS wants to get an email if their drives start throwing errors?
For work I write my own scripts. For home, any array that can't meet its minimum requirements will go read-only. My RAID1 setup just pauses all writes if a drive goes offline, which is enough to trigger one of my family members to yell at me. There's new stuff in Linux 5.10 and up, I believe, that's exposing more BtrFS stuff to userspace, which will make reporting and alerting easier.
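For what it's worth, the guts of a home-rolled check is roughly this kind of thing; the mountpoint and recipient are placeholders, and it leans on 'btrfs device stats' printing a counter name and value per line:

    #!/bin/sh
    # mail an alert if any btrfs error counter on the pool is non-zero
    MNT=/mnt/pool
    ERRORS=$(btrfs device stats "$MNT" | awk '$2 != 0')
    if [ -n "$ERRORS" ]; then
        echo "$ERRORS" | mail -s "btrfs errors on $(hostname): $MNT" root
    fi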
Since this thread covers multiple file systems: for reference, the ZFS way of handling events is 'zed', the ZFS Event Daemon. Out of the box it comes with a bunch of scripts to handle the events you want to know about (drive failures of course, but also scrub notifications, pool imports, config changes, etc.; you can even set up events on data changes, if you're sadistic and looove reading logs I guess), and it supports setting the error lights on drive arrays. It comes with email support naturally (using whatever mail software you have on the system), plus syslog, and it also includes support for Slack and Pushbullet notifications. Of course, being script based, you can write in support for whatever else you may want. I use 'Pushover' for notifications; it's a simple web-based API, so it was easy to add support for that.

The defaults are sane: syslog, and email to 'root' for health-related events. So if you already have email forwarding/syslog monitoring configured, there's no config to do.
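If anyone wants to try it, the basic setup is just a handful of variables in /etc/zfs/zed.d/zed.rc; exact variable names can differ a bit between OpenZFS versions, so check the copy shipped with yours:

    # /etc/zfs/zed.d/zed.rc
    ZED_EMAIL_ADDR="root"            # where health emails go
    ZED_NOTIFY_VERBOSE=0             # 1 = also notify on successful events like finished scrubs
    ZED_PUSHBULLET_ACCESS_TOKEN=""   # fill in to enable Pushbullet notifications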
I used this script here, works: https://gist.github.com/petervanderdoes/bd6660302404ed5b094d DOH, you meant btrfs, oh well.
I just send 'btrfs dev stats' to my email once a day lol, with the last scrub result appended on the end.
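Literally just a daily cron job along these lines; the mountpoint is a placeholder:

    #!/bin/sh
    # drop into /etc/cron.daily/ to mail device stats plus the last scrub status
    MNT=/mnt/pool
    {
        btrfs device stats "$MNT"
        echo
        btrfs scrub status "$MNT"
    } | mail -s "btrfs daily report for $MNT" root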