Overclockers Australia Forums

Old 1st May 2017, 3:33 PM   #286
Doc-of-FC
Member
 
Doc-of-FC's Avatar
 
Join Date: Aug 2001
Location: Canberra
Posts: 2,618
Default

Quote:
Originally Posted by Quadbox View Post
There's a few major patches for btrfs raid5/6 lined up for linux 4.12, finally. A fix for the scrub data-loss bug and a few other things

Eight months from the bug hitting the mailing list to a fix in a stable kernel, ouch! Refer to my earlier post referencing the bug: https://www.mail-archive.com/linux-b.../msg55161.html
Doc-of-FC is offline   Reply With Quote

Old 1st May 2017, 6:46 PM   #287
elvis Thread Starter
Old school old fool
 
elvis's Avatar
 
Join Date: Jun 2001
Location: Brisbane
Posts: 28,712
Default

Quote:
Originally Posted by Doc-of-FC View Post
Eight months from the bug hitting the mailing list to a fix in a stable kernel, ouch! Refer to my earlier post referencing the bug: https://www.mail-archive.com/linux-b.../msg55161.html
Btrfs development is pretty interesting. There certainly doesn't seem to be enormous Western corporate support for it; most of the dev work comes from Chinese email addresses.

Facebook has been the most prominent Western sponsor of the project, but they have no interest in RAID5/6 (they rely on redundancy between nodes, not within an individual host).

All of that combined is a bit of a bummer. ZFS is the current king of filesystems, no doubt, but it has prominent design flaws too (as does everything). Btrfs goes part of the way to solving some of these, but its parity-based RAID modes in particular feel half-finished and are missing some really good ideas.

I'd like to see the RAID5/6 write hole plugged, plus the ability to choose data-to-parity ratios, which would allow not only different-sized devices in RAID5/6 setups but also volumes that aren't locked to their day-one size. Btrfs already has a rebalance option that solves the latter, but it misses the vital middle step of letting you choose ratios trivially, managed at the filesystem level rather than manually by whoever set it up.
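
To put rough numbers on why choosing the ratio matters, here's a back-of-envelope sketch in Python. The drive sizes are made up, and the ratio-based figure is an upper bound (a real allocator still can't put two chunks of the same stripe on one device), but it shows why not being tied to the smallest member is such a win:

Code:
# Hypothetical mismatched drives, sizes in TB.
disks_tb = [6, 6, 4, 4, 3, 2]

# Classic fixed-width RAID5: every member only contributes as much as
# the smallest disk, and one disk's worth of space goes to parity.
smallest = min(disks_tb)
raid5_usable = smallest * (len(disks_tb) - 1)

# Ratio-based allocation: pick a data:parity ratio (3:1 here) and keep
# applying it to whatever raw space exists, however it's spread around.
data, parity = 3, 1
raw = sum(disks_tb)
ratio_usable = raw * data / (data + parity)

print(f"raw space:            {raw} TB")                              # 25 TB
print(f"classic RAID5 usable: {raid5_usable} TB")                     # 10 TB
print(f"3:1 ratio usable:     {ratio_usable:.2f} TB (upper bound)")   # 18.75 TB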

A great opportunity missed, and entirely the result of no real co-ordinated development effort.
__________________
Play old games with me!
elvis is offline   Reply With Quote
Old 2nd May 2017, 7:13 AM   #288
Doc-of-FC
Member
 
Doc-of-FC's Avatar
 
Join Date: Aug 2001
Location: Canberra
Posts: 2,618
Default

Quote:
Originally Posted by elvis View Post
Btrfs development is pretty interesting. There certainly doesn't seem to be enormous Western corporate support for it; most of the dev work comes from Chinese email addresses.
Most of the development work I've found has been coming from Fujitsu / China. For example, their talk at LinuxCon Europe 2014:

https://www.fujitsu.com/jp/documents...4-takeuchi.pdf

I've got another ZFS array to build in the next few weeks and haven't found anything to replace it with yet.

I'm still actively watching bcachefs, though. It's the only thing that might rival ZFS, and it will probably kill off Btrfs.

http://bcachefs.org/
https://www.patreon.com/bcachefs
Doc-of-FC is offline   Reply With Quote
Old 2nd May 2017, 8:11 AM   #289
elvis Thread Starter
Old school old fool
 
elvis's Avatar
 
Join Date: Jun 2001
Location: Brisbane
Posts: 28,712
Default

Quote:
Originally Posted by Doc-of-FC View Post
I'm still actively watching bcachefs, though. It's the only thing that might rival ZFS, and it will probably kill off Btrfs.

http://bcachefs.org/
https://www.patreon.com/bcachefs
I've been keeping an eye on that. Interesting story: it started as a cache-only system (bcache), and the dev got sick of everyone else doing stupid things in filesystem land, so he turned it into a complete filesystem.

From what I can tell, erasure coding / Reed-Solomon isn't done yet (potentially not even started). I'm hoping my wishlist above gets implemented, and that you can choose your ratios right down to the device layer, i.e. tell the filesystem you want, say, a 3:1 ratio of data to parity, and it just keeps that ratio per extent group regardless of how many physical disks there are.

The upside is that you could keep adding mismatched drives and still get the benefits of a RAID5-style setup without being constrained by the smallest device. Likewise, losing a drive would only mean rebuilding the affected data, rather than rebuilding across the entire filesystem like legacy RAID setups.
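
As a rough sketch of the allocation policy I'm imagining (not how Btrfs or bcachefs actually allocate today; the device names, sizes and the most-free-space heuristic are just assumptions), each extent group takes a fixed 3 data + 1 parity layout from whichever four devices currently have the most free space:

Code:
# Toy allocator: every extent group is 3 data chunks + 1 parity chunk,
# always placed on the four devices that currently have the most free
# space.  Purely illustrative.

CHUNK_GB = 1
DATA, PARITY = 3, 1
WIDTH = DATA + PARITY

# Mismatched drives, free space in GB (hypothetical sizes).
free = {"disk_a": 6000, "disk_b": 6000, "disk_c": 4000,
        "disk_d": 4000, "disk_e": 3000, "disk_f": 2000}

groups = []  # each entry: the set of devices holding one extent group

while True:
    # The WIDTH devices with the most free space right now.
    candidates = sorted(free, key=free.get, reverse=True)[:WIDTH]
    if free[candidates[-1]] < CHUNK_GB:
        break  # fewer than 4 devices still have room for a chunk
    for dev in candidates:
        free[dev] -= CHUNK_GB
    groups.append(frozenset(candidates))

print(f"extent groups placed: {len(groups)}")
print(f"usable data capacity: {len(groups) * DATA * CHUNK_GB / 1000:.2f} TB")
print(f"space left stranded:  {sum(free.values())} GB")

# Losing a drive only means rebuilding the groups that touched it:
lost = "disk_c"
affected = sum(1 for g in groups if lost in g)
print(f"groups to rebuild after losing {lost}: {affected} of {len(groups)}")

Run it and the last line shows that losing one device only touches the extent groups that happened to use it, which is the "only rebuild the affected data" behaviour I'm after.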

Setting up a Patreon is a great idea too. In this day and age, developer time is the real shortage. The old method of waiting for corporates to fund full-time positions seems a bit moot now that we have crowd-funding for every other stupid thing on the planet. A good quality filesystem is worth a few bucks.
__________________
Play old games with me!
elvis is offline   Reply With Quote
Old 10th May 2017, 11:19 PM   #290
Doc-of-FC
Member
 
Doc-of-FC's Avatar
 
Join Date: Aug 2001
Location: Canberra
Posts: 2,618
Default

Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
DS0                                     52.4G  43.4T      0  2.21K      0   100M
  mirror                                6.53G  5.43T      0    125      0  7.57M
    gptid/shelf0-slot0                      -      -      0     69      0  7.58M
    gptid/shelf0-slot8                      -      -      0     69      0  7.58M
    gptid/shelf0-slot16                     -      -      0     70      0  7.58M
  mirror                                6.54G  5.43T      0    119      0  7.49M
    gptid/shelf0-slot1                      -      -      0     64      0  7.50M
    gptid/shelf0-slot9                      -      -      0     64      0  7.50M
    gptid/shelf0-slot17                     -      -      0     64      0  7.50M
  mirror                                6.53G  5.43T      0    122      0  7.44M
    gptid/shelf0-slot2                      -      -      0     64      0  7.45M
    gptid/shelf0-slot10                     -      -      0     64      0  7.45M
    gptid/shelf0-slot18                     -      -      0     64      0  7.45M
  mirror                                6.58G  5.43T      0    119      0  7.49M
    gptid/shelf0-slot3                      -      -      0     60      0  7.49M
    gptid/shelf0-slot11                     -      -      0     60      0  7.49M
    gptid/shelf0-slot19                     -      -      0     60      0  7.49M
  mirror                                6.55G  5.43T      0    216      0  8.63M
    gptid/shelf0-slot4                      -      -      0     96      0  8.63M
    gptid/shelf0-slot12                     -      -      0     97      0  8.63M
    gptid/shelf0-slot20                     -      -      0     97      0  8.63M
  mirror                                6.53G  5.43T      0    197      0  7.87M
    gptid/shelf0-slot5                      -      -      0     88      0  7.87M
    gptid/shelf0-slot13                     -      -      0     90      0  7.87M
    gptid/shelf0-slot21                     -      -      0     89      0  7.87M
  mirror                                6.52G  5.43T      0    178      0  7.72M
    gptid/shelf0-slot6                      -      -      0     87      0  7.72M
    gptid/shelf0-slot14                     -      -      0     87      0  7.72M
    gptid/shelf0-slot22                     -      -      0     87      0  7.72M
  mirror                                6.60G  5.43T      0    126      0  7.59M
    gptid/shelf0-slot7                      -      -      0     66      0  7.59M
    gptid/shelf0-slot15                     -      -      0     66      0  7.59M
    gptid/shelf0-slot23                     -      -      0     66      0  7.59M
logs                                        -      -      -      -      -      -
  mirror                                 462M  22.8G      0  1.03K      0  38.4M
    gptid/head0-slot2                       -      -      0  1.03K      0  38.4M
    gptid/head0-slot3                       -      -      0  1.03K      0  38.4M
cache                                       -      -      -      -      -      -
  gptid/head0-slot4                     41.5G   406G      0    179      0  21.9M
--------------------------------------  -----  -----  -----  -----  -----  -----
It begins; not long until 50% utilisation.

Lots and lots of random I/O.

R610, dual E5640, 192GB RAM
3 x 480GB Intel DC S3520, 2 x resized (over-provisioned via ISDCT) for SLOG
24 x 6TB WD Red Pro in 3-way mirrors
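
For anyone sanity-checking the free-space figure: 24 x 6TB drives in 3-way mirrors is eight vdevs, each contributing roughly one drive's worth of usable space. Quick back-of-envelope below (approximate only; ignores ZFS metadata and slop overhead):

Code:
# Sanity-check the pool's free space: 24 x 6TB drives in 3-way mirrors
# means 8 vdevs, each worth roughly one drive of usable space.
# Approximate -- ignores ZFS metadata and slop overhead.

DRIVES = 24
MIRROR_WAY = 3
DRIVE_TB = 6                                 # marketing terabytes (10**12 bytes)

vdevs = DRIVES // MIRROR_WAY                 # 8 three-way mirror vdevs
tib_per_vdev = DRIVE_TB * 10**12 / 2**40     # ~5.46 TiB raw per vdev

print(f"vdevs:               {vdevs}")
print(f"per-vdev (approx):   {tib_per_vdev:.2f} TiB")           # reported as 5.43T per mirror
print(f"pool total (approx): {vdevs * tib_per_vdev:.1f} TiB")   # reported as 43.4T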
Doc-of-FC is offline   Reply With Quote
Old 11th May 2017, 7:25 AM   #291
NSanity
Member
 
NSanity's Avatar
 
Join Date: Mar 2002
Location: Canberra
Posts: 15,763
Default

I used to use the 200GB 37xx's, resized down to 16 or 32GB.

Better unbuffered 4K performance and longevity than the 35xx's.
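
For anyone wondering why such a small slice is enough: the SLOG only holds sync writes that haven't been flushed with the current transaction group yet, so a few seconds of worst-case ingest is all it ever accumulates. Rough numbers below; the 10GbE front end and the safety factor are assumptions, and 5 seconds is the default ZFS txg sync interval:

Code:
# Rough SLOG sizing rule of thumb.  Assumed figures, not measurements.
line_rate_gbit = 10       # assume a 10GbE front end saturated with sync writes
txg_seconds = 5           # default ZFS transaction group sync interval
safety_factor = 2         # keep a couple of txgs worth in flight

ingest_gb_per_sec = line_rate_gbit / 8
slog_needed_gb = ingest_gb_per_sec * txg_seconds * safety_factor

print(f"worst-case sync ingest: {ingest_gb_per_sec:.2f} GB/s")
print(f"SLOG size needed:       ~{slog_needed_gb:.1f} GB")   # ~12.5 GB, so 16GB is plenty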

But nice stuff :P
NSanity is online now   Reply With Quote
Old 12th May 2017, 3:49 PM   #292
Doc-of-FC
Member
 
Doc-of-FC's Avatar
 
Join Date: Aug 2001
Location: Canberra
Posts: 2,618
Default

linear 64k read:


random 64k read (1 thread):


random 64k read (4 threads):
Doc-of-FC is offline   Reply With Quote
Old 12th May 2017, 3:51 PM   #293
davros123
Member
 
Join Date: Jun 2008
Posts: 2,720
Default

Nice.

Now, make it go like a Cylon.
__________________
Want a NAS? You may find my ESXi/Solaris ZFS NAS build thread of interest.
Quote:
Originally Posted by Stanza View Post
yeah well I just reported my own post...ferk....
Quote:
Originally Posted by Blinky View Post
If you have become content with the size of your e-penis, sticking clear of rack mounted stuff will save you heaps of $$$.
davros123 is online now   Reply With Quote
Old 12th May 2017, 8:47 PM   #294
NSanity
Member
 
NSanity's Avatar
 
Join Date: Mar 2002
Location: Canberra
Posts: 15,763
Default

Quote:
Originally Posted by davros123 View Post
Nice.

Now, make it go like a Cylon.
now do a skid
NSanity is online now   Reply With Quote
Old 14th May 2017, 9:09 PM   #295
elvis Thread Starter
Old school old fool
 
elvis's Avatar
 
Join Date: Jun 2001
Location: Brisbane
Posts: 28,712
Default

Turn your speakers DOWN

6x Gluster nodes set up in mirrored pairs that are then striped over. You can see the IO bouncing around between the mirrored pairs.
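
For anyone curious what that looks like underneath, here's a toy illustration of a distribute-over-replicate layout. The node names are made up and the hash is a stand-in, not Gluster's real DHT, but it shows how each file lands on one mirrored pair (written to both members) while different files scatter across different pairs:

Code:
# Toy illustration of a distribute-over-replicate Gluster-style layout:
# three mirrored pairs, each file hashed to one pair and written to
# both members of that pair.  Not Gluster's real DHT hash -- just a
# stand-in to show why I/O hops between pairs per file.

import hashlib

REPLICA_PAIRS = [                      # hypothetical node names
    ("node1", "node2"),
    ("node3", "node4"),
    ("node5", "node6"),
]

def pick_pair(filename: str):
    """Hash the filename to choose which mirrored pair stores it."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    return REPLICA_PAIRS[int(digest, 16) % len(REPLICA_PAIRS)]

for name in ["movie.mkv", "photo.jpg", "backup.tar", "notes.txt"]:
    pair = pick_pair(name)
    # a write lands on both members of the chosen pair (the mirror),
    # while different files scatter across different pairs (the distribute layer)
    print(f"{name:<12} -> {pair[0]} + {pair[1]}")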

__________________
Play old games with me!
elvis is offline   Reply With Quote
Old 15th May 2017, 8:04 PM   #296
ae00711
Member
 
ae00711's Avatar
 
Join Date: Apr 2013
Posts: 1,004
Default

Quote:
Originally Posted by elvis View Post
Turn your speakers DOWN

6x Gluster nodes set up in mirrored pairs that are then striped over. You can see the IO bouncing around between the mirrored pairs.

that is.......beautiful
__________________
I LOVE CHING LIU
SMOKING IS UN-AUSTRALIAN
I prefer email to PM!

ae00711 is offline   Reply With Quote
Old 16th May 2017, 8:07 AM   #297
Aetherone
Member
 
Aetherone's Avatar
 
Join Date: Jan 2002
Location: Adelaide, SA
Posts: 8,379
Default

Quote:
Originally Posted by elvis View Post
Needs a proper soundtrack. I think this:

https://youtu.be/NjxNnqTcHhg?t=30
Aetherone is offline   Reply With Quote