Overclockers Australia Forums

Overclockers Australia Forums > Specific Hardware Topics > Storage & Backup
Old 10th January 2017, 7:55 PM   #151
elvis Thread Starter
Old school old fool
 
elvis's Avatar
 
Join Date: Jun 2001
Location: Brisbane
Posts: 27,485
Default

Quote:
Originally Posted by yanman View Post
Anyone playing around with Ceph?
I think that's a bit of a different topic. Clustered file systems like Ceph, Gluster, PVFS, OrangeFS and others are a whole other level of complexity, outside the realm of single-system next-gen on-disk filesystems.
elvis is offline   Reply With Quote

Old 11th January 2017, 4:25 PM   #152
Biel_Tann
Member
 
Join Date: Nov 2004
Posts: 217
Default

Quote:
Originally Posted by NSanity View Post
As discussed many, many times before: RAM matters.

It's absolutely infuriating that all these people come out of the woodwork with their anecdotes that "you don't need ECC RAM", with absolutely nothing to back that statement up. The people who wrote this filesystem, the people who are directly connected to developing this filesystem, have stated time and time again that you need ECC RAM.
Matt Ahrens disagrees somewhat - http://arstechnica.com/civis/viewtop...3271#p26303271

- although others disagree with him.

I think the baseline point is: if you value your data and your time, pay the extra and get ECC RAM for any filesystem, including ZFS.

Last edited by Biel_Tann; 11th January 2017 at 4:28 PM. Reason: link added
Biel_Tann is offline   Reply With Quote
Old 11th January 2017, 5:49 PM   #153
elvis Thread Starter
Old school old fool
 
elvis's Avatar
 
Join Date: Jun 2001
Location: Brisbane
Posts: 27,485
Default

Quote:
Originally Posted by Biel_Tann View Post
Matt Ahrens disagrees somewhat - http://arstechnica.com/civis/viewtop...3271#p26303271

- although others disagree with him.

I think the baseline point is: if you value your data and your time, pay the extra and get ECC RAM for any filesystem, including ZFS.
I think the basis of his argument was cost: if resources are limited, invest in clean power (e.g. an online UPS) before ECC.

I would note, however, that the linked post was from 2014. In early 2017 the cost overhead of ECC is negligible, so the cost argument is somewhat watered down.

Speaking entirely for myself: if it were business/critical data, I would absolutely insist on ECC, without exception. As I mentioned before, I'm running non-ECC at home, mostly because I'm a tight-arse running ancient hardware. When the day comes to upgrade my home file server, even for my own "unimportant" data, I'll be considering ECC.
elvis is offline   Reply With Quote
Old 11th January 2017, 6:15 PM   #154
MUTMAN
Member
 
MUTMAN's Avatar
 
Join Date: Jun 2001
Location: brisvegas
Posts: 3,907
Default

I don't get why you haven't already gone for a kickarse home server, considering your job and skills... ???

HP MicroServer with a CPU upgrade, an extra 8GB stick of RAM (ECC, naturally) and an iLO licence...
I don't remember the total cost, but under $650.
I didn't need the upgrades, to be honest, but they sure are nice to have. It's a hell of a home server.
MUTMAN is offline   Reply With Quote
Old 11th January 2017, 6:24 PM   #155
elvis Thread Starter
Old school old fool
 
elvis's Avatar
 
Join Date: Jun 2001
Location: Brisbane
Posts: 27,485
Default

Quote:
Originally Posted by MUTMAN View Post
I don't get why you haven't already gone for a kickarse home server, considering your job and skills... ???
Because I buy too many arcade machines and retro consoles.
elvis is offline   Reply With Quote
Old 11th January 2017, 6:38 PM   #156
MUTMAN
Member
 
MUTMAN's Avatar
 
Join Date: Jun 2001
Location: brisvegas
Posts: 3,907
Default

Understood. No arguments there
MUTMAN is offline   Reply With Quote
Old 11th January 2017, 6:51 PM   #157
NSanity
Member
 
NSanity's Avatar
 
Join Date: Mar 2002
Location: Canberra
Posts: 15,382
Default

Quote:
Originally Posted by Biel_Tann View Post
Matt Ahrens disagrees somewhat - http://arstechnica.com/civis/viewtop...3271#p26303271

- although others disagree with him.

I think the baseline point is: if you value your data and your time, pay the extra and get ECC RAM for any filesystem, including ZFS.
So let's see.

There is literal fucking evidence in this thread that not using ECC can result in data corruption - and yet you're still arguing?

Go run XP already...
NSanity is offline   Reply With Quote
Old 11th January 2017, 7:19 PM   #158
atmo
Member
 
atmo's Avatar
 
Join Date: Apr 2003
Location: Geelong
Posts: 1,056
Default

Quote:
Originally Posted by MUTMAN View Post
HP MicroServer with a CPU upgrade, an extra 8GB stick of RAM (ECC, naturally) and an iLO licence...
I don't remember the total cost, but under $650.
I didn't need the upgrades, to be honest, but they sure are nice to have. It's a hell of a home server.
An ML10V2 and an extra 16GB of ECC (2x Kingston KVR16E11/8) cost me just on $400 and makes a great FreeNAS box.
atmo is offline   Reply With Quote
Old 11th January 2017, 10:34 PM   #159
CirCit
Member
 
Join Date: Apr 2002
Posts: 115
Default

I've skimmed through this thread looking for Storage Spaces and ReFS info.

Nothing seems to touch on what I'm looking for, so I'll ask.

I'm after a replacement for my 2008 R2 software RAID.
I want to use 2016, as it's got all the new toys (SMB3, ReFS v2, the latest Storage Spaces).
I want to ditch the RAID, as it's old hat at this point.
I want to use Storage Spaces so I can have big logical volumes.
I want ReFS because I'm worried about bitrot (I already have ECC in the new box).

Now, none of those are hard, but when I ditch the RAID I want to integrate the backup solution with Storage Spaces.

Since I'll have the old slow server (MicroServer n34, non-ECC) free, can I get Storage Spaces to clone the data (with the ReFS integrity data) to it, power it off for, say, a month, then fire it back up for a sync? And if a scrub on the main server fails, will it be aware of the copy and pull the good data back from it?

Or when a scrub fails, does it just fail, so you need to replace the file from another source?

Also, has anyone capacity-expanded a 2016 Storage Spaces logical volume after adding drives? It seemed to fight me when I tried it in a VM, unless the logical volume was created as a giant volume in the first place.

TL;DR: can Storage Spaces sync across machines, and if a ReFS scrub fails while the other server is offline, queue a resync for when it comes back online?
__________________
Mi Goreng Noodle Club
CirCit is offline   Reply With Quote
Old 12th January 2017, 7:36 AM   #160
Biel_Tann
Member
 
Join Date: Nov 2004
Posts: 217
Default

Quote:
Originally Posted by NSanity View Post
So let's see.

There is literal fucking evidence in this thread that not using ECC can result in data corruption - and yet you're still arguing?

Go run XP already...
Calm, friend.

I use ECC on my FreeNAS box because I care about my data. My point was that you said the people who made it have stated that ECC is needed. But Ahrens, a co-creator of ZFS, said it isn't needed for ZFS any more than for any other filesystem, especially with the ZFS_DEBUG_MODIFY flag enabled.

I would suggest that anybody with a data repository on any filesystem use ECC RAM unless they genuinely don't care about the data, especially as the costs have come down, as elvis said.
Biel_Tann is offline   Reply With Quote
Old 12th January 2017, 7:57 AM   #161
elvis Thread Starter
Old school old fool
 
elvis's Avatar
 
Join Date: Jun 2001
Location: Brisbane
Posts: 27,485
Default

Quote:
Originally Posted by Biel_Tann View Post
But Ahrens, a co-creator of ZFS, said it isn't needed for ZFS any more than for any other filesystem, especially with the ZFS_DEBUG_MODIFY flag enabled.
No, he didn't say that at all. He said, and I quote directly from your link:

"ZFS can mitigate this risk to some degree".

That does not equal "you don't need ECC RAM". Risk mitigation does not mean removing risk altogether. ("Mitigate" means "to make something less severe": you can mitigate the loss of a leg with a prosthetic, but you still don't have your leg.) He then went on to say (and again I quote):

"if you love your data, use ECC RAM"

That's pretty black and white. His comparison to other file systems says only that their integrity checking is no better without ECC than ZFS's is without ECC, which is a bit of a "well, duh" statement. As has been repeated in this thread, ECC is mandatory if you care about strict data integrity on any next-gen file system. That's not a ZFS-specific statement; it's a fact about current computer architecture, independent of what's on your hard disk.
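To put the "window of vulnerability" in concrete terms, here's a toy Python sketch (purely illustrative, not ZFS code) of why checksumming alone can't catch a bit flip that lands in RAM before the checksum is computed: the filesystem faithfully checksums the already-corrupted buffer, so every later scrub passes.

```python
import hashlib

def bitflip(data: bytes, bit: int) -> bytes:
    """Simulate a single-bit memory error in a buffer."""
    b = bytearray(data)
    b[bit // 8] ^= 1 << (bit % 8)
    return bytes(b)

original = b"precious family photos"

# A bit flips in RAM *before* the filesystem checksums the buffer...
corrupted = bitflip(original, 3)

# ...so the checksum written to disk matches the corrupted data.
stored_checksum = hashlib.sha256(corrupted).hexdigest()

# A later scrub happily verifies the corrupted block.
scrub_ok = hashlib.sha256(corrupted).hexdigest() == stored_checksum
print(scrub_ok)  # True: the corruption is invisible to the filesystem
```

ECC catches the flip at the memory level, before the filesystem ever sees the bad buffer, which is why no amount of on-disk checksumming substitutes for it.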
elvis is offline   Reply With Quote
Old 12th January 2017, 8:10 AM   #162
Biel_Tann
Member
 
Join Date: Nov 2004
Posts: 217
Default

I'm just going to post the quote to remove any ambiguity, as I don't seem to be communicating it well...

Quote:
Originally Posted by Matt Ahrens
There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.

I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
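For what it's worth, the ZFS_DEBUG_MODIFY behaviour Ahrens describes (checksum the block while it sits in memory, verify just before write) can be sketched in a few lines of Python. This is a toy illustration of the mechanism, not the real implementation; it shows how the flag narrows, but doesn't close, the window:

```python
import hashlib

class CachedBlock:
    """Toy model of a block checksummed while at rest in memory."""

    def __init__(self, data: bytes):
        self.data = bytearray(data)
        # Checksum taken as soon as the block enters the cache.
        self.checksum = hashlib.sha256(data).digest()

    def write_to_disk(self) -> bytes:
        # Re-verify the in-memory copy just before it hits the disk.
        if hashlib.sha256(bytes(self.data)).digest() != self.checksum:
            raise IOError("in-memory corruption detected before write")
        return bytes(self.data)

block = CachedBlock(b"important data")
block.data[0] ^= 0x01  # a bit flips while the block sits in RAM

try:
    block.write_to_disk()
except IOError as exc:
    print(exc)  # the flip is caught before it reaches disk
```

A flip that lands before the initial checksum is taken is still undetectable, which is why the quote still ends with "if you love your data, use ECC RAM".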
Biel_Tann is offline   Reply With Quote
Old 12th January 2017, 8:29 AM   #163
Doc-of-FC
Member
 
Doc-of-FC's Avatar
 
Join Date: Aug 2001
Location: Canberra
Posts: 2,538
Default

Fuck RAM: people are still unknowingly passing data to non-BBU write caches on disks. With 128MB caches these days on big drives, that's insanity right there.

SLOG to your heart's content with SSDs that don't do power-loss protection.

It's why I've got an Intel DC-series SSD as my bcache caching device, with cache-disabled backing devices, all layered with btrfs on LUKS.

I invested in an E3 Xeon a few years back and built an all-in-one.
Doc-of-FC is offline   Reply With Quote
Old 12th January 2017, 8:30 AM   #164
NSanity
Member
 
NSanity's Avatar
 
Join Date: Mar 2002
Location: Canberra
Posts: 15,382
Default

Quote:
Originally Posted by CirCit View Post
I've skimmed through this thread looking for Storage Spaces and ReFS info.

Nothing seems to touch on what I'm looking for, so I'll ask.

I'm after a replacement for my 2008 R2 software RAID.
I want to use 2016, as it's got all the new toys (SMB3, ReFS v2, the latest Storage Spaces).
I want to ditch the RAID, as it's old hat at this point.
I want to use Storage Spaces so I can have big logical volumes.
I want ReFS because I'm worried about bitrot (I already have ECC in the new box).

Now, none of those are hard, but when I ditch the RAID I want to integrate the backup solution with Storage Spaces.

Since I'll have the old slow server (MicroServer n34, non-ECC) free, can I get Storage Spaces to clone the data (with the ReFS integrity data) to it, power it off for, say, a month, then fire it back up for a sync? And if a scrub on the main server fails, will it be aware of the copy and pull the good data back from it?

Or when a scrub fails, does it just fail, so you need to replace the file from another source?

Also, has anyone capacity-expanded a 2016 Storage Spaces logical volume after adding drives? It seemed to fight me when I tried it in a VM, unless the logical volume was created as a giant volume in the first place.

TL;DR: can Storage Spaces sync across machines, and if a ReFS scrub fails while the other server is offline, queue a resync for when it comes back online?
Yes.

Use LSI/Avago HBAs + Storage Spaces + ReFS (with integrity streams turned on) + Cluster Shared Volumes.

If you want host-based expansion, look into Scale-Out File Server.

Note: 2012 R2 SOFS is not great (read: don't). 2016 only.

Make sure your backup applications are ReFS- and CSV-aware.

No, you can't have a cluster that's weeks out of sync and heals itself. It would have to be online most of the time.

Also, replication is *not* a backup. It's an availability feature.

Quote:
Originally Posted by Doc-of-FC View Post
Fuck RAM: people are still unknowingly passing data to non-BBU write caches on disks. With 128MB caches these days on big drives, that's insanity right there.

SLOG to your heart's content with SSDs that don't do power-loss protection.

It's why I've got an Intel DC-series SSD as my bcache caching device, with cache-disabled backing devices, all layered with btrfs on LUKS.

I invested in an E3 Xeon a few years back and built an all-in-one.
110%. No PLP (power-loss protection) on write-caching devices = fuck my data.

Last edited by NSanity; 12th January 2017 at 8:33 AM.
NSanity is offline   Reply With Quote
Old 12th January 2017, 8:41 AM   #165
elvis Thread Starter
Old school old fool
 
elvis's Avatar
 
Join Date: Jun 2001
Location: Brisbane
Posts: 27,485
Default

Quote:
Originally Posted by Doc-of-FC View Post
Fuck RAM: people are still unknowingly passing data to non-BBU write caches on disks. With 128MB caches these days on big drives, that's insanity right there.
Agreed. Having said that, the whole point of COW is that you're never left in an inconsistent state, even on instant failure.

Of course, that means that on instant failure you potentially haven't committed many MB (maybe even GB, across many disks) worth of changes to disk. But at least you'll reboot into a consistent state, even if it's somewhat out of date and missing recent changes.
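The "consistent but possibly stale" behaviour falls out of how COW commits: new blocks are written off to the side, and only then is the root pointer switched in one atomic step. A toy Python sketch of that idea (an assumed simplification, not any real filesystem's on-disk layout):

```python
import copy

class CowStore:
    """Toy copy-on-write store: updates build a new tree, then flip the root."""

    def __init__(self, data: dict):
        self.root = data  # the single pointer that gets updated atomically

    def update(self, key, value, crash_before_commit: bool = False):
        new_tree = copy.deepcopy(self.root)  # write new blocks elsewhere
        new_tree[key] = value
        if crash_before_commit:
            return  # power loss: new blocks are orphaned, old root untouched
        self.root = new_tree  # atomic commit: one pointer swap

store = CowStore({"photo.jpg": "v1"})

# Power dies before the commit: the new write is lost...
store.update("photo.jpg", "v2", crash_before_commit=True)
print(store.root["photo.jpg"])  # v1: stale, but perfectly consistent

# ...whereas a completed update lands atomically.
store.update("photo.jpg", "v2")
print(store.root["photo.jpg"])  # v2
```

Because readers only ever follow the current root, they see either the complete old state or the complete new one, never a half-written mix.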
elvis is offline   Reply With Quote