Overclockers Australia Forums

Old 21st May 2016, 12:00 PM   #16
Smokin Whale
Member
Join Date: Nov 2006
Location: Pacific Ocean off SC
Posts: 5,132

Quote:
Originally Posted by NSanity View Post
TBH, I'm not seeing any real difference between ReFS and NTFS for Exchange mail/log stores.

Just let Storage Spaces tier the storage.
What do you mean by "tier the storage"? I'm using Windows 10 Pro with Storage Spaces parity pools across six 2TB HDDs in my file server at the moment, and the write performance just isn't great: it regularly drops to 50MB/s or so, and even lower if a lot of 4K I/O is involved. Read speed is good, though.
Old 21st May 2016, 12:10 PM   #17
chip
Member
Join Date: Dec 2001
Location: Perth
Posts: 3,315

From my own fiddling around with 10 spindles on 2012 R2 Storage Spaces, parity without loads of SSD caching just blows for writes compared to even RAID 10 (i.e. using Storage Spaces to set up mirrors, then diskmgmt.msc to stripe across them).

Last edited by chip; 21st May 2016 at 12:12 PM.
Old 21st May 2016, 12:12 PM   #18
NSanity
Member
Join Date: Mar 2002
Location: Canberra
Posts: 15,782

Quote:
Originally Posted by chip View Post
parity just blows for writes compared to even RAID 10
Ummm, that will always be the case... RAID 0/1/10 doesn't have to calculate parity.

You say loads of SSD caching; for 4-5 VMs (my test bench), I saw no real difference between 1GB and 5GB of write-back cache (WBC).

Quote:
Originally Posted by Smokin Whale View Post
What do you mean by "tier the storage"? I'm using Windows 10 Pro with storage spaces parity pools with 6 2TB HDDs in my file server at the moment and the performance just isn't great. Regularly drops down to 50MB/s or so, and even lower if a lot of 4k is involved. Read speed is good though.
My info is based on Server 2012 R2. When you create the space you can add different types of disk to it; when you mix SSDs and HDDs, you get the option to tier your storage.

Last edited by NSanity; 21st May 2016 at 12:16 PM.
Old 21st May 2016, 12:19 PM   #19
chip
Member
Join Date: Dec 2001
Location: Perth
Posts: 3,315

Quote:
Originally Posted by NSanity View Post
Ummm, that will always be the case... RAID 0/1/10 doesn't have to calculate parity.
Never claimed they did. The parity performance hit I encountered with Storage Spaces was greater than with hardware RAID controllers, that's all.
Old 21st May 2016, 12:19 PM   #20
Smokin Whale
Member
Join Date: Nov 2006
Location: Pacific Ocean off SC
Posts: 5,132

Quote:
Originally Posted by NSanity View Post
Ummm, that will always be the case... RAID 0/1/10 doesn't have to calculate parity.

You say loads of SSD caching; for 4-5 VMs (my test bench), I saw no real difference between 1GB and 5GB of write-back cache (WBC).

My info is based on Server 2012 R2. When you create the space you can add different types of disk to it; when you mix SSDs and HDDs, you get the option to tier your storage.
Yeah okay, fair enough. I know Windows 10 can't do SSD caching on Storage Spaces. I'd rather go unRAID/ZFS than Server 2012 R2; it's too damn pricey for me.
Old 21st May 2016, 12:42 PM   #21
Diode
Member
Join Date: Jun 2011
Location: Melbourne
Posts: 1,546

Fantastic... out of 29,000 photos, 257 came back with mismatched hashes. Fun times ahead, manually opening each photo and checking which ones aren't damaged.
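For anyone wanting to script that kind of check, here's a minimal sketch in Python: hash every photo and compare against a previously saved manifest. The manifest name and format (a JSON map of relative path to SHA-256) are assumptions for illustration, not Diode's actual tooling.

Code:
import hashlib, json, os, sys

def sha256(path, bufsize=1 << 20):
    # Stream the file so large photos don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

# Assumed manifest format: {"relative/path.jpg": "sha256 hex digest", ...}
with open("photo_hashes.json") as f:
    manifest = json.load(f)

root = sys.argv[1]  # top of the photo tree
for rel, expected in manifest.items():
    if sha256(os.path.join(root, rel)) != expected:
        print("MISMATCH:", rel)  # candidate for the manual open-and-check pass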
Old 21st May 2016, 12:42 PM   #22
elvis Thread Starter
Old school old fool
Join Date: Jun 2001
Location: Brisbane
Posts: 28,698

Quote:
Originally Posted by NSanity View Post
ReFS is thin - however given that SQL 2016 and Exchange 2016 are now recommending their "best practice" implementation to be placed on the filesystem - expect more to come from it soon.
Well that is good news.

Cynically though, I've been burned too many times by Microsoft telling me something is best practice this year, only to tell me that it's deprecated the year after (and then best practice the next year again).

But hopefully ReFS sticks. NTFS is rapidly becoming outdated, and it needs to go.

ReFS is missing a lot, from what I read: no dedup, no block-level compression (NTFS has file-level compression, but it hurts performance quite a bit), and a bunch of NTFS features like alternate data streams aren't there yet.

Still, the fact that larger applications are recommending it is a good sign for the future. I also hope Microsoft realise there's a requirement for it on desktops and workstations too, and knock off this silly idea of making sensible file systems a "server only" feature.

Quote:
Originally Posted by Diode View Post
Fantastic... out of 29,000 photos, 257 came back with mismatched hashes. Fun times ahead, manually opening each photo and checking which ones aren't damaged.
Yup, precisely why we need these new file systems.

As our storage gets bigger, we're holding more precious information per device. Physical and logical errors are going to affect us in these sorts of ways without sensible file systems to aid us.

I'm even finding BtrFS on USB drives useful, as I can tell it to transparently write two copies of everything. I set both the data and metadata profiles to "dup" and then write to a single USB drive. It halves the effective space, but when I'm buying 1TB and 2TB 2.5" USB spindle drives for backing up home systems on the cheap, I've still got heaps of space. (A sketch of the setup is below.)
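For reference, a minimal sketch of that setup; the device node /dev/sdX and the label are placeholders, so check lsblk before running anything like this as root.

Code:
import subprocess

dev = "/dev/sdX"  # placeholder: substitute the actual USB drive (check lsblk!)

# Create a Btrfs filesystem that keeps two copies of both data and metadata
# on the single drive -- the "dup" profile described above.
subprocess.run(["mkfs.btrfs", "-L", "usb-backup", "-d", "dup", "-m", "dup", dev],
               check=True)

An existing filesystem can be converted in place with btrfs balance start -dconvert=dup -mconvert=dup on the mount point.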

Last edited by elvis; 21st May 2016 at 12:46 PM.
Old 21st May 2016, 3:52 PM   #23
digizone
Member
Join Date: Jun 2003
Location: Voyger1 is chasing me
Posts: 339

BtrFS is running flawlessly on unRAID. Amazing stuff: double parity, and any HDD.

Last edited by digizone; 21st May 2016 at 3:58 PM.
Old 21st May 2016, 4:39 PM   #24
elvis Thread Starter
Old school old fool
Join Date: Jun 2001
Location: Brisbane
Posts: 28,698

Quote:
Originally Posted by digizone View Post
BtrFS is running flawlessly on unRAID. Amazing stuff: double parity, and any HDD.
Unfortunately, unRAID suffers all the problems of traditional RAID, including the write hole as well as other silent corruption.

Running BtrFS on top of it is better than running other file systems there; however, there's a fundamental flaw in not giving BtrFS access right down to the bare device, since unRAID presents a virtual device to BtrFS.

As with other parity-based RAID systems, unRAID users are advised to run a UPS on their storage to prevent silent errors, and even then this won't protect against every possible issue (a hard system crash or a motherboard fault can cause silent data corruption even with a UPS).

It's important that people understand what ZFS and BtrFS aim to achieve, and why they need "bare metal" access to hard drives, without another RAID system in between the file system and the hard disks.

I'm not picking on unRAID here either. Linux MDRAID, LVM, hardware RAID devices and other RAID systems all suffer the same problem, and layering BtrFS on top of those won't solve that.

If you want an easy-to-use NAS with BtrFS, look at RockStor instead (the Community Edition download is open source and dollar-free).

Last edited by elvis; 21st May 2016 at 4:42 PM.
Old 21st May 2016, 5:00 PM   #25
fad
Member
Join Date: Jun 2001
Location: City, Canberra, Australia
Posts: 2,046

Dedupe is in the Storage Spaces layer, not the file system.

The only thing is, it isn't validated for production loads for anything except VDI.

It's also after-the-fact dedupe, unlike ZFS; I think they were trying to avoid it needing a lot of memory. What's the ZFS formula? 5GB of RAM per TB of data?

https://blogs.technet.microsoft.com/...deduplication/
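As a back-of-envelope check on that figure, assuming the commonly cited numbers of roughly 320 bytes per dedup table (DDT) entry and a 64KiB average block size (both assumptions, not figures from this thread):

Code:
DDT_ENTRY_BYTES = 320      # commonly cited size of a ZFS DDT entry
AVG_BLOCK = 64 * 1024      # assumed 64KiB average record size

data = 1 * 1024**4         # 1TiB of unique data
blocks = data // AVG_BLOCK # 16,777,216 blocks
ddt = blocks * DDT_ENTRY_BYTES

print(f"DDT for 1TiB of data: {ddt / 1024**3:.1f} GiB")  # -> 5.0 GiB

So the 5GB-per-TB rule of thumb falls straight out of those two assumptions.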

The biggest issue with either ZFS or ReFS is getting hardware support. Most of the main vendors ship RAID cards built around the old storage methods, and they'll need to be dragged over to the next-gen filesystems kicking and screaming. Look at the validated servers for VMware vSAN or MS SOFS: both have kit lists from the major vendors at very high cost, when all they really need to change over is the HBA.

A Dell MD1200 direct-attached disk unit isn't validated for connection to anything other than a Dell H810, an external LSI-based IR-mode card.

I'm a big fan of both (ReFS/ZFS); I run both at home for storage.
__________________
WTB: Dell R720 Server for parts
WTB: Intel LGA 2011 Xeon 26xx CPUs
Old 21st May 2016, 5:03 PM   #26
fad
Member
Join Date: Jun 2001
Location: City, Canberra, Australia
Posts: 2,046

Quote:
Originally Posted by elvis View Post
Cynically though, I've been burned too many times by Microsoft telling me something is best practice this year, only to tell me that it's deprecated the year after (and then best practice the next year again).
Yeah, this year with 2012 R2 it was SOFS, with shared direct-attached dual-port SAS hardware arrays, which made shared SSDs expensive.

As soon as 2016 hits, it will be non-shared hardware with local SATA storage only, and a shared network layer doing the copies.

Quote:
Originally Posted by NSanity View Post
You say loads of SSD caching; for 4-5 VMs (my test bench), I saw no real difference between 1GB and 5GB of write-back cache (WBC).
I found the difference to be quite big. However, I'm allocating 4x128GB SSDs entirely to WBC. My understanding is that all you're doing is putting off the writes until there's time to service the required IO; if there's never any quiet time, you won't see much difference. I found copying array to array to be really painful, but now the data is in place, the VDI performance is good (8x4TB 7.2K + 4x128GB SSD, 200-300MB/s at 75-100k IOPS, 4K random, 25% write / 75% read; it's the standard IO profile from IOmeter).
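For what it's worth, those figures hang together: at a 4KiB transfer size, throughput is just IOPS multiplied by transfer size.

Code:
BLOCK = 4 * 1024  # 4KiB random IO, per the profile above
for iops in (75_000, 100_000):
    print(f"{iops:,} IOPS x 4KiB = {iops * BLOCK / 1e6:.0f} MB/s")
# 75,000 IOPS x 4KiB = 307 MB/s
# 100,000 IOPS x 4KiB = 410 MB/s
# i.e. the same ballpark as the quoted 200-300MB/s, given the mixed read/write split.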
__________________
WTB: Dell R720 Server for parts
WTB: Intel LGA 2011 Xeon 26xx CPUs

Last edited by fad; 21st May 2016 at 5:12 PM.
Old 21st May 2016, 5:19 PM   #27
elvis Thread Starter
Old school old fool
Join Date: Jun 2001
Location: Brisbane
Posts: 28,698

Quote:
Originally Posted by fad View Post
It's also after-the-fact dedupe, unlike ZFS; I think they were trying to avoid it needing a lot of memory. What's the ZFS formula? 5GB of RAM per TB of data?
Yes, realtime/online deduplication needs to keep a list of block checksums in memory so that the system can dedupe in realtime. Obviously this gets VERY big over time, and I'm not a fan of it at all.

I'd much rather have a background service that constantly crawls the file system, looking for duplicate blocks in a random pattern up to a fixed memory budget, and replacing them with reflinks. It won't be nearly as efficient, but it would use far less memory than either realtime dedup or a one-shot crawl of the whole file system. Something like the sketch below.
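As a rough illustration of the idea, here's a file-level (not block-level) sketch using Linux's FICLONE reflink ioctl. Real tools such as duperemove work block by block and use the safer FIDEDUPERANGE ioctl; this version ignores hash collisions and files changing mid-scan, so treat it as a sketch only.

Code:
import fcntl, hashlib, os, sys

FICLONE = 0x40049409  # Linux ioctl: make dst share src's extents (reflink)

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

seen = {}  # content hash -> first path seen with that content
for root, _dirs, files in os.walk(sys.argv[1]):
    for name in files:
        path = os.path.join(root, name)
        digest = sha256(path)
        if digest not in seen:
            seen[digest] = path
            continue
        # Duplicate: rewrite it as a reflink of the first copy, so both
        # names share the same extents until one of them is modified.
        with open(seen[digest], "rb") as src, open(path, "r+b") as dst:
            fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())
        print(f"reflinked {path} -> {seen[digest]}")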

Quote:
Originally Posted by fad View Post
The biggest issue with either ZFS or ReFS is getting hardware support. Most the main vendors have RAID cards with old storage methods. They will need to be brought over to the NextGen FS kicking and screaming.
I'm not so sure about this. I'm already seeing a lot of vendors very happy to change over to IT-mode storage. We buy a lot of Supermicro gear, and pretty much every NAS unit Supermicro ships can now be purchased in either RAID mode or IT mode. They've figured out they can ship a lot of systems and spindles at a decent price (it's all ECC-spec gear with decent drives), make a good profit, and still come in well under the price of some of the big-name storage vendors.

For home/SOHO users, there's also some interesting possibilities with hardware like this:

http://addonics.com/

Using simple eSATA-III connectivity and port-multiplier compatible cards, you can attach a pretty decent volume of disks to an existing system without a heck of a lot of complex hardware or cost. The end result is some great scalable storage hardware for home users who don't need 10Gbit/s+ speeds.

I've currently just got a crappy old tower system full of drives as my storage system, but I really like the look of some of Addonics' 6G "RAID towers":

http://addonics.com/category/6grt.php

Plug them straight into a compatible eSATA port and you see all the drives directly over the one cable, ready for BtrFS and ZFS to use.
Old 21st May 2016, 5:28 PM   #28
NSanity
Member
Join Date: Mar 2002
Location: Canberra
Posts: 15,782

Yeah, look I buy primarily from the same people as Elvis - but I have no issues getting hardware from Dell either.
Old 21st May 2016, 7:00 PM   #29
Smokin Whale
Member
Join Date: Nov 2006
Location: Pacific Ocean off SC
Posts: 5,132

Quote:
Originally Posted by elvis View Post
For home/SOHO users, there's also some interesting possibilities with hardware like this:

http://addonics.com/

Using simple eSATA-III connectivity and port-multiplier compatible cards, you can attach a pretty decent volume of disks to an existing system without a heck of a lot of complex hardware or cost. The end result is some great scalable storage hardware for home users who don't need 10Gbit/s+ speeds.

I've currently just got a crappy old tower system full of drives as my storage system, but I really like the look of some of Addonics' 6G "RAID towers":

http://addonics.com/category/6grt.php

Plug them straight into a compatible eSATA port and you see all the drives directly over the one cable, ready for BtrFS and ZFS to use.
It's a shame that native eSATA 3 is going the way of the dodo; very few systems have it nowadays. I'm the same though: I just have an old tower at home, and it does the trick.

One thing I do find interesting, though, is that you mentioned you use BtrFS as the boot drive on your laptop. Would I be right in saying this filesystem is a worthy replacement for LVM or ext4? It sounds like the way to go for SSD boot drives.
Old 21st May 2016, 9:44 PM   #30
elvis Thread Starter
Old school old fool
Join Date: Jun 2001
Location: Brisbane
Posts: 28,698

Quote:
Originally Posted by Smokin Whale View Post
It's a shame that native eSATA 3 is going the way of the dodo; very few systems have it nowadays. I'm the same though: I just have an old tower at home, and it does the trick.
That Addonics gear is merely a SATA-to-eSATA connector; nothing fancy there. They also supply eSATA 3Gbit, 6Gbit and mini-SAS SFF-8088 connectors to convert between these standards for different bandwidth requirements.

Similarly, they sell a number of cards with on-board controllers that support standard SATA port multiplication, meaning several SATA connections can be shared down one cable (up to the maximum aggregate bandwidth of a single connection: 3Gbit/s or 6Gbit/s for eSATA, or 12/24Gbit/s for the mini-SAS).

All of that is pretty standard and will be around for a while yet; PCI-E and M.2 are certainly picking up speed, but SATA and SAS aren't going anywhere soon. Addonics are pretty clever at doing smart things with low-end gear, which gives you a lot of options if your software is smart enough. These guys were very popular with the Linux+MDADM+LVM crowd before ZFS and BtrFS came along, and now, with these next-gen file systems, they're a great option for hardware on the cheap.

Quote:
Originally Posted by Smokin Whale View Post
One thing I do find interesting, though, is that you mentioned you use BtrFS as the boot drive on your laptop. Would I be right in saying this filesystem is a worthy replacement for LVM or ext4? It sounds like the way to go for SSD boot drives.
One of BtrFS's huge downfalls right now is that it doesn't support swap. That's coming (patches exist but haven't made it into the mainline kernel yet), so for now you still have to partition your first disk, with a dedicated swap partition and BtrFS managing volumes inside the other partition.

You still get the benefits of BtrFS inside the partition boundary (unlike if it were inside some sort of device-mapper setup that could move extents around underneath BtrFS without its knowledge), but it's not as good as BtrFS on a raw disk.

Once BtrFS can properly deal with the equivalent of ZFS's "zvols", things will be much better on that front. Still, it's not terrible, and you get the important stuff: better SSD tuning, realtime block-level compression, block checksumming and scrubbing, snapshots, etc.

Alternatively, you can go without swap (not recommended), put swap on a different device, or make a loopback file marked NOCOW, attach it as a device, and put swap on that. It's a bit ugly and slow, but it works if you REALLY want an all-BtrFS disk; a sketch is below.
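For completeness, the loopback variant looks something like this. A minimal sketch, run as root; the 4GiB size and the /swapfile path are placeholders.

Code:
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

swapfile = "/swapfile"
run("truncate", "-s", "0", swapfile)    # create it empty: +C only sticks on empty files
run("chattr", "+C", swapfile)           # NOCOW: disable copy-on-write for this file
run("fallocate", "-l", "4G", swapfile)  # reserve the space up front
run("chmod", "600", swapfile)
loopdev = subprocess.run(["losetup", "--find", "--show", swapfile],
                         check=True, capture_output=True, text=True).stdout.strip()
run("mkswap", loopdev)                  # format the loop device as swap
run("swapon", loopdev)                  # and enable it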