Overclockers Australia Forums

Overclockers Australia Forums > Specific Hardware Topics > Storage & Backup
Old 15th September 2017, 9:42 AM   #16
Doc-of-FC
Member
 
Join Date: Aug 2001
Location: Canberra
Posts: 2,735

https://support.microsoft.com/en-us/...ll-creators-up

Win 10 1709 kills ReFS volume creation; it's now a Pro for Workstations / Enterprise-only feature
Old 15th September 2017, 9:59 AM   #17
Myne_h
Member
 
Join Date: Feb 2002
Posts: 6,849

I did this... 7 years ago

http://forums.overclockers.com.au/sh...d.php?t=877062

Worked fine for about 5 years. Finally pulled it apart a couple of years ago when one drive got a bad sector.

Was fast enough for dumb storage.
Old 15th September 2017, 10:01 AM   #18
Myne_h
Member
 
Join Date: Feb 2002
Posts: 6,849

Quote:
Originally Posted by Doc-of-FC View Post
https://support.microsoft.com/en-us/...ll-creators-up

Win 10 1709 kills ReFS volume creation; it's now a Pro for Workstations / Enterprise-only feature
So... you can read and write but not format?
Seems like someone will have a reghack for it in minutes. Or just use a workstation install disc to create it.
Old 15th September 2017, 12:56 PM   #19
frenchfries
Member
 
Join Date: Apr 2013
Posts: 77

Quote:
Originally Posted by terrastrife View Post
For RAID10 I would just get a cheap Highpoint card. They double as HBAs when you're done with them too.
LSI cards tend to have horrible performance in RAID10 mode but 0/1 are fine.
But yeah, most motherboards do RAID10 just fine.
Proper MegaRAID cards do RAID 10 quite well; it's only the little cards that don't do it properly.
Old 15th September 2017, 2:03 PM   #20
cvidler
Member
 
Join Date: Jun 2001
Location: Canberra
Posts: 10,615

Quote:
Originally Posted by dmr View Post
I was under the assumption hardware raid1 was always faster than software raid1?
That's very outdated thinking. CPUs have moved forward at a pace far greater than RAID cards have. And if you're only doing 1 or 10, all you're doing is duplicating the data, with no XOR calcs (which is what slows 5 and 6). These days you're also more likely to have plenty more spare RAM for cache than a RAID card will have.

Quote:
whats the best raid software everyone is using now? I want to avoid using the M/B raid because if my mobo dies, I'll most likely need to source the same mobo to get my system back up again?
Depends on the platform you want to run. If you're not going to use the onboard RAID option, that leaves OS-based options.

Windows - no real choice here. (Server variants give you R5 as an option; Workstation variants only R1/10.)
Linux/BSD/Solaris etc. - lots of options (LVM, BTRFS, ZFS etc.) with varying features, complexity, and speed.
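If going the Linux route, a minimal sketch of a two-disk software mirror with mdadm (the device names and the Debian/Ubuntu config path are assumptions; substitute your own):

```shell
# Create a two-disk software RAID1 mirror (sdb1/sdc1 are placeholder partitions):
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Persist the array definition so it assembles at boot (Debian/Ubuntu path):
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

# Check array health and rebuild progress:
cat /proc/mdstat
```

Because the array metadata lives on the disks themselves, the mirror reassembles on any Linux box with mdadm, which is exactly the portability the onboard-RAID approach lacks.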
__________________
We might eviscerate your arguments, but we won't hurt you. Honest! - Lucifers Mentor
⠠⠵
Old 16th September 2017, 9:40 AM   #21
fad
Member
 
Join Date: Jun 2001
Location: City, Canberra, Australia
Posts: 2,063

Windows seems limited in features and performance unless you scale; I'm talking 2+ servers and 12-24 disks. I have yet to look at the W2016 features, but in W2012R2 the only RAID level with decent performance was mirroring, and getting performant arrays required SSDs in the same quantity as the drive stripe size. I know some of the constraints around SSDs and stripe sizes have been relaxed in W2016. The requirement for HA-attached SAS drives for dual heads has also been removed, which aligns it better with Software Defined Storage rather than traditional SAN storage.

I have been using Ubuntu 16.04.3 with ZFS and have found it to have really good performance and fewer constraints. I have moved a whole bunch of machines to this config.
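For anyone wanting to try the same config, a rough sketch of the ZFS side on Ubuntu (the disk names and the pool name "tank" are placeholders; by-id paths are safer than /dev/sdX in practice):

```shell
# Simple mirrored pool (ZFS's equivalent of RAID1):
sudo zpool create tank mirror /dev/sdb /dev/sdc

# Or a dual-parity RAIDZ2 pool (comparable to RAID6) across six disks:
#   sudo zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Add an SSD as an L2ARC read cache:
sudo zpool add tank cache /dev/nvme0n1

# Verify pool layout and health:
zpool status tank
```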

That being said I run Windows Storage Spaces, ZFS, and hardware raid at home. They all have their place.
Old 16th September 2017, 11:14 AM   #22
fredhoon
Member
 
Join Date: Jun 2003
Location: Brisbane
Posts: 2,067

Quote:
Originally Posted by fad View Post
That being said I run Windows Storage Spaces, ZFS, and hardware raid at home. They all have their place.
What are you using HW RAID for at present, and how does the write speed compare with a similar RAID-Z config in your experience?
__________________
Quote:
Originally Posted by NSanity View Post
Does your Agile Full Stack Token Ring Dev role include your research into ideas that were stupid 30 years ago and are still stupid today?
go soothingly on the grease mud as they're lurks a skid demon
Old 16th September 2017, 11:52 AM   #23
fad
Member
 
Join Date: Jun 2001
Location: City, Canberra, Australia
Posts: 2,063

It depends on the config of all of the parameters, and size.

HW RAID is very much limited by the unit you have: the card's bandwidth and speed, both internal to the SAS ports and out to PCIe. It has limited capability to detect errors, and cards are error-prone unless you are running supported configs and firmware. They are, however, easier to configure and use.

I have a Dell PowerEdge R720 with 16x 1TB SAS drives on an H710p PERC RAID card, and 24x 1TB SAS on a dual-port external card. I also have an Areca ARC-1203-8I 8-port card with a few WDC Reds.

Sequential speeds on the Dell are around 1200 MiB/s; small IO at QD=32 is not that good, around 30-50 MiB/s. Write speed for large blocks was around 800 MiB/s.

I have run ZFS on a Dell R720xd (24x 1TB SAS, 4x 256GB SSD) with a dual-port LSI 9211. The speed was very good; I think the numbers were around the same or higher for 64k block IO. Where ZFS was really good was small random IO: with memory and SSD caching, random IO performance was around 120k IOPS.
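For reference, numbers like these can be approximated with fio (the file path, size, and job parameters here are hypothetical, not the exact runs quoted above):

```shell
# Large sequential writes, 1 MiB blocks, direct IO to bypass the page cache:
fio --name=seqwrite --filename=/tank/testfile --size=4G \
    --rw=write --bs=1M --ioengine=libaio --direct=1

# Small random reads at queue depth 32, 4 KiB blocks:
fio --name=randread --filename=/tank/testfile --size=4G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1
```

Note that --direct=1 only bypasses the OS page cache; ZFS's ARC and L2ARC still serve reads, which is why cached random IO figures can look so high.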

Hardware RAID has a penalty for the XOR calc, which modern CPUs can now do much faster; most CPUs can manage 2+ GiB/s. I have not used SSD caching with any modern RAID card.