Advice for optimal RAID 0 settings for video

Discussion in 'Storage & Backup' started by drewfus, Sep 17, 2020.

  1. drewfus

    drewfus Member

    Joined:
    Aug 22, 2002
    Messages:
    332
    Location:
    Abbotsford VIC
    Hello again folks,

Setting up a 4 drive RAID 0 for video editing. I'm aware of the risks of RAID 0; all of the media loaded onto this RAID will be backed up in triplicate beforehand. The purpose of this unit is to be big and fast (both read and write) above all else.

    This unit will store almost entirely large files. All small files like projects and graphics will be in the cloud, and cache files on a separate SSD.

I'm using a Highpoint RocketStor 6124v enclosure with four HGST HC320 8TB drives (32TB total).

My understanding is that for large files it's best to go high on all values, so I have set the RAID to:

    - Block Size = 1024k
    - Sector Size = 4k
    - Allocation Unit Size = 2MB
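To picture what that 1024k block (stripe) size means for parallelism, here's a rough Python sketch of which drives a given request touches in a 4-drive RAID 0. This assumes a simple round-robin stripe layout; the controller's real mapping may differ.

```python
# Rough model of a RAID 0 stripe layout (assumes simple round-robin
# striping across the drives; the Highpoint's real mapping may differ).
def drives_touched(offset, length, stripe_size, n_drives):
    """Return the set of drive indices a request of `length` bytes
    starting at byte `offset` has to touch."""
    first_stripe = offset // stripe_size
    last_stripe = (offset + length - 1) // stripe_size
    return {s % n_drives for s in range(first_stripe, last_stripe + 1)}

STRIPE = 1024 * 1024  # the 1024k block size configured above

# A single aligned 1 MiB request only hits one drive:
print(drives_touched(0, 1024 * 1024, STRIPE, 4))      # {0}
# A 4 MiB request spans all four drives:
print(drives_touched(0, 4 * 1024 * 1024, STRIPE, 4))  # {0, 1, 2, 3}
```

So with a 1024k stripe, a queue-depth-1 read of 1 MiB keeps only one spindle busy at a time, which is one possible reason the Q=1 sequential numbers below sit so far under the Q=8 ones.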

Both HDTune and BlackMagic Speed Test seem to give slow, presumably inaccurate, values when run. CrystalDiskMark reports numbers more in line with what I'm expecting, so hopefully it's accurate.

    [Read]
    Sequential 1MiB (Q= 8, T= 1): 784.370 MB/s [ 748.0 IOPS] < 10671.94 us>
    Sequential 1MiB (Q= 1, T= 1): 491.741 MB/s [ 469.0 IOPS] < 2130.65 us>
    Random 4KiB (Q= 32, T=16): 6.806 MB/s [ 1661.6 IOPS] <236063.82 us>
    Random 4KiB (Q= 1, T= 1): 4.501 MB/s [ 1098.9 IOPS] < 907.97 us>

    [Write]
    Sequential 1MiB (Q= 8, T= 1): 1002.416 MB/s [ 956.0 IOPS] < 8332.63 us>
    Sequential 1MiB (Q= 1, T= 1): 264.777 MB/s [ 252.5 IOPS] < 3952.77 us>
    Random 4KiB (Q= 32, T=16): 11.996 MB/s [ 2928.7 IOPS] <171486.64 us>
    Random 4KiB (Q= 1, T= 1): 8.081 MB/s [ 1972.9 IOPS] < 505.79 us>

    Profile: Default
    Test: 1 GiB (x5) [Interval: 5 sec] <DefaultAffinity=DISABLED>
    Date: 2020/09/16 0:47:11
    OS: Windows 10 Professional [10.0 Build 19041] (x64)

I'm confused as to how I'm getting faster writes than reads out of this unit. It's advertised as topping out at 1000MB/s, which holds for writes, but reads seem slow by comparison at 784MB/s.

Another mystery: the Highpoint software lists Read Ahead and Write Cache as not supported with these hard drives, and no Write Cache Policy appears to be available for the RAID controller. But write caching is enabled in Windows for the RAID volume under Policies. Not sure if this makes a difference; I tried it switched on and off while benching and it didn't impact the numbers.

I'm also aware these are only synthetic numbers. My real-world use case is being able to very quickly transfer new footage to this unit as it comes in, and then having snappy response times while editing. Please let me know if you think I should modify these settings before I start to populate the RAID with data.
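For a rough cross-check outside the synthetic benchmarks, a quick-and-dirty Python timing of one big sequential write and read looks something like the sketch below. The path is hypothetical; you'd point it at a file on the RAID volume. Note the read number will be inflated by OS caching unless the file is much larger than RAM.

```python
import os
import time

def sequential_mb_s(path, size_mb=256, chunk_mb=8):
    """Write then read `size_mb` of data sequentially and return
    (write_MB_s, read_MB_s). Crude sanity check only: the OS page cache
    will inflate the read figure unless size_mb far exceeds RAM."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to the drives, not just the cache
    write_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_mb * 1024 * 1024):
            pass
    read_s = time.perf_counter() - t0
    return size_mb / write_s, size_mb / read_s

# Point this at the RAID volume, e.g.:
# print(sequential_mb_s(r"R:\bench.tmp", size_mb=4096))
```

A multi-GB run of this against the array would show whether the CrystalDiskMark sequential figures hold up for a plain file copy pattern.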
     
  2. HobartTas

    HobartTas Member

    Joined:
    Jun 22, 2006
    Messages:
    976
If I had to speculate based on nothing at all, I'd guess that reads are slower than writes because of the latency of reading the metadata first to find out where the data is, and then actually reading the data, whereas with writes the OS probably assembles the metadata and data together and writes it all out in one go, and is therefore faster.

Depending on how much space you need to work in, you might be better off assembling, say, four 1TB EVO SSDs in such a unit, and then you would probably have at least 1.6-2.0 GB/s to play with.

The other alternative is to use ZFS, which supposedly has good read-ahead capabilities for sequential reads on files.
     
    NSanity likes this.
  3. drewfus (OP)

    drewfus Member

    Joined:
    Aug 22, 2002
    Messages:
    332
    Location:
    Abbotsford VIC
Thanks for the reply. Capacity requirements rule out SSDs at this point. Also, I'm on a PC so I can't use ZFS. Any speculation on whether NTFS or exFAT would work better for me?

Also, any advice on block/sector/allocation unit sizes would be greatly appreciated.
     
  4. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    18,114
    Location:
    Canberra
Bin the RAID card and use Storage Spaces/software RAID with tweaks.

    Also backup.

Also, you know you can get 8TB 2.5in SATA3 SSDs, right...
     
  5. Butcher9_9

    Butcher9_9 Member

    Joined:
    Aug 5, 2006
    Messages:
    2,407
    Location:
    Perth , St James
    Well true, you can also get a 100TB 3.5" SSD but you might need to sell your left nut for the first and your house for the other.

    As for the Raid.

Just a guess:
Your block size is pretty large (256K is standard, I think); that might be affecting something.
Also, as with a lot of things, advertised speeds and real-world ones often differ. The way you can tell whether it's your controller bottlenecking an HDD array: run HDTune, and if the graph is a smooth curve from high to low, the HDDs are the bottleneck. If it has a flat area, it's controller-bottlenecked.

[Image: HDTune benchmark graph]
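The flat-vs-sloped distinction can be sketched numerically. An HDD's sequential speed falls roughly linearly from the outer to the inner tracks, and the controller clips the aggregate. The figures below are hypothetical, chosen only to show the two graph shapes:

```python
def array_speed(track_pos, n_drives=4, outer=200.0, inner=100.0,
                controller_cap=1000.0):
    """Aggregate RAID 0 throughput (MB/s) at a platter position
    (0.0 = outer edge, 1.0 = inner edge). A single HDD falls off roughly
    linearly from outer to inner tracks; the controller caps the total.
    All figures are hypothetical, just to illustrate the graph shapes."""
    per_drive = outer - (outer - inner) * track_pos
    return min(n_drives * per_drive, controller_cap)

# 4 x 200 MB/s = 800 MB/s at the outer edge, below a 1000 MB/s cap, so
# the graph is a smooth slope all the way (drives are the bottleneck):
print([array_speed(p / 10) for p in range(0, 11, 5)])
# -> [800.0, 600.0, 400.0]

# With a 600 MB/s cap, the start of the graph flattens at 600 (controller
# bottleneck) until the drive fall-off drops below it:
print([array_speed(p / 10, controller_cap=600.0) for p in range(0, 11, 5)])
# -> [600.0, 600.0, 400.0]
```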
     
  6. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    18,114
    Location:
    Canberra
And herein lies the question - do you need this to make money, or not? (I'd also point out that 8TB QVOs are ~1400, which sure is ~3-4x the cost of a RAID-suited spinning-rust disk, but it's not my left nut.)

Spending days trying to tweak something to make it viable vs. just getting the right thing to start with and getting on with life.

This sounds like a portable raw-content ingest solution. While the small-block IO performance benefit of SSDs is somewhat wasted on this application, basically even the cheapest SSDs these days will happily bounce off SATA3's effective read/write limits all day at >64KB blocks without relying on cache - which will surely be exhausted on any actual ingest run. Watch those mechanical HDD numbers plummet when the cache inevitably runs out on a multi-TB write.

    Bonus - its smaller.
    Bonus - it weighs less.
    Bonus - it chews far less power.
    Bonus - you can get some storage resilience reasonably cheaply (Raid5/6) without impacting throughput requirements.

    I'd be more concerned with how i'm getting my content in/out of the device (TB3 vs SAS12G vs 10+Gb Eth) and using it.

Mapping that out - I could easily hide 4x 8TB 2.5in SSDs in my Louqe Ghost S1, which is already packing a 3900x + 64GB RAM + 4TB NVMe + 2080 Super inside ~11L of case volume, and which will happily edit 99% of content painlessly, making the editor likely the bottleneck.
     
    Last edited: Oct 3, 2020
