Tiered storage solution for creative agency

Discussion in 'Storage & Backup' started by Hmmmmmmm, Sep 12, 2018.

  1. gea

    gea Member

    Joined:
    May 22, 2011
    Messages:
    154
    In my last tests I was able to achieve over 600 MB/s read and 800 MB/s write over SMB on a multi-mirror RAID-10 pool (12 disks) quite consistently with the AJA video test (4K, RGB) on Windows 10 (MTU 9000, interrupt throttling off). With an older OS X (10.11, if I remember correctly), the values with Disk Speed Test at MTU 9000 were about the same.

    For a single user, results with SSDs (SanDisk Pro 960) were only slightly better. With three benchmarks running concurrently the degradation was lower with SSDs, staying quite stable above 500 MB/s. This was done on OmniOS (a free Solaris fork). The results with commercial Solaris and its native ZFS were noticeably better than with OpenZFS on OmniOS.

    SSD-only pools are certainly faster, especially with concurrent use. In the above scenario with hot, medium and archive tiers, I would probably start with the disk-based pool for the medium tier and check whether performance is sufficient. If not, I would add a second SSD- or NVMe-based pool for hot data and use replication to back it up to the other pool. For archive, add a JBOD case (up to 90 disk bays), connected over external SAS 12G, with an extra pool that starts with a few disks, e.g. a single RAID-Z2, and can grow with more Z2 vdevs up to the petabyte range. A rough sketch of such a layout is below.
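    To make the three-tier idea concrete, here is a minimal sketch of the pool layout and the replication step, assuming Solaris/OmniOS-style device names and hypothetical pool and dataset names (mediumpool, hotpool, archivepool, hotpool/projects); adjust everything to your own hardware:

        # medium tier: 12 disks as six mirror vdevs (the RAID-10-style pool above)
        zpool create mediumpool \
            mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 \
            mirror c1t6d0 c1t7d0 mirror c1t8d0 c1t9d0 mirror c1t10d0 c1t11d0

        # hot tier, added later only if the disk pool turns out to be too slow
        zpool create hotpool mirror nvme0 nvme1

        # archive tier in the JBOD: start with one 6-disk RAID-Z2 vdev,
        # grow later with "zpool add archivepool raidz2 <six more disks>"
        zpool create archivepool raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

        # back the hot data up to the disk pool with incremental snapshot replication
        zfs snapshot hotpool/projects@today
        zfs send -i hotpool/projects@yesterday hotpool/projects@today | \
            zfs receive -F mediumpool/backup/projects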

    With current macOS (Mac Pro, Promise SANLink 10G) I was not able to reach these values again. Even with SMB signing off (a usual tuning suggestion) I never got more than around 300-400 MB/s for a single user. I have not compared iSCSI on current macOS but would expect it to be faster. With only a few editing stations, a dedicated 10G Ethernet link per station would also be an option, using one or two additional dual-port 10G adapters.
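    For reference, this is roughly what the signing and MTU tuning look like on the Mac side; a minimal sketch only, and the interface name en0 is an assumption for your 10G port:

        # /etc/nsmb.conf - disable SMB signing for all shares
        [default]
        signing_required=no

        # jumbo frames on the 10G interface (must match switch and server MTU)
        sudo networksetup -setMTU en0 9000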
     
    Last edited: Sep 14, 2018
  2. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,035
    Location:
    Canberra
    ^ My testing was a single VM, 32K records, atime=off, NFS/iSCSI.
     
  3. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,094
    Location:
    Canberra
    Both NFS and iSCSI for a VM would be sync writes (most hypervisor defaults), whereas gea's SMB tests would have been async (the Samba/Windows default). With a big enough SLOG I can push 100 GB at 10 Gb/s to an underspecced disk array too ;)
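    A minimal sketch of how to check and level that playing field (pool and dataset names here are placeholders):

        # see whether the dataset honours sync and where the ZIL lands
        zfs get sync,logbias tank/vmstore

        # give sync writes a dedicated (mirrored) SLOG device
        zpool add tank log mirror slog0 slog1

        # force every write through the sync path so SMB and NFS/iSCSI
        # results are actually comparable
        zfs set sync=always tank/vmstore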
     
  4. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,035
    Location:
    Canberra
    Yeah, but I had a SAS 12G SLOG SSD that was doing ~40k unbuffered 4K writes.
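    For anyone wanting to reproduce that kind of number, a rough fio sketch of an unbuffered 4K sync-write test at queue depth 1 (the device path is a placeholder, and the test will overwrite whatever is on it):

        fio --name=slog-4k --filename=/dev/rdsk/c3t0d0 \
            --rw=randwrite --bs=4k --direct=1 --sync=1 \
            --ioengine=psync --iodepth=1 --numjobs=1 \
            --runtime=60 --time_based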
     
  5. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,094
    Location:
    Canberra
    Without even factoring in ZFS LBA allocation time, compression, checksums or network overhead, at 40,000 ops per second you'd need the entire end-to-end path, from client to encoded bits on storage, to commit each write in 25 µs. To allow for the other, unaccounted-for overheads you'd want at least Optane; welcome to the age-old problem of serialisation delay :)
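    A rough budget for that, assuming 4 KiB records over a 10 Gb/s link:

        1 s / 40,000 ops                        ≈ 25 µs per op
        4 KiB on the wire at 10 Gb/s            = 32,768 bit / 10^10 bit/s ≈ 3.3 µs
        left for NIC, TCP/IP, ZFS and the SLOG  ≈ 22 µs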

    Edit: That's if you're after a linear processing load.
     
  6. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,035
    Location:
    Canberra
    I think my point was that bursting pretty numbers on ZFS is easy, until you run out of RAM/cache. And that supplying enough RAM/cache for video-editing workloads is an exercise in futility.
     
