
Storage Spaces with PCIe SSD auto-tiering in Server 2012 R2

Discussion in 'Business & Enterprise Computing' started by Quantum Flux, Sep 7, 2018.

  1. Quantum Flux

    Quantum Flux Member

    Joined:
    Aug 1, 2005
    Messages:
    925
    Location:
    Canberra
    I'm building up a server using storage spaces on Server 2012 R2.

    Tried adding an SSD tier today using an Orico PRS2 set to AHCI mode with two M.2 drives. It seemed to work, though I haven't done any performance testing yet.

    The server will mostly be processing large files between 500MB and 20GB.

    Has anyone tried using these Asus adapters with FOUR slots?
    https://www.asus.com/au/Motherboard-Accessory/HYPER-M-2-X16-CARD/
    Can anyone confirm whether they support AHCI mode, so that the raw disks are presented to the OS?

    Also, can anyone explain how Storage Spaces works on 2012 R2 vs S2D in 2016? I think the former works at file level and the latter at block level. Does that mean the former would only ever store a file on a single SSD no matter how many are in the server, and thus limit throughput to a maximum of one SSD?

    If the SSDs are all 120GB, does this also mean that no file can ever be held in the cache tier if it's larger than 120GB, or if the SSD is already partly full?
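
    For reference, this is roughly how I set the tiers up in PowerShell. The pool/tier names and tier sizes below are placeholders rather than my exact commands:

    Code:
    # Pool all the eligible disks (the HDDs plus the two M.2 drives)
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

    # If the M.2 drives come up as MediaType "Unspecified", tag them as SSD first:
    # Set-PhysicalDisk -FriendlyName "<disk name>" -MediaType SSD

    # One tier per media type
    $ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

    # Tiered virtual disk - the tier sizes are fixed at creation time
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredVD" `
        -StorageTiers @($ssd, $hdd) -StorageTierSizes @(200GB, 4TB) `
        -ResiliencySettingName Mirror -WriteCacheSize 1GB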
     
    Last edited: Sep 7, 2018
  2. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,912
    Location:
    Canberra
    Storage Spaces presents a LUN from one or more disk groups. It has never been a "file-based" platform.

    That said, it's possible to pin files to the performance tier rather than simply letting Storage Spaces work it out with the daily optimisation job.
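
    Something like this, from memory (tier name and file path are made up for the example):

    Code:
    # Pin a hot file to the SSD tier, then run the tier optimisation to actually move it
    Set-FileStorageTier -FilePath "D:\Jobs\bigfile.dat" -DesiredStorageTierFriendlyName "SSDTier"
    Optimize-Volume -DriveLetter D -TierOptimize

    # Check whether pinned files have landed on their desired tier
    Get-FileStorageTier -VolumeDriveLetter D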

    I would start with Server 2016 - and probably hold out for 2019 at this point, due October 1, thanks to the improvements with ReFS/Storage Spaces such as dedupe/compression on ReFS.
     
  3. OP
    Quantum Flux

    Quantum Flux Member

    Joined:
    Aug 1, 2005
    Messages:
    925
    Location:
    Canberra
    We do have SA on these, so we could use 2016 Std. However, I'm putting it on 4-socket servers, so I think we'd be up for buckets under the new per-core licensing model: every core has to be licensed, with a minimum of 8 core licences per socket, which means a 4-socket box starts at 32 core licences.
     
  4. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,912
    Location:
    Canberra
    What are you actually trying to achieve?

    Do you actually need DAS latency?
     
  5. OP
    Quantum Flux

    Quantum Flux Member

    Joined:
    Aug 1, 2005
    Messages:
    925
    Location:
    Canberra
    It’s an explicit requirement of the software that’ll be running
     
  6. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,912
    Location:
    Canberra
    What is? Tiered Storage Spaces? I very much doubt it.
     
  7. OP
    Quantum Flux

    Quantum Flux Member

    Joined:
    Aug 1, 2005
    Messages:
    925
    Location:
    Canberra
    DAS is a requirement, so says the documentation.

    Since I'm migrating servers anyway, I figured it was a good opportunity to upgrade the storage.
    Adding a flash tier with Storage Spaces seemed like a cheap and easy way to improve performance.
     
  8. OP
    Quantum Flux

    Quantum Flux Member

    Joined:
    Aug 1, 2005
    Messages:
    925
    Location:
    Canberra
    I see block-level de-dup is coming to ReFS with 2019. That's something I suspect would benefit this application significantly.
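
    From what I've read it's the existing Data Deduplication feature being allowed onto ReFS volumes, so presumably the usual cmdlets apply. I haven't tried it, so treat this as a guess (drive letter is a placeholder):

    Code:
    # Install the dedup feature, enable it on the volume, then run and monitor an optimisation job
    Install-WindowsFeature FS-Data-Deduplication
    Enable-DedupVolume -Volume "D:" -UsageType Default
    Start-DedupJob -Volume "D:" -Type Optimization
    Get-DedupStatus -Volume "D:"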

    Unfortunately this was an unplanned project. The boss pretty much said we need to scale up in preparation for triple workload beginning October 1. :shock:

    I happened to have some very beefy servers sitting in the rack doing nothing. Only problem is the storage isn't spectacular. Adding a flash tier seemed like a reasonable shortcut to get us going (for now).
     
  9. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,912
    Location:
    Canberra
    Based on what?

    Applications generally don't care where their storage is - there are some exceptions, but they're few and far between. Apps really come down to latency and throughput. Very few apps mandate NVMe-like numbers (and most don't even mandate SSDs).

    You say you have a massive workload to scale - that's the antithesis of DAS, unless the product has some kind of scheduler or job-management engine to distribute work across many nodes.

    What's the product, if you can say?
     
  10. OP
    Quantum Flux

    Quantum Flux Member

    Joined:
    Aug 1, 2005
    Messages:
    925
    Location:
    Canberra
    The product does have a scheduler engine that distributes across many nodes. I'd prefer not to say what the product is.

    But I take your point. I might do some testing against the main SAN and see if it's any better or worse than DAS.
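
    Probably with diskspd against both, something like the below. The block size, read/write mix and paths are guesses I'd tune to match the real workload:

    Code:
    # 60s run, 4 threads, 8 outstanding IOs, 1MB blocks, 30% writes, caching disabled, latency stats
    .\diskspd.exe -c20G -d60 -t4 -o8 -b1M -w30 -Sh -L D:\dastest.dat
    .\diskspd.exe -c20G -d60 -t4 -o8 -b1M -w30 -Sh -L S:\santest.dat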
     
  11. g00nster

    g00nster Member

    Joined:
    Sep 10, 2004
    Messages:
    352
    Location:
    Melbourne
    Have you seen the Intel Optane 905p cards?

    https://www.intel.com.au/content/ww...s/optane-905p-series/905p-960gb-aic-20nm.html
    https://www.scorptec.com.au/product/Hard-Drives-&-SSDs/SSD-2.5-&-PCI-Express/72941-SSDPED1D960GAX1

    They just scream "cache drive": crazy low latency and very high I/O at QD1. Endurance is pretty good too, at ~41 PBW.
     
  12. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,379
    Location:
    Canberra
    Business use of S2D, a filesystem without bitrot protection :sick:

    If you need IOPS, the 4TB P4510 is the sweet spot today for price, performance and capacity.
    Write latency - 18 µs
    Read latency - 77 µs

    Optane U.2 has 10 µs read and write latency, although slower sequential throughput than the P4510 due to its much smaller capacities (375/750 GB) and its price premium.

    A 24 x U.2 chassis can be had relatively cheaply these days - even Dell EMC does 24-bay EPYC servers now. These solutions are incredibly cheap compared to the RAM SANs of old, and even to E-Series NetApp gear.
     
  13. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,912
    Location:
    Canberra
    eh?

    ReFS has bitrot protection...

    Then again, you did say S2D - which doesn't really dictate a filesystem.
     
  14. OP
    Quantum Flux

    Quantum Flux Member

    Joined:
    Aug 1, 2005
    Messages:
    925
    Location:
    Canberra
    I will be using ReFS, but the data is very short-lived anyway. It's kept for about a week at most, maybe two.

    Thanks for the tip on Optane. I do need to run a small database on these servers too, so I might try it out.
     
  15. wwwww

    wwwww Member

    Joined:
    Aug 22, 2005
    Messages:
    5,908
    Location:
    Melbourne
    They present the raw disks to the OS, but in NVMe mode rather than AHCI. This works fine with Storage Spaces on 2016; I imagine it would on 2012 R2 too. They don't work in every server though - you need to make sure your server supports PCIe bifurcation.

    2012 R2 stripes blocks across the disks, so I don't see any reason why you'd be limited to the size of a single SSD.
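
    A quick sanity check once the card is in - the M.2 drives should show BusType NVMe and CanPool True:

    Code:
    Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, CanPool, Size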
     
  16. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,912
    Location:
    Canberra
    Don't use ReFS on anything less than Server 2016 patched to about June 2018. Particularly if you're working with large volumes.
     
