RAID Card recommendations

Discussion in 'Storage & Backup' started by Primüs, May 10, 2019.

  1. Primüs

    Primüs Member

    Joined:
    Apr 1, 2003
    Messages:
    3,368
    Location:
    CFS
    Hi Team,

    Looking for a RAID card to chuck into a new server - 2RU with low-profile slots. PCI-e of course.

    I have 6 hotswap bays, so I'm looking for a minimum of 6 ports, ideally 8 (as I may mount 2 SSDs internally). Ultimately I'll only be using SATA drives, but SAS/SATA is fine too.

    Looking to use single drives up to 10TB, and to be able to run multiple volumes on the one card - I want to do a RAID 1 and a RAID 5 (or 6) on the same card.

    It will be used with an oVirt (Enterprise Linux) backend - this is all for home stuff, so I'm after something reasonably priced.

    There are lots on eBay, I just don't know which brands are good, which models have caveats, etc., so any input is greatly appreciated.
     
  2. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    36,002
    Location:
    Brisbane
    Any reason why you don't use Linux mdraid?
     
  3. HobartTas

    HobartTas Member

    Joined:
    Jun 22, 2006
    Messages:
    762
  4. Matthew kane

    Matthew kane Member

    Joined:
    Jan 27, 2014
    Messages:
    2,079
    Location:
    Melbourne
    I have a couple of RAID cards if you're interested. SATA and SAS interface (you can use a mini-SAS to SATA cable).
     
  5. OP
    Primüs

    Primüs Member

    Joined:
    Apr 1, 2003
    Messages:
    3,368
    Location:
    CFS
    No major specific reason, I suppose - I didn't want the extra CPU overhead or the management of mdraid in particular, if that's even a concern. Being oVirt Node, I'm pretty sure they strip it back heaps, and I'd like to be able to redeploy in a fairly streamlined process in case of any failures. Are you suggesting that in a single-host oVirt setup, mdraid may be just as good an option? I trust your judgement since you work with big Linux setups (or have - not sure what you're doing these days).

    That is a great list - thank you heaps.

    Great - what have you got available that would fit my needs, and how much would you be looking for?
     
  6. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    36,002
    Location:
    Brisbane
    We're using exclusively mdraid for OS disks now (and even fast local storage like NVMe SSD RAID for frame cache), and ZFS for larger scale storage.

    Things like render nodes and oVirt nodes are all on single disks, because they don't hold any local configuration, and we have lots of them. If one blows up, the process is to replace the drive and redeploy the OS, which is 10 minutes at most.

    CPU overhead wise, it's bugger all. When you modprobe the RAID driver, it spits this sort of thing out:

    Code:
    $ dmesg  | grep -i raid 
    [    6.996010] raid6: sse2x1   gen()  9291 MB/s 
    [    7.044009] raid6: sse2x1   xor()  6709 MB/s 
    [    7.092004] raid6: sse2x2   gen() 11161 MB/s 
    [    7.140008] raid6: sse2x2   xor()  7406 MB/s 
    [    7.188009] raid6: sse2x4   gen() 13293 MB/s 
    [    7.236008] raid6: sse2x4   xor()  9024 MB/s 
    [    7.237249] raid6: using algorithm sse2x4 gen() 13293 MB/s 
    [    7.238471] raid6: .... xor() 9024 MB/s, rmw enabled 
    [    7.239686] raid6: using ssse3x2 recovery algorithm
    
    $ grep ^'model name' /proc/cpuinfo | head -1 
    model name      : Intel(R) Core(TM) i7-3632QM CPU @ 2.20GHz
    
    13 GB/s (single threaded, so it scales with cores) on my 8-year-old laptop. I've seen modern Xeons double that number with ease. The overhead is negligible these days.

    The only downside is with large-scale storage, where RAID controllers and backplanes do a nice job of blinking LEDs on failed drives, which makes finding the dead drive much easier. But for smaller setups, and certainly home labs, I haven't bothered with hardware RAID in forever.
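
    If you do go down the mdraid path, the whole setup is only a few commands. Something like this is the rough shape of it for your layout (the device names here are just examples - adjust for whatever your box enumerates):

    Code:
    # RAID 1 across the two internal SSDs for the OS / engine (example devices sda, sdb)
    $ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    
    # RAID 6 across the six hot-swap SATA bays for VM storage (example devices sdc..sdh)
    $ sudo mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[c-h]
    
    # Watch the initial sync and check array health
    $ cat /proc/mdstat
    $ sudo mdadm --detail /dev/md1
    
    # Record the arrays so they assemble automatically at boot
    $ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

    mdadm can also grow or reshape arrays later, so you're not locked into the initial layout the way you can be with some hardware controllers.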
     
  7. OP
    Primüs

    Primüs Member

    Joined:
    Apr 1, 2003
    Messages:
    3,368
    Location:
    CFS
    This is my biggest issue: using oVirt with a single host and internal disks makes it a bit of a pain to redeploy, which is why I was looking at a RAID 1 OS alongside the VM engine. I know it's not a normal use case for oVirt, but I need to learn it for upcoming projects and want the flexibility in case I do add extra nodes and Gluster some filesystems.

    I'll look into mdraid-ing it all. Might need to move away from the cut-down oVirt Node and start with a CentOS install.
     
  8. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    36,002
    Location:
    Brisbane
    This is what we used when we first tested and POC'ed oVirt. We moved to ovirt-node some time later.

    No real difference in the end result, TBH. The "ovirt-node" setup is more akin to what enterprise ESXi users are familiar with (compute, network and storage all separate). But if you're setting something up where your compute and storage are in the same box, an "oVirt-on-CentOS" setup is far more flexible.

    On my long list of things to R&D for my company is a "small enterprise in a box" setup, built from pieces like oVirt and Samba4 (for Windows 10 clients), one-touch deployed by PXE. The idea is that when someone wants to spawn a remote site - a 2-months-and-it's-finished type environment (common when we're doing a film or a reality TV series out in the desert/bush/islands somewhere) - I can get everything working with minimal cognitive effort for broadcast engineers (who do not make good IT people), running on a minimal hardware footprint, with zero server licensing or CALs. That's definitely going to end up with mdraid and oVirt on a fatter OS, rather than ovirt-node.
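
    For the PXE side, the mdraid part is only a few kickstart lines - roughly something like this (disk names and sizes are placeholders, not a tested profile):

    Code:
    # Mirror the OS across the first two disks (example devices sda/sdb)
    part raid.11 --size=1024 --ondisk=sda
    part raid.12 --size=1024 --ondisk=sdb
    part raid.21 --size=1 --grow --ondisk=sda
    part raid.22 --size=1 --grow --ondisk=sdb
    raid /boot --level=1 --device=md0 --fstype=xfs raid.11 raid.12
    raid /     --level=1 --device=md1 --fstype=xfs raid.21 raid.22

    Everything else (oVirt engine, Samba4, and so on) would sit on top of that in %post or config management, so the node itself stays disposable.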
     
  9. ryanfav

    ryanfav Member

    Joined:
    Mar 8, 2009
    Messages:
    23
    Location:
    Sydney
    If you're still after a RAID card, have a look at the RocketRAID 2320 - I'm currently in the process of swapping my 2TB drive arrays up to 10TB drive arrays. It does not support RAID 6, but rebuild times are only limited by the drive transfer rates. I get 400MB/s read and 330MB/s write on both of my 4x2TB RAID 5 arrays, and random writes and burst writes beat the pants off my SSDs due to caching.

    At the higher end of town, if you go down the path of SAS controllers, make sure you get the cables in the correct direction - the breakout cables are directional. Generally you are looking for a "forward" cable, where the RAID card plug is the "host". I picked up a cheap LSI 9280-16i4e with battery backup for my other NAS and got bitten by the cables being the "reverse" type. Same story with the Adaptec 71605E I ended up with before this - the "E" version is the HBA-style card, and the cables the seller recommended were "reverse" as well.
     
  10. Matthew kane

    Matthew kane Member

    Joined:
    Jan 27, 2014
    Messages:
    2,079
    Location:
    Melbourne
  11. A||uSiOn

    A||uSiOn Member

    Joined:
    Jul 2, 2001
    Messages:
    825
    Location:
    NSW - SYDNEY
