PCIe Bandwidth Question

Discussion in 'Overclocking & Hardware' started by KaiShin, Jun 18, 2019.

  1. KaiShin

    KaiShin Member

    Joined:
    May 30, 2005
    Messages:
    60
    I am preparing to install a recently purchased HBA (LSI SAS 9202-16e). It is designed for a 16x PCIe 2.0 slot, which has 8GB/s of bandwidth.

    Since an 8x PCIe 3.0 slot also has roughly 8GB/s (7.88GB/s) of bandwidth, would the card still be able to reach similar performance?

    I am looking at running 16 3.5" 7200rpm HDDs off this card in a Storage Spaces parity array (32 in total across 2 cards). I know array performance isn't fantastic in this setup, however it is about data protection rather than speed (Plex media only).

    I do have 2 16x slots available, however using them would mean moving my GPU to an 8x slot. The GPU is in a custom hard-line loop, so as you can imagine I would prefer to avoid this if I can.

    Apologies if this is a stupid question; I wasn't quite sure whether bandwidth is the only thing I should be looking at or whether there are other factors to take into consideration here.
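
    My back-of-the-envelope maths, assuming roughly 500 MB/s per lane for PCIe 2.0 and ~985 MB/s per lane for PCIe 3.0 (figures are approximate, per direction):

    # rough per-slot bandwidth check, per-lane GB/s figures are approximate
    per_lane = {"2.0": 0.5, "3.0": 0.985}

    def slot_bandwidth(gen, lanes):
        return per_lane[gen] * lanes

    print(slot_bandwidth("2.0", 16))   # ~8.0 GB/s - 16x PCIe 2.0, what the card is designed for
    print(slot_bandwidth("3.0", 8))    # ~7.9 GB/s - 8x PCIe 3.0, the slot I'd like to use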
     
    Last edited: Jun 18, 2019
  2. im late

    im late Member

    Joined:
    Jan 5, 2012
    Messages:
    1,657
    Location:
    Canning Vale WA 6155
  3. OP
    KaiShin

    KaiShin Member

    Joined:
    May 30, 2005
    Messages:
    60
    Thanks for the reply.
    I am a little confused about the fitment side of this. Are you talking about physical fitment?

    On my mobo (X399 Gigabyte Aorus 7) all the PCIe slots are the same physical size, with two 16x PCIe 3.0, two 8x PCIe 3.0 and one 4x PCIe 2.0.

    Are the pins inside the slots themselves different?
     
  4. Myne_h

    Myne_h Member

    Joined:
    Feb 27, 2002
    Messages:
    10,244
    Physically an 8x slot may not fit a 16x card. It's shorter and not all of them have a cutout at the end.

    Electrically, a 16x card will work in an 8x slot.
    If it's only capable of running at PCIE2 though, it will lose performance. Whether that's measurable is another question.

    If it'll run at PCIE3, then it should be about the same.
     
  5. OP
    KaiShin

    KaiShin Member

    Joined:
    May 30, 2005
    Messages:
    60
    Thanks for your reply, the part I put in bold is the part I missed in my head...
    Just because my board can run the slot at PCIe 3.0 doesn't mean the card will... so in an 8x slot it will run at 2.0 speed, not 3.0 speed...

    So in short, I do have to run them in the 16x slots or get a hit in performance in the 8x slots.

    Thanks for your help!
     
  6. Myne_h

    Myne_h Member

    Joined:
    Feb 27, 2002
    Messages:
    10,244
    The link speed is separate from the number of lanes.

    A 16x slot has 16 lanes. Think of it like 16 ethernet cables to the card all working as a team.
    An 8x slot has 8 lanes.

    To stay with the ethernet analogy:
    PCIE2 runs at 500mbit, PCIE3 runs at 1gbit. (I forget the real numbers)

    So an 8x slot at PCIE3 performs the same as a 16x slot at PCIE2.

    Make sense?
     
  7. OP
    KaiShin

    KaiShin Member

    Joined:
    May 30, 2005
    Messages:
    60
    It does absolutely.
    However what I was missing in my thinking is that a PCIe 2.0 card can't take advantage of the PCIe 3.0 bandwidth.

    So in theory the bandwidth is the same but in practice the 2.0 card can't utilise all of the 3.0 8x PCIe bandwidth.

    It's the same as how a 100mbit Ethernet client can't run at 1gbit even if you plug it into a gigabit switch.
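
    Working it through with rough numbers (~500 MB/s per lane for 2.0, ~985 MB/s for 3.0), the link just drops to whichever end is slower and narrower:

    # the link negotiates the lower generation and the narrower width of the two ends
    per_lane = {2: 0.5, 3: 0.985}   # GB/s per lane, approximate

    def link_bandwidth(card_gen, card_lanes, slot_gen, slot_lanes):
        gen = min(card_gen, slot_gen)
        lanes = min(card_lanes, slot_lanes)
        return per_lane[gen] * lanes

    print(link_bandwidth(2, 16, 2, 16))   # x16 2.0 card in a 16x slot -> ~8 GB/s
    print(link_bandwidth(2, 16, 3, 8))    # same card in an 8x 3.0 slot -> ~4 GB/s (runs as x8 2.0)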
     
  8. Myne_h

    Myne_h Member

    Joined:
    Feb 27, 2002
    Messages:
    10,244
    Exactly!

    The real question is whether it will be noticeable.
    I'm going with probably not.
     
  9. OP
    KaiShin

    KaiShin Member

    Joined:
    May 30, 2005
    Messages:
    60
    Thanks very much for your help.

    Let me have a play around and I can report back whether I notice a difference. Given the array runs fairly slowly anyway I am certainly not expecting a notable difference.
     
  10. Quadbox

    Quadbox Member

    Joined:
    Jun 27, 2001
    Messages:
    6,338
    Location:
    Brisbane
    It won't improve performance to stick it in a 16x slot either... It will use 8 lanes either way, at PCIe 2.0 speeds. As long as it's seeing 8 lanes (i.e. not in a 16x PCIe slot limited to 4 lanes for CPU/chipset reasons) it's getting the best performance it's capable of getting. Plugging it into a 16x slot adds no performance. If this is spinning iron you're talking about, it probably wouldn't matter if you were plugging it into a 4x slot.
     
  11. ae00711

    ae00711 Member

    Joined:
    Apr 9, 2013
    Messages:
    1,576
    Why two cards?
    Why not one card + an expander?
    You're not going to notice the performance difference re: lanes/slots etc, nor would you if you used one card + an expander.
     
  12. OP
    KaiShin

    KaiShin Member

    Joined:
    May 30, 2005
    Messages:
    60
    If the card is designed as a 16x card wouldn't it use all 16 lanes at PCIe 2.0 speed?
    Either way, you are likely right; with spinners it's unlikely I will notice a difference.
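
    As a rough sanity check (assuming maybe 150-200 MB/s sequential per 7200rpm drive, which is probably generous):

    # worst case: all 16 drives on one card streaming sequentially at once
    drives = 16
    per_drive_gb_s = 0.2                  # ~200 MB/s per 7200rpm HDD, optimistic
    aggregate = drives * per_drive_gb_s   # ~3.2 GB/s
    x8_gen2_link = 8 * 0.5                # ~4 GB/s
    print(aggregate, x8_gen2_link)        # even flat out, the drives sit under an x8 2.0 link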

    This is a good question, and there is a three-part answer:
    - I got both HBAs rather cheaply from the forums
    - I will likely need to expand my storage again in the future, so I was planning to expand off both of these cards if needed
    - I am hoping that by the time these HDDs give up the ghost, large-capacity SSDs will be affordable, and using an expander would likely be too slow for them
     
  13. terrastrife

    terrastrife Member

    Joined:
    Jun 2, 2006
    Messages:
    18,722
    Location:
    ADL/SA The Monopoly State
    Consider dropping Windows entirely and using UNRAID for your Plex server. Why would you want to spin up 32 drives just to play one movie?
     
  14. OP
    KaiShin

    KaiShin Member

    Joined:
    May 30, 2005
    Messages:
    60
    I came from an Unraid install (approx 18 months on Unraid) and recently moved back to Windows.

    I just constantly had problems with Unraid, either with the Dockers or the VMs. Even when they were running correctly, Unraid would keep incorrectly dropping drives from the array, forcing me to rebuild parity 2 - 3 times a week. There was just no end of issues, and no one was able to diagnose what was wrong, so in the end I gave up.

    Since moving back to Windows with exactly the same hardware I've had no issues whatsoever. I'm also running another Windows VM with VirtualBox with no issues.

    It just seemed Unraid did not like my setup.
     
    Last edited: Jun 29, 2019
  15. terrastrife

    terrastrife Member

    Joined:
    Jun 2, 2006
    Messages:
    18,722
    Location:
    ADL/SA The Monopoly State
    big offs, maybe it doesn't like the PLX chip used to split the PCIe bus.
     
  16. OP
    KaiShin

    KaiShin Member

    Joined:
    May 30, 2005
    Messages:
    60
    It's possible...
    A mate of mine with the same mobo & CPU was having similar issues...
     
  17. Kelvin

    Kelvin Member

    Joined:
    Jul 1, 2001
    Messages:
    2,593
    Damn those weird AMD issues...

    Many years ago I had all sorts of weird issues with my media centre PC, which was AM2+ based.

    Replaced the board and CPU with an Intel G31... and everything worked absolutely perfectly.

    This wasn't the first or last time switching from AMD to Intel solved my problem.
     
  18. HobartTas

    HobartTas Member

    Joined:
    Jun 22, 2006
    Messages:
    904
    If they are all x16 physical slots, then the x16 electrical slots will be fully wired up, the x8 slots will be wired up 50% and the x4 slots only 25%. The advantage of having a larger physical slot like the x16, provided there is room on the motherboard for it, is that it can take all card sizes: x1, x4, x8 and x16.

    An x16 card like yours is designed to work in x16 electrical slots, but if you check your documentation it may say the minimum is x8 electrical to function, and it would surprise me (though it's not impossible) for it to work in an x4 electrical slot. You need a physical x16 slot for it to fit, and then it should presumably work in either an x8 or x16 electrical slot.

    From memory this is different to the old PCI-32 slots: a PCI-64 card would work in a regular PCI-32 slot because it had a notch in the middle of the pins, so it could hang over the end of the PCI-32 slot and still fit; the only other problem was whether a capacitor or something else sticking up from the motherboard blocked it. With PCI-e the card has to physically fit inside the slot, so longer physical slots are always preferred regardless of what they are electrically. You can see in this photo the extra pins of a PCI-64 card just hanging in mid-air, but since the card is designed to be backward compatible with regular PCI-32 slots it usually wasn't a problem.

    PCI-e v2 is roughly 500 MB/s per lane in each direction and PCI-e v3 is roughly 1000 MB/s per lane, so in a v2 x8 slot you'll get about 4 GB/s and in a v2 x16 slot about 8 GB/s; if it's v3 you can double those numbers. Either way, I don't think you'll have anything coming close to using those bandwidths, so it probably doesn't matter what mode it runs in or whether it has x8 or x16 electrical lanes.
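
    If it helps, a quick summary of those round per-lane figures for the common widths:

    # GB/s per direction, using the round figures above (v2 = 0.5 GB/s per lane, v3 = 1.0)
    for gen, per_lane in (("v2", 0.5), ("v3", 1.0)):
        for lanes in (4, 8, 16):
            print(f"PCI-e {gen} x{lanes}: ~{per_lane * lanes:g} GB/s")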
     
