
When is consumer 10GbE going to happen?

Discussion in 'Networking, Telephony & Internet' started by Smokin Whale, Sep 21, 2014.

  1. Gibbon

    Gibbon grumpy old man

    Joined:
    Jun 30, 2001
    Messages:
    6,597
    Location:
    2650
    Cheers, that is interesting. I'd only be choosing between an X520-DA2 (which I actually already have) and an X710-DA2 - it'll be fibre either way.
    Looks like the X710 only uses about 60% of the power of the X520 (5 Gbps/watt for the X710 vs 3 Gbps/watt for the X520).

    Both 2 port, so directly comparable in this graph.

    That's actually quite significant.

    upload_2021-10-27_21-13-47.png
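
    As a rough sanity check on that 60% figure - back-of-the-envelope only, assuming both dual-port cards are pushed to their full 2 x 10 Gbps:

        # Back-of-envelope comparison from the efficiency figures above.
        # Assumes 20 Gbps aggregate for a dual-port card running flat out.
        $throughput = 20                 # Gbps
        $x520Watts  = $throughput / 3    # ~6.7 W at 3 Gbps per watt
        $x710Watts  = $throughput / 5    # 4.0 W at 5 Gbps per watt
        $x710Watts / $x520Watts          # ~0.6, i.e. the X710 draws ~60% of the X520's power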
     
    Last edited: Oct 27, 2021
    Symon likes this.
  2. phreeky82

    phreeky82 Member

    Joined:
    Dec 10, 2002
    Messages:
    9,816
    Location:
    Qld
    I'm running a relatively SFF case with a 10Gb copper NIC in it, and although these cards feel hot to the touch you need to remember it's a card with no active cooling - the heat is a fraction of what's coming from the other devices in the PC. My case does have some intake ventilation at the base but no fan near the card. It's also < 2cm from my 6700 XT, and even then the GPU doesn't spin its fans up during normal desktop use.

    Single-digit watts in a PC is nothing these days.
     
    Gibbon likes this.
  3. davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    3,155
    I run mine in a no/low-airflow case with no issues - using a single-fibre SFP module.
    I can imagine the copper 10GbE cards get hot.
     
    Gibbon likes this.
  4. Gibbon

    Gibbon grumpy old man

    Joined:
    Jun 30, 2001
    Messages:
    6,597
    Location:
    2650
    Good one - doesn't sound like there should be any issues then :thumbup:
     
  5. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    9,011
    Location:
    Adelaide, SA
    Wow, I don't know if it's Asus' implementation of it or the overall kwalitee of the Aquantia (now Marvell) 10gig chip, but the card is an utter steaming pile of shit.
    Avoid at all costs unless you absolutely need to sacrifice reliability + simplicity + SR-IOV + RDMA for WOL over fibre (like I did for just one machine :sick:).
    3 times the price of a Mellanox, 30 times the fucking around to get it working right. Fucking bullshit.
    Half of it. The other half is somewhere between mainland China and Sydney. I'm betting the Sydney to other-mainland-capital leg is the slowest part of the journey.
    My X540 dual-port used to be in a rig with fair airflow (not a proper server, though). It was a "don't touch it for 10 minutes after a hard thrashing" affair. 10G SFP+ is positively a breath of fresh air - even when using dual-port 40G QSFP cards.
     
  6. phreeky82

    phreeky82 Member

    Joined:
    Dec 10, 2002
    Messages:
    9,816
    Location:
    Qld
    I have 2 of the TP-Link TX401. Cheap as chips and rock solid - both worked out of the box (Linux machines) and I haven't had the slightest issue. In fact, on any connection I have - from crummy old Cat5e runs to new shielded Cat6a - as long as there's continuity across all 4 pairs, these will push 9.5Gbps all day long.
     
  7. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    9,011
    Location:
    Adelaide, SA
    Given the commentary online, I'm leaning towards Asus doing something funky with their implementation.
    As long as I use the original firmware and downgrade the default 2020 Windows 10 driver to the 2018 Asus one, it seems to work as expected. Update to any of the modern-ish firmwares and/or drivers and shit gets fucked - I can shove a 100GB folder full of movies from NVMe to NVMe at 10g, but make it a 1GB folder full of small files and it slows to a crawl and stalls so badly you can't quit the copy.
    Fire up iperf and hit it unidirectionally and it seems good. Do it full duplex (which Mellanox & Intel both give zero fucks about) and weird pauses in traffic happen - 20g, 20g, 20g, 20m, 20g, 20g, 20g, 20m...
    Unless of course you used the wrong driver with the wrong firmware, in which case it just shits the bed three seconds into a FD iperf and needs a reboot to work again for another three seconds.
    Or you upgrade to the latest firmware, and then the Intel SFP that worked perfectly with all the other revisions & the MLX cards flaps like a motherfucker. Then you discover the firmware downgrade process is also utterly fucked.
    What a joke. WOL over fibre has given me a few new grey hairs.
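
    For reference, the full-duplex test I mean is roughly this, run from PowerShell against a placeholder address (--bidir needs iperf3 3.7 or newer; older builds can approximate it by running a second client with -R at the same time):

        # one-way run: looks fine on the Asus card
        .\iperf3.exe -c 10.0.0.2 -t 30

        # both directions at once: this is where the weird pauses show up
        .\iperf3.exe -c 10.0.0.2 -t 30 --bidir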
     
  8. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    9,011
    Location:
    Adelaide, SA
    This is the kind of weird shit I'm trying to work out. Since starting the 10g migration, I've been doing x265 video renders with the source over the network.
    weird_shit_network_render_MLX.jpg
    Which looks like this from a taskmanager perspective. Data flows in, CPU processes it and the world is happy. The performance penalty is minuscule with the source on the other end of the network versus dragging it locally. Handbrake barely notices, even when I have three machines rendering off that share.

    However, move the SFP module to the Asus card next door, run the same handbrake script and taskmanager looks like this. Notice anything different? weird_shit_network_render_ASUS.jpg
    Render performance is damn nearly halved. Despite data rates sub-1G, the Asus card cannot keep the CPU fed.

    Why yes, I've checked in HWI to make sure both cards are directly connected to the PCIe3 CPU lanes.
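
    If anyone wants to double-check the same thing without HWiNFO, the NetAdapter cmdlets can dump the bus/slot and negotiated PCIe link per NIC - just a quick cross-check, not gospel:

        # Shows bus/device/function, slot and negotiated PCIe link per adapter,
        # as a cross-check against what HWiNFO reports
        Get-NetAdapterHardwareInfo | Format-Table -AutoSize
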
    Hit 'em with iperf and the Asus card is fractionally faster with ~9.9x2 versus ~9.8x2. In the real world?

    The next step, I reckon, is to hope all the driver/firmware stuffing around has bolloxed something in software, and blow the OS away for a fresh load. *sigh* The more I fiddle with this card the less I want to keep it.
     
  9. Symon

    Symon Castigat ridendo mores

    Joined:
    Apr 17, 2002
    Messages:
    5,088
    Location:
    Brisbane QLD
    I think it was dakiller who posted up the technical reasons why 10GBase-T chews up much more power, and hence runs hotter than SFP/SFP+.

    Having said that, even my SFP+ cards run too hot for my liking so they all have 40mm fans on them.
     
    Gibbon likes this.
  10. Gibbon

    Gibbon grumpy old man

    Joined:
    Jun 30, 2001
    Messages:
    6,597
    Location:
    2650
  11. phreeky82

    phreeky82 Member

    Joined:
    Dec 10, 2002
    Messages:
    9,816
    Location:
    Qld
    Check all of the offload settings between the cards. 10Gb is heavy on CPU utilisation; offloads greatly reduce that, and a mismatch could explain significant performance differences - particularly while you're also working the CPU hard. It would probably spike like you're seeing.
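
    A quick way to compare them side by side is PowerShell's NetAdapter cmdlets - the adapter names below are placeholders, substitute whatever Get-NetAdapter reports on your machine:

        # Per-feature offload state (checksum, large send, RSS) for one adapter
        Get-NetAdapterChecksumOffload -Name "Asus 10G"
        Get-NetAdapterLso -Name "Asus 10G"
        Get-NetAdapterRss -Name "Asus 10G"

        # Dump every advanced driver property so the two cards can be diffed
        Get-NetAdapterAdvancedProperty -Name "Asus 10G" |
            Format-Table DisplayName, DisplayValue
        Get-NetAdapterAdvancedProperty -Name "Mellanox 10G" |
            Format-Table DisplayName, DisplayValue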
     
  12. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    9,011
    Location:
    Adelaide, SA
    Still fiddling with it. It makes no difference whether the offloads are on or off, nor does maxing the send/receive buffers. Just for lulz, I did a quick render over the onboard 1g Intel NICs... and it was better.
    I've found an online reference for disabling "Recv Segment Coalescing (IPv4)" for the copper version but that option simply isn't present in any of the drivers I've tried.

    EDIT: if I use PowerShell's Get-NetAdapterRsc, only the MLX shows the option, and it's disabled.
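
    For anyone following along, this is roughly the check - and the disable command, where the driver actually exposes RSC (adapter name is a placeholder):

        # Lists IPv4/IPv6 receive segment coalescing state for adapters whose
        # driver exposes it - on mine only the Mellanox shows up
        Get-NetAdapterRsc

        # Turn it off on a specific adapter (placeholder name)
        Disable-NetAdapterRsc -Name "MLX 10G"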

    At this stage I'm ready to hit up the vendor for help, and failing that, a return. For what this sucker cost over a far superior Mellanox, I could leave the intended machine running for 9 months and have zero need for WOL.
     
    Last edited: Oct 29, 2021
  13. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    9,011
    Location:
    Adelaide, SA
    Buggerit, I'm done with sodding around with this thing. Vendor better have a magic fix or I'm down for returning.
     
  14. phreeky82

    phreeky82 Member

    Joined:
    Dec 10, 2002
    Messages:
    9,816
    Location:
    Qld
    If they can't point you to drivers that work then just return it, I guess. I'd have thought it would be a plug-n-play affair, even for Windows these days.

    You could also try the driver from another NIC that uses the same chipset. It may fail, but there's not much harm in trying. There really isn't much between all the different NICs - each vendor picks a chipset, designs a PCB around it with a heatsink and backplate, and otherwise they're essentially identical.
     
  15. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    9,011
    Location:
    Adelaide, SA
    That's the plan of attack.
    Honestly, this feels like it's 2002 all over again and I'm doing network support at a LAN party where every munged rig is using a bloody Realtek NIC...
    There are multiple versions of drivers and firmware available if you know where to look. They pretty much just make everything worse instead of better: buggy performance, instability, card crashes, right down to no function at all.

    I'm staggered. It's been a real long time since I've had a network card be this shit.
     
    Dr Evil and Pugs like this.
  16. OP
    OP
    Smokin Whale

    Smokin Whale Member

    Joined:
    Nov 29, 2006
    Messages:
    5,188
    Location:
    Pacific Ocean off SC
    Holy crap I can't believe this thread is still going... damn, who would have thought progress would have been this slow :(

    Most of my network is still on gigabit, 6+ years on. Admittedly, it doesn't matter most of the time - but there are moments where I'd definitely like more bandwidth. At least my main NAS has 10GbE now; that made a huge difference.
     
    Last edited: Nov 18, 2021
    Pugs likes this.
  17. Pugs

    Pugs Member

    Joined:
    Jan 20, 2008
    Messages:
    9,575
    Location:
    Redwood Park, SA
    Yep... most of the big networking brands don't even offer 2.5 or 5GbE...
     
    Aetherone likes this.
  18. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    9,011
    Location:
    Adelaide, SA
    Manufacturers: tHeRe'S nO dEmAnD for faster than 1g ethernet
    Also manufacturers: faster-than-1g ethernet comes with a 1000% markup, suckers.

    I see Synology have dropped their newest SOHO bigarse (tm) NAS model. Probably $2000+ AUD, and faster-than-1g is a $500 accessory for it.
     
    Last edited: Oct 31, 2021
  19. Blinky

    Blinky Member

    Joined:
    Jul 4, 2001
    Messages:
    3,961
    Location:
    Brisbane
    You forgot to close your sarcasm tags </sarc>

    Seriously though, motherboard and router specs are changing. It's slow, but it is happening.

    I did the unthinkable the other night. I fear the flaming if I tell you what I bought to add to the network, but it involves a two and a point five. "They were cheap!" is my excuse - at least I think they were cheap; it was in the early hours of the next morning.
     
  20. phreeky82

    phreeky82 Member

    Joined:
    Dec 10, 2002
    Messages:
    9,816
    Location:
    Qld
    To be fair, it would be hard to justify all the workstation-targeted multi-gig switches when most workstations are still stuck at 1G. That's the first place to fix this.
     
    Pugs likes this.
