10GBase-T vs SFP+

Discussion in 'Business & Enterprise Computing' started by cjzdj, Nov 23, 2018.

  1. cjzdj

    cjzdj Member

    Joined:
    Apr 14, 2011
    Messages:
    19
    Location:
    Upper North Shore NSW
    Would you use 10GBase-T or SFP+ for all-flash storage iSCSI networking?

    SFP+ comes at roughly a $1.5K premium. Sounds small, but when you're over budget by 10% you start trying to save where possible.

    Are the differences actually noticeable?

    Thanks all
     
  2. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    34,636
    Location:
    Brisbane
    Last edited: Nov 23, 2018
  3. fad

    fad Member

    Joined:
    Jun 26, 2001
    Messages:
    2,202
    Location:
    City, Canberra, Australia
    10GBase-T uses around 5 W per port and has a latency of around 2.5 µs. SFP+ is around 0.7 W and 0.5-0.7 µs.

    When you have a lot of ports it all adds up.
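
    A rough back-of-the-envelope sketch of how it adds up (Python; the per-port watt figures are the ballpark numbers above, and the port count is just an assumed example):

    # Rough per-rack power comparison using the ballpark per-port figures quoted above.
    # These numbers are assumptions for illustration; real draw depends on the
    # NIC/switch silicon, cable length and negotiated link speed.
    PORTS = 48 * 4            # e.g. four 48-port top-of-rack switches
    W_10GBASE_T = 5.0         # watts per 10GBase-T port (worst case)
    W_SFP_PLUS = 0.7          # watts per SFP+ port (DAC / short optics)

    delta_w = PORTS * (W_10GBASE_T - W_SFP_PLUS)
    kwh_per_year = delta_w / 1000 * 24 * 365

    print(f"Extra draw: {delta_w:.0f} W (~{kwh_per_year:.0f} kWh/year)")
    # -> Extra draw: 826 W (~7232 kWh/year), before cooling overhead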
     
  4. ex4n

    ex4n Member

    Joined:
    Oct 5, 2011
    Messages:
    2,100
    Location:
    Perth
    I use 10GBase-T for a few reasons: 1) we can get dual 10GbE on our motherboards, 2) it works with Cat6 cabling, 3) our switches are already 10GBase-T as well, and 4) there's no need to buy any SFP+ modules. But mostly I think it's just easier.
    I then use additional Mellanox cards for dedicated InfiniBand etc.
     
  5. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,348
    Location:
    Canberra
    This is pretty old. The gap is definitely smaller now. Still, if you have racks' worth of ports, you'd care.

    I used to be a Base-T fan. Now, like all things as I do more and more in IT, well, it depends.
     
    Daemon likes this.
  6. ir0nhide

    ir0nhide Member

    Joined:
    Oct 24, 2003
    Messages:
    4,052
    Location:
    Adelaide
    Optics all the way; elvis already linked FS, which work just fine. I'll take a $20 generic transceiver over a $2000 official one 9 days/week.
     
    elvis likes this.
  7. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,399
    Location:
    qld.au
    Go for whichever you find suits, or even go for a hybrid switch.

    The power and latency comparisons are all bullshit too. For runs of less than 3 m you're using 2 W or less for copper; the 5-10 W figure is always worst case (i.e. long runs). The copper ports on the heavily used switches I manage are 3-4 W/port (including switch management + L3 processing) with runs of 1-5 m. For a 24-port switch you're still looking at sub-100 W power usage, whereas it might be around 70 W if it were all SFP+. If you're going for 400+ ports it'll make a difference; otherwise ignore the power side.

    Latency-wise, the real-world difference between 1 µs and 2.5 µs is like the difference between travelling at 100 and 101 km/h. Unless it's for a very densely populated storage array or some very specialised HPC clusters, there's no real-world difference. In most instances your application layers are going to take tens if not hundreds of milliseconds to process, meaning the difference is lost well in the noise. To put it into perspective, that's a 10,000+ times difference.
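
    A minimal sketch of that perspective in numbers (Python; the 30 ms application time is just an assumed example, not a measurement):

    # How much of a typical request the extra port latency actually is.
    copper_us = 2.5           # approx 10GBase-T port latency (from above)
    sfp_us = 1.0              # approx SFP+ port latency (from above)
    app_ms = 30.0             # assumed end-to-end application processing time

    extra_us = copper_us - sfp_us
    share = extra_us / (app_ms * 1000) * 100
    print(f"Extra latency: {extra_us:.1f} us = {share:.4f}% of a {app_ms:.0f} ms request")
    # -> Extra latency: 1.5 us = 0.0050% of a 30 ms request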

    There are of course benefits to fibre; if you have an electrically noisy environment then it's a great way to ensure there's no interference. Twinax means you can connect direct without transceivers, and it's really not that much more than Cat6, so if you're buying all at once it's worth looking at.
     
  8. olie

    olie Member

    Joined:
    May 22, 2002
    Messages:
    868
    Location:
    Australia
    Does your cabling support 10G?
    No? Run optics
    Yes? Run copper
     
    Daemon likes this.
  9. Hive

    Hive Member

    Joined:
    Jul 8, 2010
    Messages:
    5,049
    Location:
    ( ͡° ͜ʖ ͡°)
    SFP+ all the way in DC/core networks. If you really want 10GBase-T, buy a 10GBase-T SFP+ module.
     
  10. tensop

    tensop Member

    Joined:
    Mar 26, 2002
    Messages:
    1,239
    Are people still bothering with DAC these days? FS has brought the price of doing it optically down to basically nothing, and it's a shittonne easier managing OM fibre patch leads vs a DAC cable.
     
  11. ir0nhide

    ir0nhide Member

    Joined:
    Oct 24, 2003
    Messages:
    4,052
    Location:
    Adelaide
    DACs between vendors can be hit and miss. Between like switches/routers, for sure.
     
  12. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,348
    Location:
    Canberra
    It's because vendors are *****.

    STOP OPPRESSING ME OCAU. IF I WANT TO SEE YOU NEXT TUESDAY, I WILL.
     
    freaky_beeky and elvis like this.
  13. wwwww

    wwwww Member

    Joined:
    Aug 22, 2005
    Messages:
    4,942
    Location:
    Melbourne
    Get used 40Gb InfiniBand equipment; you can probably fund a dual redundant setup with iSCSI multipath for under $1500, and it will be much cheaper, much faster and more reliable (assuming your planned setup doesn't have redundancy).
     
  14. ex4n

    ex4n Member

    Joined:
    Oct 5, 2011
    Messages:
    2,100
    Location:
    Perth
    I run 10Gbps DAC between some servers; it costs me about $75 off eBay for two SFP+ cards and a cable, so at that price it's worth it. This is only for very small setups or specific use cases, like my home lab. Mostly I'm using 10GBase-T now, as I mentioned above.
     
  15. scips

    scips Member

    Joined:
    Apr 10, 2004
    Messages:
    412
    tiny bump.

    We run HPE DACs between the SAN > iSCSI switches > hosts, only 10 of them (so around $2800 I think it was).

    Boss is a stickler for rules and spares, so we have spare DACs, SFP+s, switches, Merakis etc. Great for peace of mind, but I'd prefer the budget to score some sweet new gear tbh.
     
