
When is consumer 10GbE going to happen?

Discussion in 'Networking, Telephony & Internet' started by Smokin Whale, Sep 21, 2014.

  1. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    8,733
    Location:
    Adelaide, SA
They sure will. The only downside is that if it's from/to spinning rust, the seek penalty of two concurrent access threads makes the extra GbE bonus somewhat paltry.

    Solid state or high IO RAID and you're golden. So much easier than the old days :thumbup:
     
  2. DavidRa

    DavidRa Member

    Joined:
    Jun 8, 2002
    Messages:
    3,062
    Location:
    NSW Central Coast
  3. bcann

    bcann Member

    Joined:
    Feb 26, 2006
    Messages:
    5,893
    Location:
    NSW
They've been under $1K for a while now if you looked. Apparently the only drawback is the fans are as noisy as sin.
     
  4. OP
    Smokin Whale

    Smokin Whale Member

    Joined:
    Nov 29, 2006
    Messages:
    5,182
    Location:
    Pacific Ocean off SC
    Main fileserver has a few storage space pools (15TB worth) and around 1TB SSD storage. Sometimes get up to 10 clients hitting it at once, either grabbing or writing big chunks of data. The load is balanced between the two links. I've seen up to 3.5Gbps total network activity (up and down) under heavy load. Then it gets backed up to another NAS overnight but that can only go at 1Gbps. The spinning rust is definitely a bottleneck for small reads and writes but the network is a big bottleneck for sequential stuff and anything off SSDs.
     
  5. Luke212

    Luke212 Member

    Joined:
    Feb 26, 2003
    Messages:
    9,826
    Location:
    Sydney
Where can you get the XS708T for <$1K? I usually see it at around $1,500 AUD.
     
  6. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    8,733
    Location:
    Adelaide, SA
    "T" has only just been announced. The baby XS708"E" variant with povo management features has been sub $1k AUD for a while.

    I'm hoping the 24+4 variant switches (like the S3300-28X) start to drop in price. They're really useful.
     
  7. bcann

    bcann Member

    Joined:
    Feb 26, 2006
    Messages:
    5,893
    Location:
    NSW
  8. OP
    Smokin Whale

    Smokin Whale Member

    Joined:
    Nov 29, 2006
    Messages:
    5,182
    Location:
    Pacific Ocean off SC
  9. ohfor88smiley

    ohfor88smiley Member

    Joined:
    Jul 18, 2004
    Messages:
    123
    Location:
    SA
Would 4Gb or 8Gb fibre work? They're starting to drop in price: around $600 on eBay, or around $1,000 for a 24x 1Gb switch with 4x 10GbE ports. Sometimes you can get a switch cheap and then add a 4x 10Gb module.

I was expecting 10Gb stuff to have dropped more by now, as I picked up a new HPE FlexFabric 5820X 24XG SFP+ switch (JC102B) with 24x 10Gb SFP+ ports for $750, and 24x SFP+ modules for $550, off eBay over a year ago, and having just looked again it seems I got more of a good deal than I realised.
    I use Intel X520-SR2 cards with fibre and they seem good (never tried twinax).
    10Gb is awesome.
     
    Last edited: May 17, 2016
  10. ohfor88smiley

    ohfor88smiley Member

    Joined:
    Jul 18, 2004
    Messages:
    123
    Location:
    SA
Crap hey, but I hope they'll get it working again nicely.

     
    Last edited: May 17, 2016
  11. Zardoz

    Zardoz Member

    Joined:
    Jun 28, 2001
    Messages:
    2,170
    Location:
    Melbourne
    Bump! Super exciting to see an NBASE-T NIC. Has anyone had a chance to play with one of these? Wonder how well it'll work with the Cisco multigigabit 3850/3560CX series.
     
  12. ohfor88smiley

    ohfor88smiley Member

    Joined:
    Jul 18, 2004
    Messages:
    123
    Location:
    SA
  13. OP
    Smokin Whale

    Smokin Whale Member

    Joined:
    Nov 29, 2006
    Messages:
    5,182
    Location:
    Pacific Ocean off SC
Alright, putting together a new 6x7m office and adding around 2TB of SSD storage to the server pretty soon, so a faster network is on my radar now. Ideally I'm looking to have the server and 2-3 workstations on 10GbE or something close, since I'm not sure that link aggregation will give me the best performance for single-threaded loads (we do a lot of drive imaging straight off the server onto SSDs). Hoping to be as cost effective as possible. I'll have an opportunity to do some wiring as well. Should I stick to copper or do something funky like InfiniBand?
     
    Last edited: Jul 3, 2016
  14. fad

    fad Member

    Joined:
    Jun 26, 2001
    Messages:
    2,393
    Location:
    City, Canberra, Australia
  15. OP
    Smokin Whale

    Smokin Whale Member

    Joined:
    Nov 29, 2006
    Messages:
    5,182
    Location:
    Pacific Ocean off SC
Hmm, was hoping to accomplish this for under $1k if possible. Otherwise I might just hang tight and go for a 4x1Gb LACP setup for the server and a few 2x1Gb LACP setups for the workstations, as that should satisfy our bandwidth requirements for now. I already have the equipment ready to go that can do that, but it does rely on Microsoft fixing it in Windows 10 Pro, as that is our primary OS around the office.

Edit: Every time I look into LACP, it looks like it's still not possible to achieve more than 1Gbit in a single-threaded transfer, irrespective of the underlying OS and drivers. Does that sound about right?
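(For what it's worth, that single-stream ceiling falls out of how the bond picks a link: a per-flow hash rather than round-robin. A rough Python sketch of the idea; the hash function, IPs and ports here are made up for illustration, and real switches/NICs use their own hash policies:)

```python
# Hypothetical sketch of why LACP caps one transfer at one link's speed:
# the bond hashes per-flow header fields (a "layer 3+4"-style policy is
# mimicked here) so every packet of a flow lands on the SAME member link.
import zlib

def pick_member_link(src_ip: str, dst_ip: str,
                     src_port: int, dst_port: int,
                     num_links: int) -> int:
    """Hash a flow's addresses/ports to one member link index."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_links

# One file copy = one TCP flow: the hash inputs never change, so a
# 4x 1Gb LACP trunk still moves that copy over a single 1Gb link.
flow = ("10.0.0.2", "10.0.0.10", 50123, 445)  # made-up client -> SMB server
assert pick_member_link(*flow, num_links=4) == pick_member_link(*flow, num_links=4)

# Many distinct flows DO spread out, which is why aggregate load from
# several clients balances even though any single stream tops out at ~1Gb.
used = {pick_member_link("10.0.0.2", "10.0.0.10", p, 445, 4)
        for p in range(50000, 50100)}
print(f"links used by 100 flows: {sorted(used)}")
```

(SMB multichannel, mentioned later in the thread, is the usual workaround: it opens several TCP flows for one copy, so the hash can spread them.)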

    Edit 2: Interesting switch I found: http://www.ebay.com.au/itm/Trendnet...154851?hash=item5b26832ce3:g:JOgAAOSwmtJXX4mF

24-port 1Gb + 4x 10Gb SFP+ for ~$550 new. I could work with that.

    Edit 3: 10G SFP PCIe cards for ~$55 a pop: http://www.ebay.com.au/itm/Mellanox...428428?hash=item5b24ccd84c:g:fVgAAOSwDNdVpaWx

Now for a nice noob question: where does one obtain reasonably priced cables for an SFP+ connection? I'm seeing prices of $30 for 2m; does that sound right?
     
    Last edited: Jul 3, 2016
  16. DavidRa

    DavidRa Member

    Joined:
    Jun 8, 2002
    Messages:
    3,062
    Location:
    NSW Central Coast
    Yes, I believe you're right. But what might save you is SMB multichannel - independent NICs in each workstation, independent NICs in the server. Same subnet. Basically configure each machine with multiple NICs and IP addresses, and let Windows do its thing.

    Not sure if it's needed any more, but you might want to investigate [GS]et-Smb[Client|Server]Configuration commands in PowerShell to check your config.
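(If it helps, those checks look something like this in PowerShell, using the built-in SmbShare module cmdlets; a sketch only, so check Get-Help on your own box:)

```powershell
# Is multichannel enabled on each side?
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
Get-SmbServerConfiguration | Select-Object EnableMultiChannel

# While a big copy is running, confirm several connections are in use
Get-SmbMultichannelConnection

# Turn it back on if something disabled it
Set-SmbClientConfiguration -EnableMultiChannel $true
```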
     
  17. OP
    Smokin Whale

    Smokin Whale Member

    Joined:
    Nov 29, 2006
    Messages:
    5,182
    Location:
    Pacific Ocean off SC
    That's what I have at the moment on my server, and I can't prove it because I only have it on one system, but I believe it tops out at 1Gbit for a single transfer. I had a look into those commands earlier and they are currently broken for me, so I must not have got the hotfix yet. Others have reported the same on the Intel community.
     
  18. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    8,733
    Location:
    Adelaide, SA
That kind of relies on MS thinking that it's broken instead of a feature.

    That'll get you the 24+2x10G copper switch.
    http://www.netgear.com.au/business/products/switches/smart/S3300-28X.aspx#tab-techspecs

This might be the saving grace, with the caveat that systems on both ends have to support it. It's also pretty simple to have a bash at.
     
  19. OP
    Smokin Whale

    Smokin Whale Member

    Joined:
    Nov 29, 2006
    Messages:
    5,182
    Location:
    Pacific Ocean off SC
    Yeah, they haven't made NIC teaming easy.

    2x10G isn't enough. Need 4x10G minimum if I'm going to go down that path, already linked a switch that can do that for only $550.

    Fair point on SMB multichannel, I'll have a go at that sometime soon.
     
  20. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    8,733
    Location:
    Adelaide, SA
    Have a closer look - that model has 2x 10G copper and 2x 10G SFP+.
    I figure that makes 10G SFP+ to the (hopefully) very nearby server cheap and facilitates 2x 10G copper to the workstations over cheap & robust cat6/7 cables.

    Are we back to the days of teaming being a server OS only capability?
     
