Let's talk Blade Servers

Discussion in 'Business & Enterprise Computing' started by QuakeDude, Dec 13, 2013.

  1. QuakeDude

    QuakeDude ooooh weeee ooooh

    Joined:
    Aug 4, 2004
    Messages:
    8,565
    Location:
    Melbourne
    Guys,

    We're currently going through an 'evaluation' process to work out whether it's smart to move to blade technology in the near future.

    Now - we ran blade servers 3 or 4 years ago, and threw them out due to a bad iSCSI implementation that almost kneecapped the business. So selling blades back into the business here is no small task, but here's why I think it might be a good idea:

    - We're constantly buying new 2RU servers (2 CPUs, 256GB RAM) for the VMware farm.
    - Each server requires 2 x fibre for Ethernet and 2 x fibre for storage.
    - I now have a LOT of fibre running everywhere, and we're running out of fibre ports on the Cisco 6513s (and they're not cheap to expand!) - see the rough port sums below.
    - I don't want to keep adding fibre switches for storage, as the design is getting messy.
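
    To put some rough numbers on the port problem, here's a quick back-of-the-envelope sketch. The blades-per-chassis and uplinks-per-chassis figures are assumptions for illustration, not quotes from Dell:

```python
# Rough fibre-port arithmetic: rack servers vs. blade chassis uplinks.
# Every figure below is an illustrative assumption, not a vendor spec.

FIBRES_PER_RACK_SERVER = 4   # 2x fibre for Ethernet + 2x fibre for storage (current build)
BLADES_PER_CHASSIS = 16      # e.g. half-height blades in a chassis (assumed)
UPLINKS_PER_CHASSIS = 8      # assumed consolidated Ethernet + FC uplinks per chassis

def upstream_ports(hosts: int, blades: bool) -> int:
    """Fibre ports consumed on the core switches for a given host count."""
    if not blades:
        return hosts * FIBRES_PER_RACK_SERVER
    chassis = -(-hosts // BLADES_PER_CHASSIS)  # ceiling division
    return chassis * UPLINKS_PER_CHASSIS

for n in (16, 32, 48):
    print(f"{n} hosts: rackmount={upstream_ports(n, False)} ports, "
          f"blades={upstream_ports(n, True)} ports")
```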

    The question was asked of me: "Well, what do other people do? Is this the way that computing is heading?", and given blades were around 5 years ago, it's sort of a hard question to answer.

    So I'd like to get everyone's opinion on blades - specifically, the current generation of them. We're looking at the Dell M1000e chassis and the M620 servers, which seem to be a good sweet spot given our specs.

    Thoughts?
     
  2. Rubberband

    Rubberband Member

    Joined:
    Jun 27, 2001
    Messages:
    6,756
    Location:
    Doreen, 3754
    Caveat: This was for a single blade chassis in a small DC about 5 years back too.

    I used them for a while and I really liked them for consolidation purposes; they worked really well for VMware.

    The drawback is that you generally have a single point of failure within the chassis itself, so you would want to spread your cluster across at least two chassis. I had a bug in the fibre switch firmware take out the entire chassis =/

    Expansion is another issue, in that you can become entrenched in blade architecture: to expand and maintain standardisation, you need to keep buying entire chassis. Upgrade cycles also become much more expensive when you are replacing entire chassis instead of discrete boxes.

    In an enterprise environment you would mix and match for flexibility - or at least we do, although we seem to be buying engineered solutions now.
     
  3. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    18,315
    Location:
    Canberra
    Every time I look at Blades, I see them as purely an answer to a physical space problem - with a side helping of management, which is somewhat removed thanks to the magic of virtualisation.

    They have more extreme power requirements, more extreme cooling requirements and somewhat lock you into a specific vendor.

    If you physically have the space, then it's going to be easier and cheaper to stick with 1-2U servers (although a fully populated rack of 1Us isn't going to be great).
     
  4. Jase

    Jase Member

    Joined:
    Jun 28, 2001
    Messages:
    196
    Location:
    Sydney 2081
    We use HP c7000 Blade Systems at my workplace with mostly BL460c blades in them.

    I like that it simplifies SAN FC cabling: you cable the enclosure once, so any new or existing blade that needs FC connectivity in the future is already cabled. With rack-mounted systems it was always a pain to have to get electricians out to cable each server ad hoc as required, which affected the lead time to provision a new server.

    What I don't like is the VC-FC modules they use to connect to the core SAN: they don't have any intelligence about where they insert the blade's NPIV WWN, they just round-robin it.

    The NPIV setup on IBM AIX VIOS is much more sophisticated; however, I guess it's more of a hypervisor solution than a blade system, so I can't directly compare them.
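
    To make the round-robin complaint a bit more concrete, here's a toy sketch of what that placement behaviour amounts to, versus always picking the least-loaded uplink. The uplink names, counts and existing login numbers are made up, and this is obviously not HP's actual firmware logic:

```python
from itertools import cycle

# Toy illustration only: dealing NPIV WWNs out round-robin ignores the existing
# load on each FC uplink. Uplink names and login counts are invented numbers.

existing_logins = {"uplink-1": 6, "uplink-2": 1, "uplink-3": 1, "uplink-4": 1}

def place_round_robin(new_wwns: int, load: dict) -> dict:
    """Deal new WWNs out in fixed order, ignoring how busy each uplink already is."""
    load = dict(load)
    rr = cycle(load)
    for _ in range(new_wwns):
        load[next(rr)] += 1
    return load

def place_least_loaded(new_wwns: int, load: dict) -> dict:
    """A smarter alternative: always pin the next WWN to the quietest uplink."""
    load = dict(load)
    for _ in range(new_wwns):
        load[min(load, key=load.get)] += 1
    return load

print("round-robin :", place_round_robin(4, existing_logins))
print("least-loaded:", place_least_loaded(4, existing_logins))
```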
     
  5. CordlezToaster

    CordlezToaster Member

    Joined:
    Nov 3, 2006
    Messages:
    4,080
    Location:
    Melbourne
    Have you looked at the Nutanix thread?
     
  6. username_taken

    username_taken Member

    Joined:
    Oct 19, 2004
    Messages:
    1,352
    Location:
    Austin, TX
    We deployed about 6,000 of the BL460c G7s a few years back. Good hardware, but we had all sorts of problems with firmware etc., and the Emulex CNAs had terrible firmware/driver issues (we still see occasional issues caused by them).

    We never bothered too much with the FC stuff. We did have about a dozen chassis with the Flex switches that did FCoE, which were pretty good, but we found that 2x 10Gbps was plenty to run both network and storage (mostly NFS, some iSCSI) over, even for VMware, for our use cases.
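
    FWIW, the headroom claim is easy to sanity-check with rough numbers. The per-host figures below are assumed workload peaks for illustration, not measurements from that environment:

```python
# Back-of-the-envelope check that 2x 10GbE can carry storage + VM traffic per host.
# All per-host workload figures are assumptions for illustration only.

links_gbps = 2 * 10            # two 10GbE ports per blade
storage_peak_gbps = 6          # assumed NFS/iSCSI peak per host
vm_network_peak_gbps = 4       # assumed guest network peak per host
vmotion_reserve_gbps = 4       # assumed burst reserve for vMotion and the like

used = storage_peak_gbps + vm_network_peak_gbps + vmotion_reserve_gbps
print(f"peak demand {used} Gb/s of {links_gbps} Gb/s "
      f"({used / links_gbps:.0%} utilised, {links_gbps - used} Gb/s headroom)")
```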
     
  7. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    44,218
    Location:
    Brisbane
    Used blades in three different industries so far.

    * Large finance. Consolidation of physical Windows servers with VMware. Worked well, as I/O performance wasn't important.

    * Large-scale web development (largest online gaming site in south-east Asia, second largest online gaming site in Europe). Consolidation of CentOS servers, initially with VMware and later with KVM when VMware's I/O capped out. Worked well. Was quite pricey compared to other options, however.

    * VFX 3D rendering (feature film industry). Bare-metal Linux. Abject failure. CPU and I/O performance nowhere near what was required, price way out of spec, and it generated way too much heat per CPU. Ended up costing way more than other options despite taking up less rack space (even considering physical space, specialised high-density 2RU rackmount boxes worked out far cheaper). Other studios are coming to the same conclusion, some regretfully after substantial investment and failure.

    Moral of the story: it depends on the industry and use case (like anything, really). It seems if CPU and I/O performance is low on your list of needs compared to consolidation and high-density virtualisation, blades work well.
     
  8. C4ndl3s

    C4ndl3s Member

    Joined:
    Feb 5, 2003
    Messages:
    118
    I've just finished implementing our new infrastructure:

    M1000e Blade Chassis
    4 x M620 Blades with additional mezz Broadcom cards
    4 x Chassis IO Aggregators
    2 x Force10 4820T switches
    1 x Equallogic PS6100x
    1 x Equallogic PS6100e

    We're an ISV, so we are predominantly development. We have around 120+ virtual servers and 4-5 physicals doing a lot more than what most servers would do. Approx 60 staff.

    This new infrastructure setup is a lot easier to manage than the 6+ different models of Dells we used to run, which is a huge weight off my shoulders, as I'm the only admin.

    Expansion is also really easy, which is great. No messing around trying to find available ports on switches; everything is managed from the CMC. Plug in a new blade and the alerts configured on the other blades are set up on the new one. It's great!

    The downside is that, yes... we only have one chassis. But since the chassis is passive, there isn't too much of a concern. If the chassis dies, we have mission-critical support and can get a part out in approx 2 hours (this has been deemed acceptable by the business).

    We then just need to plug in the 4x blades, 4x PSUs, the 2x CMCs, iKVM and 4x IO Aggregators (15 mins) and then boot it up (15 mins). So in total we are looking at around 2.5-3 hours for a chassis replacement.

    With a second chassis it would take 15 mins for boot and that's it (assuming no data corruption). The CMCs auto-configure the management port for the blades, and if your blades are of an identical setup then the network should come straight up.
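
    To lay those estimates out explicitly - the step timings are the rough figures above, and the cabling/checks buffer is my own assumption to cover the gap up to 3 hours:

```python
# Rough recovery-time sums: full chassis swap vs. a spare chassis already on site.
# Step timings are the rough figures quoted above; the cabling/checks buffer is assumed.

chassis_swap_minutes = {
    "part delivered under mission-critical support": 120,
    "reseat blades, PSUs, CMCs, iKVM, IO aggregators": 15,
    "boot and verify": 15,
    "cabling and sanity checks (assumed buffer)": 30,
}

spare_chassis_minutes = {
    "move blades across and boot": 15,
}

def total_hours(steps: dict) -> float:
    return sum(steps.values()) / 60

print(f"single chassis, wait for parts: ~{total_hours(chassis_swap_minutes):.1f} h")
print(f"second chassis already on site: ~{total_hours(spare_chassis_minutes):.2f} h")
```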

    The biggest problem with this new system is that Dell is pretty crappy with support (we have mission-critical support/warranty on the entire setup). We had a CPU and motherboard in one blade DOA, a mezz NIC on another blade DOA, and the chassis was replaced because slot 3 had issues with the mezz NIC and was causing hundreds of TCP retransmits. All in all, it took 8 weeks to get sorted. Even though this was all equipment that was DOA, we still had to go through the standard ProSupport channel with DSET reports and component testing.
     
  9. gwills

    gwills Member

    Joined:
    Jan 14, 2005
    Messages:
    410
    Location:
    Melbourne
    I would imagine that it would take a fairly catastrophic failure of a chassis to bring down a couple of blade servers?
     
  10. QuakeDude (OP)

    QuakeDude ooooh weeee ooooh

    Joined:
    Aug 4, 2004
    Messages:
    8,565
    Location:
    Melbourne
    From what I've seen of the current Dell chassis, the backplane itself doesn't really have any electronics on it as such - it's more of a dumb board. They did say that they've never had a failure TO DATE of a backplane, which is reassuring. We had at least 2 backplane failures in the IBM chassis we used to run within a 12-month period :(
     
  11. C4ndl3s

    C4ndl3s Member

    Joined:
    Feb 5, 2003
    Messages:
    118
    Don't listen to them. Our Chassis backplane was stuffed for slot 3 on delivery.

    And yes, the chassis is entirely passive. The CMC is the controller, and they are hot-pluggable.

    The chassis is designed so that if there is an electronic failure, it will only affect one component, e.g. a blade slot, PSU slot, IO slot, CMC slot or the iKVM. Apart from the blade slots and the iKVM, the rest are redundant, so no need to worry. The iKVM we rarely use, and if a blade slot goes, just plug the blade into another and away you go.

     
  12. GiantGuineaPig

    GiantGuineaPig Member

    Joined:
    Oct 23, 2006
    Messages:
    4,027
    Location:
    Adelaide
    We had IBM blades previously, did a refresh a year ago and were pretty dead-set on going away from blades altogether. We ended up going with Cisco UCS blades after evaluating with vendors who offered both rackmount and blade configs, as they ended up being cheaper, and bandwidth between servers was much better than the previous implementation (40Gbit from memory).
     
  13. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,471
    Location:
    qld.au
    The Dell M1000e is the best chassis I've seen in a reasonable price range, and it's had quite a long life so far (unlike Dell's earlier blade systems). If you buy enough blades to start with, Dell will virtually give you the chassis for free.

    I kept contemplating blades, but we ended up going for 1U pizza boxes due to the bang for buck. Dell have been giving some cracking deals on the R620; it's everything the older R710s had in terms of capacity but in half the space (including 8 x 2.5" drives).

    Although some of the configuration aspects of the blades are nice, the reality is we rack the server for 3-5 years and never touch it.

    The new systems I'm deploying have combined distributed storage and compute, so blades are even further from our consideration. We're looking at systems like the C6220 for greater densities in the future, but so far the pricing isn't significantly cheaper, so the R620s are the best fit for now.
     
  14. STUdog

    STUdog Member

    Joined:
    Dec 18, 2007
    Messages:
    1,294
    I run a C7000 blade system full of BL460s for my LAN event.

    It's currently running 112 cores with ~250GB of RAM. It worked out being 100 times better than using individual Dell 2900 etc. servers. Plus it's much easier to take a blade chassis to a LAN than about 10 other random-sized servers. I also really like the integration of it: faster to set up and easier to manage. Not to mention the failover is fantastic, not just in power redundancy but also in network, with the variety of network switch options you can have.
     
  15. memnoch

    memnoch Member

    Joined:
    Jan 4, 2002
    Messages:
    518
    Location:
    Sydnet
    I run 3 IBM blade chassis with some of the latest-gen blades, all with storage going back to an IBM XIV SAN.

    3TB+ of RAM
    2,500+ E5 Xeon cores
    VMware :(

    Having spoken to IBM under NDA I wouldn't continue with their blade hardware. I'd recommend you speak to some vendors under NDA if you're looking for dense computing.
     
  16. PabloEscobar

    PabloEscobar Member

    Joined:
    Jan 28, 2008
    Messages:
    14,538
    For your environment, why does VMware get a frowny face? What would you go with if given free rein over the hypervisor, and why?
     
  17. bsbozzy

    bsbozzy Member

    Joined:
    Nov 11, 2003
    Messages:
    3,925
    Location:
    Sydney
    I did some work recently at a very well known Sydney business that uses IBM blades, and every day they had issues with their blade chassis; it seems more hassle than benefit...
     
  18. b00n

    b00n Member

    Joined:
    May 2, 2003
    Messages:
    235
    Location:
    Brisbane
    I can't see what problems you could be having with your IBM blades, as I am using HS22-series blades in DR and IBM Flex in production.

    What problems have you seen? The only issue I have had with the blades was on the HS22 series: having to update the BIOS because it was flagging the BIOS battery voltage as high when it was actually correct, and giving out alerts.
     
  19. geniesis

    geniesis Member

    Joined:
    Aug 27, 2007
    Messages:
    191
    My issue with blades is that the tightly packed components end up in a chassis that uses a lot of power and produces a lot of heat.

    Without a DC that can provide adequate power and cooling, the blades end up throttling due to heat and you just end up with a whole lot of servers delivering sub-optimal performance.
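
    As a rough feel for the density difference, here's a back-of-the-envelope comparison. Every wattage figure is an assumption for illustration, not a measured or vendor-quoted number:

```python
# Rough per-rack power density comparison: blade chassis vs. 1RU pizza boxes.
# All wattage and sizing figures are illustrative assumptions only.

blade_chassis_watts = 6000   # assumed draw of a fully loaded 16-blade chassis
chassis_height_ru = 10       # assumed chassis height
one_ru_server_watts = 400    # assumed draw of a loaded 1RU dual-socket box
rack_ru = 42

blade_rack_watts = (rack_ru // chassis_height_ru) * blade_chassis_watts
pizza_box_rack_watts = rack_ru * one_ru_server_watts

print(f"rack of blade chassis: ~{blade_rack_watts / 1000:.0f} kW")
print(f"rack of 1RU servers  : ~{pizza_box_rack_watts / 1000:.1f} kW")
```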

    That said, if you still want to go down that route, I would suggest taking a look at Cisco's UCS.
     
  20. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,471
    Location:
    qld.au
    If you're in a DC that can't provide the power or cooling, then you need a new DC. If you're with a low-level, budget DC, then chances are you can't afford (or don't need) a blade system anyway.
    UCS is great if you have too much money in your pockets and need a company to take it away for you. If that's the case, please let me charge you Cisco prices for an HP/Dell system and I'll pocket the difference ;)
     
