Enterprise hardware, better quality?

Discussion in 'Business & Enterprise Computing' started by kripz, Jul 25, 2010.

  1. kripz

    kripz Member

    Joined:
    Sep 29, 2004
    Messages:
    2,834
    Location:
    Near Frankston
    I've had Seagate ES drives die pretty quickly, and not just drives: HP desktops etc. as well. Is it supposed to be better quality, or just better warranty?

    I'm guessing these components would all be individually tested, rather than sampled at random like normal consumer gear.

    Even though it's tested, that doesn't mean it's going to last longer; it just means you don't get DOA parts. Do they use better components?
     
  2. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    36,982
    Location:
    Brisbane
    Fact: if it takes an electric current, it will die at some point.

    You buy enterprise gear for the guarantee that you can call someone day or night to replace it when it goes bang. Not because it will last longer than cheaper stuff.

    I've had CPU, memory and disk failures in multi-million dollar kit mere months after purchase. ANYONE who tells me I'm buying "better quality gear" is delusional. All I'm doing is buying a decent on-site warranty, and a guarantee that I won't get inter-vendor finger-pointing, by buying servers from the same manufacturer as I buy my enterprise disk.
     
  3. maddhatter

    maddhatter Member

    Joined:
    Jun 27, 2001
    Messages:
    4,798
    Location:
    Mackay, QLD.
    I think the apparently higher failure rate is down to monitoring and usage.

    E.g. many ES drives live in RAID arrays, where the controller advises of a failure; on a standard PC with an AS drive you probably wouldn't catch the same fault.

    ES drives are also typically in high-use, 24x7 applications.

    I'm getting pretty tired of Seagate returning re-certified replacement drives, though; they typically die again a few months down the track :(
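
    As an aside on the monitoring point: a lone desktop can get much the same early warning a RAID controller gives you if you poll SMART yourself. A rough Python sketch, assuming smartmontools is installed and the drive sits at /dev/sda (both assumptions, adjust for your box):

        import subprocess

        def smart_healthy(device):
            """Return True if smartctl's overall health self-assessment passed."""
            # ATA drives report a line like:
            #   "SMART overall-health self-assessment test result: PASSED"
            # Needs root privileges on most systems.
            result = subprocess.run(["smartctl", "-H", device],
                                    capture_output=True, text=True)
            return "PASSED" in result.stdout

        if __name__ == "__main__":
            dev = "/dev/sda"  # hypothetical device node
            print(dev, "OK" if smart_healthy(dev) else "FAILING - back it up now")

    Run something like that from cron and the standard PC "advises of failure" too.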
     
  4. Crusher

    Crusher Member

    Joined:
    Aug 27, 2001
    Messages:
    3,069
    Location:
    Sydney & Adelaide
    With name-brand gear you are paying for three things over yumcha computers:

    1) Engineering design. A name brand and a yumcha box may share 95% of the same components, but the name brand will be designed to run 24x7 (even as a desktop) within thermal tolerances. Nothing magic; they are simply designed with sufficient cooling to do so.

    2) Warranty. On-site, quick and hassle-free. Screwless components and chassis designs enable quick release and replacement.

    3) Parts availability. Most tier-1 vendors will have parts for seven years. If you buy a yumcha PC and the motherboard dies after 18 months, you are typically up for a new mobo, new CPU and new RAM, eroding any savings over a name brand.
     
  5. Hive

    Hive Member

    Joined:
    Jul 8, 2010
    Messages:
    5,335
    Location:
    ( ͡° ͜ʖ ͡°)
    Most certainly enterprise stuff is more reliable.
     
  6. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    36,982
    Location:
    Brisbane
    Having built very large setups out of both "enterprise" and "commodity" hardware, the failure rates of both are near identical.

    Yes, enterprise stuff often has redundancy built in, thus appearing outwardly more reliable. But those who build systems a la Google know that points of redundancy are in the eye of the beholder.

    To put it bluntly: a SAN is more reliable than a consumer hard disk, but I've seen individual FC/SAS disks fail just as often as low-end SATA disks.
     
    Last edited: Jul 25, 2010
  7. Iceman

    Iceman Member

    Joined:
    Jun 27, 2001
    Messages:
    6,647
    Location:
    Brisbane (nth), Australia
    Eh, I feel the same way, but it's hard to back it up with anything more than anecdotal evidence.

    Overall, I have to say I've seen server-class hardware survive the test of time better than "server class" white boxes. I've also had cause to run intense processes on both desktop hardware and "enterprise class" hardware, and found the latter to have fewer unexplained errors.

    I would put it down to better-quality chipsets, maybe even motherboard build quality. There's quite a lot of in-depth electronic voodoo when it comes to RF/EMI, signal-path timing and overall electronic tolerances that isn't paid attention to below a certain price point (and probably isn't above a certain price point either, if they can get away with it). All of these things affect overall stability.

    Hard drives are one area where I feel there isn't a "pay more for dramatically improved quality" effect (performance is another matter), because basically they're all spinning voodoo that uses unicorn tears and fairy dust to read and write your data. It's a sheer miracle that those things work at all.
     
  8. Bangers

    Bangers Member

    Joined:
    Dec 25, 2001
    Messages:
    7,254
    Location:
    Silicon Valley
    I don't think this is true. Support contracts decide the turn-around time before a FRU is fixed. By most accounts, when buying top-end Enterprise tin you are buying better-quality internals. HDS virtualised storage and the Fujitsu/Sun M9 series are obvious examples of this.

    Whether a 'Seagate' enterprise hard disk performs better? I don't think anyone on this forum knows, or has the time to care. Enterprises don't buy 'Seagate xxxx hard disks'.

    This discussion is flawed without detailing the minimum requirements for kit to count as Enterprise. I consider my dozen-plus filled M9 chassis to be real Enterprise toys: migrating system and I/O boards live, and migrating memory, all while the domain is still running. Others might [wrongly] assume that an HP x86 server costing $10k more than the Dell suddenly steps up several rungs in reliability.

    I don't think EMC count as a SAN vendor :)
     
  9. Hive

    Hive Member

    Joined:
    Jul 8, 2010
    Messages:
    5,335
    Location:
    ( ͡° ͜ʖ ͡°)
    True, but in reality you cannot compare just one or two drives/arrays; there are too many variables to account for: bad batch, bad manufacturing, design flaws.

    Server-grade hardware is generally designed to be more reliable, and in most cases it will be higher quality than desktop-grade hardware.

    I'm not just talking about hardware not dying, but about technologies that create redundancy and provide error detection/correction.
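
    To make the redundancy part concrete: RAID 5's parity is plain XOR, so one redundant block can rebuild any single lost block. A toy sketch of the arithmetic (not any vendor's implementation):

        # XOR parity across three equal-sized data blocks, RAID 5 style
        b1, b2, b3 = b"enterprise", b"hardware!!", b"reliable??"
        parity = bytes(x ^ y ^ z for x, y, z in zip(b1, b2, b3))

        # the "disk" holding b2 dies; rebuild it from the survivors plus parity
        rebuilt = bytes(x ^ y ^ z for x, y, z in zip(b1, b3, parity))
        assert rebuilt == b2  # the lost block comes back bit-for-bit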
     
  10. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,321
    Location:
    Canberra
    bit of both but not a lot.

    it's really horses for courses:

    would I use a desktop PC with 1 hard disk serving 50 staff:
    • On a low budget... Yes (as long as backups are done and the restoration period is understood by the business)
    • If the business could afford, or demanded, a higher level of availability... No

    apart from the big tangible items like redundant PSUs, CPUs, ECC RAM, mirrored memory and RAID 1/5/10, the manufacturer is probably still using the same factory to pump out the remaining components.

    is there a trade-off between hardware redundancy and software redundancy... Yes, although it depends on whether your software is capable of real-time clustering / load balancing / magic black smoke, and whether that model suits your processing needs and/or budget.
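
    software redundancy in its crudest form is just "try the next box". a rough Python sketch, with made-up hostnames and port, of a client that walks a list of replicas:

        import socket

        # software redundancy at its crudest: try each replica in turn.
        # hostnames and port below are made up for illustration.
        REPLICAS = [("app1.example.com", 8080), ("app2.example.com", 8080)]

        def connect_to_service():
            for host, port in REPLICAS:
                try:
                    return socket.create_connection((host, port), timeout=5)
                except OSError:
                    continue  # this node is down, fall through to the next
            raise RuntimeError("all replicas down - no RAID card saves you here")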

    again, horses for courses, know your requirements before you pull out the corporate card.
     
    Last edited: Jul 26, 2010
  11. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    36,982
    Location:
    Brisbane
    And this is the point I was trying to make above.

    It's easy to confuse macro enterprise features with micro component reliability.

    Enterprise gear offers better features on a box-by-box basis. Multiple points of redundancy allowing hot operations inside a single platform. I don't equate this to better component reliability. I equate this to a higher feature set.

    In the past I've been contracted to build large render farms for the visual effects industry. Nobody in their right mind in that industry would buy bank-grade UNIX boxes. The cost per node is insanely high, and offers no real benefit to their workload.

    What you'll find in that industry is that outages on a per-node basis can be tolerated far better because of the non-realtime, "embarrassingly parallel" nature of the workloads. As such, even the cheapest hardware is perfectly acceptable from a feature perspective. In that industry, redundancy is at the macro level (across the entire cluster). Compare and contrast with the financial industry, where I'm working currently, and where redundancy needs to exist at a far more micro level.

    So as I said before, it's all in the eye of the beholder. Clearly, people need to buy the right tool for the job. But I stick by my earlier statement - on a per component basis, I don't believe enterprise gear offers any more reliability than non-enterprise gear.

    If on the other hand you're going to look at macro-level redundancy, then you're not comparing like for like, and the discussion is moot.
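
    To make the macro-level idea concrete, the scheduling logic on a render farm is conceptually no more than the sketch below (illustrative only, not any particular farm's scheduler; render_frame is a stand-in for the real render call):

        import queue

        # Macro-level redundancy: the cluster, not the box, is the redundant
        # unit. A frame whose node dies mid-render goes back in the queue
        # for any surviving node to pick up.
        frames = queue.Queue()
        for frame in range(1, 101):   # a hypothetical 100-frame job
            frames.put(frame)

        def node_worker(render_frame):
            while True:
                try:
                    frame = frames.get_nowait()
                except queue.Empty:
                    return                 # job finished, nothing left to claim
                try:
                    render_frame(frame)
                except Exception:
                    frames.put(frame)      # node blew up: requeue and carry on

    Lose a node and you lose throughput, not the job. That's redundancy at the cluster level rather than inside the chassis.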
     
  12. Nyarghnia

    Nyarghnia (Taking a Break)

    Joined:
    Aug 5, 2008
    Messages:
    1,274
    That's pretty much it in a nutshell. Gear is gear, and these days most vendors are pretty good; it's becoming more highly commoditised. The 'big name' vendors have stronger support networks, and THAT is what you're paying for: the support behind the hardware, not necessarily the hardware itself...

    -NyarghNia
     
  13. j3ll0

    j3ll0 Member

    Joined:
    Jul 13, 2005
    Messages:
    4,706
    I'm not sure - there are examples where these things intersect.

    For example, the mounting chassis that you commonly find with enterprise hot-swap disks include a design feature that ensures the component being added to the server on the fly is earthed properly before the data connectors make contact.

    I haven't encountered a similar design in the commodity PC space. The spec for commodity SATA requires a hot-swap capability, but the packaging around the disks doesn't necessarily enforce 'safety' features like the above.

    I dunno. Ever since I bumped into the 'cyclonic' fans in the old IBM PS/2 (MCA era) kit, I've been more impressed with the engineering that goes into maintaining good airflow through enterprise kit than with things like hot-swap RAM, CPUs etc... :thumbup:
     
  14. Devils

    Devils (Taking a Break)

    Joined:
    Nov 10, 2009
    Messages:
    185
    Location:
    Brisbane
    I've dealt with a SAN failure after a simple upgrade. Engineers couldn't get it working after 36 hours, so it had to be rebuilt from tape. There are single points of failure in enterprise systems too.

    That being said, Sun makes some very nice hardware.
     
    Last edited: Jul 26, 2010
  15. Whisper

    Whisper Member

    Joined:
    Jun 27, 2001
    Messages:
    8,297
    Location:
    Sydney
    The warranty is better on Enterprise hardware. :rolleyes:
     
  16. Devils

    Devils (Taking a Break)

    Joined:
    Nov 10, 2009
    Messages:
    185
    Location:
    Brisbane
    It happened when a new disk drive was being added; the mappings of all the drives got scrambled, so human error may have had a lot to do with it.
     
  17. jastormont

    jastormont Member

    Joined:
    Aug 10, 2004
    Messages:
    1,196
    Location:
    Brisbane
    I think with enterprise hardware you also tend to get a more reliable driver set (on supported OSes), and the components are tested for compatibility as well. This can be very important when deploying software across 100-plus desktops, for example (an SOE matters here too).
     
  18. Bangers

    Bangers Member

    Joined:
    Dec 25, 2001
    Messages:
    7,254
    Location:
    Silicon Valley
    The mapping of all the drives got scrambled? Sounds like a real Enterprise SAN and Enterprise admins! :thumbup: Your point is extremely valid and definitely undermines the reliability of Enterprise hardware.
     
