Power consumption in the data centre

Discussion in 'Business & Enterprise Computing' started by oli, May 13, 2013.

  1. oli

    oli Member

    Joined:
    Jun 29, 2002
    Messages:
    7,266
    Location:
    The Internet
    Hi all! I am hoping to have a general discussion here about power consumption issues in the DC...

    I am looking for tips/input on how to get the most computing power and RAM out of the limited power allocation I have available. I started off a few years ago with a single 1RU server, where I just paid for the rack unit, but have since upgraded to 10RU. Now I have a problem: although I have physical space, I do not have more power available, and my colocation provider has no space or power left in the centre for me to expand further.

    I cannot just expand into another DC, as I have a large range of IPs (a /22, multiple /25s and some smaller networks as well) routed to my systems; they are shared amongst all the servers and move around a bit. I have to optimise what I have now and/or wait until my provider has other clients move out or can provide more power to my allocated physical space.

    I have 5A of power and am using 4.9A (1.2kW) with the following systems. All the servers are 1RU except for the IBM, which is 2RU.

    1x 16 port (unmanaged/dumb) gigabit switch

    a) 1x Dell R310, Intel Xeon X3440, 32GB RAM (4 DIMMs), 4x 2.5" 10K rpm SAS drives
    b) 1x Dell R310, Intel i3 550, 8GB RAM (4 DIMMs), 4x 3.5" 7200 rpm SATA drives
    c) 1x Dell R415, Dual AMD Opteron 4274 HE, 32GB RAM (4 DIMMs), 2x 2.5" 10K rpm SAS drives, 2x 3.5" 7200 rpm SAS drives, Dual PSUs
    d) 1x IBM x3650, Dual Xeon E5430, 24GB RAM (12 DIMMs), 2x 3.5" 7200 rpm SAS drives, 4x 3.5" 15K rpm SAS drives, Dual PSUs
    e) 1x Dell R210II, Intel Xeon E3-1240v2, 32GB RAM (4 DIMMs), 2x 2.5" 10K rpm SAS drives

    They're listed in the order they were installed, but the RAM in the first system was upgraded recently, which, according to the graphs provided by my colocation provider, pushed consumption up by about 0.7A. I knew RAM in servers was a big power hog but didn't think it was that extreme. The system had 12GB before...
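    For reference, the budget maths I'm working from is simple enough to script. A minimal sketch, assuming a ~240V feed and ignoring power factor (the figures are the ones quoted above, nothing measured beyond the provider's graphs):

    Code:
    # Rough power-budget check, assuming a ~240 V single-phase feed.
    # Allocation and draw are the figures from my provider's graphs.
    VOLTS = 240.0            # assumed supply voltage
    allocated_amps = 5.0
    measured_amps = 4.9

    allocated_watts = allocated_amps * VOLTS   # ~1200 W available
    measured_watts = measured_amps * VOLTS     # ~1176 W in use, matching the ~1.2kW graph
    headroom_watts = allocated_watts - measured_watts

    print(f"Allocated {allocated_watts:.0f} W, in use {measured_watts:.0f} W, "
          f"headroom {headroom_watts:.0f} W ({headroom_watts / VOLTS:.2f} A)")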

    All the systems except the IBM were purchased new. The IBM is ancient by computing standards, and it was probably a mistake to install it, as it uses more power than the first three systems combined. I suspect the two old Xeons (2007 era), the large number of DIMMs (12) and the 3.5" 15K SAS drives are the problem here.

    The problem is that the tools I can find on the Dell site to calculate power consumption are horribly outdated and don't show the current generation of servers at all. Customising the systems the way I have makes the tools less useful as well, so I can't really get accurate consumption data until the systems are installed.

    I have two new Intel systems arriving soon each with a Xeon E3-1240v2 and 32GB RAM. One will have 2x 7200rpm SATA drives and the other 2x SSDs (which from what I understand use significantly less power than spinning disks). I will get the consumption of these systems measured separately before they are installed so I can gauge whether it's worth moving just to SSDs in future.

    I am primarily doing virtualisation (KVM and OpenVZ, Linux and FreeBSD only), so getting as much RAM as possible into my systems is the priority. Due to the nature of the services offered I rarely have CPU or IO bottlenecks. If the two new systems leave me with some power capacity to spare (after removing the old IBM), I am hoping to upgrade the R415 to 64 or 128GB of RAM before having to put another whole system in. It has 4x 8GB sticks now. If I upgrade it, I guess replacing them all with 16GB sticks will be a better option than simply adding another 4 sticks, but I am not sure about power consumption with higher-density sticks.

    For example it has this installed now:
    32GB Memory (4x8GB), 1333MHz Dual Ranked LV RDIMMs 2 Processors

    I can get these Kingston modules:
    16GB 1333MHz Reg ECC Quad Rank x8 Low Voltage Module or:
    16GB 1333MHz Reg ECC Quad Rank x4 Low Voltage Module

    From what I understand, the x8 modules use less power. Would 4 of those 16GB sticks use less power than the currently installed RDIMMs? Spending a few hundred more on sticks of RAM is worthwhile, as the hardware costs are dwarfed by my monthly colo bill anyway. :p
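    To put some rough numbers on the comparison, here's how I'm thinking about it. The per-DIMM wattages below are pure guesses to illustrate the trade-off; the real figures would have to come from the module datasheets:

    Code:
    # Compare total RAM power for a few configurations. The per-DIMM wattages
    # are placeholder guesses, NOT datasheet values -- substitute real numbers.
    configs = {
        "4 x 8GB dual-rank LV RDIMM (current)":    (4, 4.0),
        "8 x 8GB dual-rank LV RDIMM (add 4 more)": (8, 4.0),
        "4 x 16GB quad-rank x8 LV RDIMM":          (4, 5.0),
        "4 x 16GB quad-rank x4 LV RDIMM":          (4, 6.0),
    }

    for name, (dimms, watts_each) in configs.items():
        print(f"{name}: ~{dimms * watts_each:.0f} W total")
    # The point: fewer, denser sticks can come out ahead even if each dense
    # stick draws a little more than a lower-density one.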

     
  2. rainwulf

    rainwulf Member

    Joined:
    Jan 20, 2002
    Messages:
    4,213
    Location:
    bris.qld.aus
    What power-saving features have you enabled? Things like C1E and power stepping?

    Also, that AMD system will be drawing a fair amount of power.
     
  3. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,406
    Location:
    qld.au
    Interesting discussion; you'd hate to see the current draw of the systems I manage :) Our rack densities are quite low, so the power draw isn't too high on a per-rack basis, especially considering we have 2 x 32A available to each rack.

    To give you a rough indication of power draw, here are two of our typical servers and their power usage.

    Brand new Dell R620 (2 x Xeon HC, 64GB RAM, 8 x 300GB 2.5" 15k drives)
    Very low current utilisation, so it's idling along at 126 watts. Peak today (system updates / reboots) has been 366 watts.

    2nd gen Dell R710 (2 x Xeon QC, 6 x 300GB 3.5" 15k drives, 32GB RAM)
    50 VMs (all small Linux), medium-level utilisation - currently averaging 228W (peak 355W). The average across our R710s seems to be around 250W.

    Both servers are running dual 980W power supplies.

    The jump in power usage for the RAM doesn't seem right: a 0.7A increase works out to roughly 30W a stick for the 6 new DIMMs (assuming you went from 6 to 12). That's absurdly high, even for an older-generation system.
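    For reference, the back-of-envelope behind that, assuming a ~240V feed:

    Code:
    # Back out the implied per-DIMM draw from the graphed 0.7 A jump,
    # assuming ~240 V and that the whole increase came from 6 extra DIMMs.
    VOLTS = 240.0
    delta_watts = 0.7 * VOLTS          # ~168 W
    print(f"~{delta_watts / 6:.0f} W per new DIMM")
    # ~28 W per stick -- a DDR3 RDIMM should be a few watts, so the jump is
    # more likely load-related than the RAM itself.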
     
  4. Primüs

    Primüs Member

    Joined:
    Apr 1, 2003
    Messages:
    3,357
    Location:
    CFS
    To be honest, we have the same problem - not so much about what capacity is available, but about keeping costs down in a facility we own.

    We found, from our own testing and the experience of one of our co-lo customers, that a shelf with 6 Mac Minis, specifically configured in this instance to work as an HA & LB cluster, draws as much power as a single HP DL360.

    For this, you are getting 6x quad core processors, with 8 or 16GB RAM in each.

    We have just started moving away from beefy boxes to clustered Mac Minis.

    Obviously, it all depends on your exact application requirements and so on, but in this instance it was a large web-app host that could comfortably be configured to use resources in a distributed way. Other setups may still require a single high-powered machine.

    Our next test is using these Mac Minis as either ESXi or XenServer front-ends, with only the SAN as a beefier, higher-powered box.
     
  5. oli (OP)

    oli Member

    Joined:
    Jun 29, 2002
    Messages:
    7,266
    Location:
    The Internet
    Probably whatever the defaults are in the BIOS. I just had a quick look via the DRAC at one of them but cannot see anything relating to power stepping or C1E in there. I can't access the BIOS remotely to verify these things either, unfortunately. And yes, after the IBM, I am sure the dual Opteron uses the next highest amount of power.
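    In the meantime I can at least check from inside Linux whether frequency scaling and C-states are in play. A rough sketch; the sysfs paths are the standard ones on a modern kernel, so adjust as needed:

    Code:
    # Check what the OS can see of frequency scaling and C-states.
    # Uses the standard Linux sysfs paths; run on each host.
    from pathlib import Path

    cpu0 = Path("/sys/devices/system/cpu/cpu0")

    governor = cpu0 / "cpufreq" / "scaling_governor"
    if governor.exists():
        print("cpufreq governor:", governor.read_text().strip())
    else:
        print("No cpufreq interface exposed (scaling off or BIOS-controlled)")

    cpuidle = cpu0 / "cpuidle"
    if cpuidle.exists():
        names = [p.read_text().strip() for p in sorted(cpuidle.glob("state*/name"))]
        print("C-states:", ", ".join(names))
    else:
        print("No cpuidle states exposed (C1E etc. may be disabled in the BIOS)")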

    As far as I understand it, the 2.5" drives use significantly less power, though your older R710s using less power than the newer ones is probably down to the CPUs and the amount of RAM. How many DIMMs are there in total in those systems?

    I agree that the RAM jump is strange, though I can't rely completely on my graph, as the system loads have moved up a lot recently as well. The Opteron system in particular basically sits at a load average of 5 all the time (it has 16 cores, so this is quite OK I think).

    2x 32A to a rack is a lot. I cannot get more than 5A allocated to my 10RU, which I guess works out to roughly 20A per rack. I don't know what kind of environment you are in though - this is a commercial DC where customers colocate their gear. Also, I pay for power usage (my business), so even if more power were available this discussion would still be happening, since it's getting expensive.

    Interesting that you found that to be a solution, but in my case it isn't viable. I need proper server systems with remote access cards and so on, as I am not even in the same country as the servers and remote hands are a last resort. I guess if you have clustered Mac Minis you have more convenient physical access to your systems. :)
     
  6. Urbansprawl

    Urbansprawl Member

    Joined:
    May 5, 2003
    Messages:
    535
    http://www.dell.com/Learn/us/en/usc...or?c=us&l=en&s=corp&redirect=1&delphi:gr=true

    Enjoy:)
     
  7. oli (OP)

    oli Member

    Joined:
    Jun 29, 2002
    Messages:
    7,266
    Location:
    The Internet
    Hmmm, thanks. I have seen this before but the one I saw did not have all the current servers. Strange.

    I just specced up an R415 for example's sake, configured the same way as mine, and the tool thinks it'd use 1.5 amps. This is obviously either incorrect, or the graphs from my provider are incorrect, since I had this server and the two others (a and b in my list above) running together using about 1.1 amps all up?? :/
     
  8. Urbansprawl

    Urbansprawl Member

    Joined:
    May 5, 2003
    Messages:
    535
    There is a load variable in the tool which can make a big difference - how did you set it?
     
  9. oli (OP)

    oli Member

    Joined:
    Jun 29, 2002
    Messages:
    7,266
    Location:
    The Internet
    I tried at 40% and up.

    :p

     
  10. cvidler

    cvidler Member

    Joined:
    Jun 29, 2001
    Messages:
    11,682
    Location:
    Canberra
    @OP,

    Consolidate hardware.
    One 'big' server capable of running all the same systems as the 5 'little' servers will consume less power overall. This comes from efficiency, e.g.:
    - only 1 AC-DC power conversion (or two for redundancy), rather than the 5-10 you have now
    - newer gear is more power efficient: there are low-power CPU and RAM options (and newer-generation parts use less power anyway), 2.5" drives consume less than 3.5" drives, and SSDs less again
    - virtualisation, or just consolidation of function, means the same hardware is better utilised, so less power is wasted just ticking idle servers over.

    From what I can see, you've got 5 low-spec servers, all of which could probably be replaced by one (or two if you need redundancy) - say an R720, since you seem to like Dell. 2RU, two 75W Xeons (8 cores each), and stick in a heap of RAM (768GB max).
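    As a made-up illustration of where the saving comes from (every wattage below is an assumption, not a measurement):

    Code:
    # Illustrative only: per-chassis overhead (PSU losses, fans, idle board power)
    # multiplies across many small boxes. All wattages here are assumed.
    small_boxes = 5
    overhead_per_small = 60.0    # W of fixed overhead per 1RU box (assumed)
    overhead_per_big = 120.0     # W of fixed overhead for one 2RU box (assumed)
    compute_load = 400.0         # W of actual compute load, wherever it runs (assumed)

    five_small = small_boxes * overhead_per_small + compute_load
    one_big = overhead_per_big + compute_load
    print(f"5 x 1RU: ~{five_small:.0f} W   vs   1 x 2RU: ~{one_big:.0f} W")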
     
  11. oli (OP)

    oli Member

    Joined:
    Jun 29, 2002
    Messages:
    7,266
    Location:
    The Internet
    I don't have as much redundancy or flexibility if I have fewer systems, though. I am running two different virtualisation platforms, and having at least two servers running each (OpenVZ and KVM) means I can move things around as customer requirements change or resource allocations need adjusting. That basically necessitates having 4 separate systems.

    As nice as huge beefy systems are, they are expensive, and being a small business, cashflow is also an issue. Spending $2,000 every 6 to 12 months on a new server is a lot easier than somehow finding $10K+ for some monster to replace multiple systems at once.

    I understand your points though, especially relating to older hardware: the new servers arrived today, so starting tomorrow, when one of them gets racked, I'll begin migrating customers off the old IBM so that system can be retired.

    Thanks :)
     
  12. cvidler

    cvidler Member

    Joined:
    Jun 29, 2001
    Messages:
    11,682
    Location:
    Canberra
    Fair point - no mention of budget was made, nor of the requirement for multiple virtualisation platforms (any real reason for that?).

    But new server > old server.
     
  13. oli (OP)

    oli Member

    Joined:
    Jun 29, 2002
    Messages:
    7,266
    Location:
    The Internet
    The business sells VPS services so there's a very large number of virtual servers running across all the systems. :)
     
    Last edited: May 13, 2013
  14. Urbansprawl

    Urbansprawl Member

    Joined:
    May 5, 2003
    Messages:
    535
    Interesting. 1.1 amps for three servers seems suspiciously low - that's less than 100 watts per server. The Dell calculator is usually pretty close to what we see from our boxes (Dell R620s and R820s mostly).
     
  15. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,406
    Location:
    qld.au
    I expect that once we have a greater level of utilisation the figures will become similar, but still lower. RAM is 8 x 8GB DIMMs (HMT31GR7EFR4A-H9 is the part number according to the DRAC).
    That's the benefit of hosting our gear in a very good data centre that's only a few years old: they have ample power available. We currently don't draw anywhere near the full capacity, and we'd certainly be paying for the privilege if we did. Knowing that there's plenty of spare capacity (power and rackspace) does at least give us headroom for growth as required.

    Consolidation-wise, you may want to look at Parallels Cloud Server, which allows hypervisor-based and container-based (i.e. the commercial OpenVZ offering) virtuals on the same server. It also includes rebootless kernel updates as well as file/memory dedup, so you can achieve greater density out of the one box.
     
  16. oli (OP)

    oli Member

    Joined:
    Jun 29, 2002
    Messages:
    7,266
    Location:
    The Internet
    Weird. I can't really explain it, though all the power calculators I have seen in the past (from other manufacturers as well) always say that systems will use a lot more power than they really do.

    Yeah, I have looked into that before as well, but overall the platform I am using now to manage OpenVZ and KVM systems is quite good, and the density I can achieve is pretty acceptable I think. I am deploying more KVM nodes anyway and probably only one more OpenVZ node, as the KVM systems are more popular with my customers. KVM has KSM as well, though I don't think it's particularly effective. :(
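    For what it's worth, KSM's effect is easy to read straight off sysfs on the KVM nodes (standard paths on a stock kernel with KSM enabled):

    Code:
    # Read KSM counters from sysfs to see how much memory is actually merged.
    from pathlib import Path

    ksm = Path("/sys/kernel/mm/ksm")

    def counter(name):
        return int((ksm / name).read_text())

    pages_shared = counter("pages_shared")     # shared pages in use
    pages_sharing = counter("pages_sharing")   # page-table entries pointing at them

    page_kib = 4  # assumes 4 KiB pages
    saved_mib = (pages_sharing - pages_shared) * page_kib / 1024
    print(f"KSM is saving roughly {saved_mib:.0f} MiB "
          f"({pages_sharing} mapped entries backed by {pages_shared} shared pages)")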
     
  17. cvidler

    cvidler Member

    Joined:
    Jun 29, 2001
    Messages:
    11,682
    Location:
    Canberra
    The calculators usually spec worst case consumption.

    Which allows you to safely spec power and cooling requirements.

    Few servers however run at 100% 24x7.
     
  18. oli (OP)

    oli Member

    Joined:
    Jun 29, 2002
    Messages:
    7,266
    Location:
    The Internet
    Yes, but when the potential buyer is putting gear into a colocation facility, that makes them pretty useless. I mean, if I were just starting out and had no idea, I'd be sitting here thinking "the colocation provider can give me 10RU with 5A, but according to Dell I probably can't put more than 3 mid-range 1RU servers in there".

    Oh well... Tomorrow I will get measurements from the new Intel system. :)
     
  19. Munity

    Munity Member

    Joined:
    Jan 13, 2006
    Messages:
    363
    Have you thought about starting to plan a move to another location? I know it would be a lot of work, but what happens if business picks up? You don't want to be limited forever.
     
  20. leighr

    leighr Member

    Joined:
    Feb 28, 2002
    Messages:
    558
    Location:
    Richmond, Melbourne
    The flipside of that is: if you spec for typical usage, what happens when you hit a worst case? E.g. typical usage may be 1A, with a worst case of 2A. If I put 10 servers on a 15A circuit, I'm fine most of the time. But if something drives them all up at the same time (a simultaneous AV scan, everything powering on at once, etc.) I'll trip a circuit breaker.
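    With those example numbers the check is trivial, but it's the one worth doing before loading up a circuit:

    Code:
    # Typical vs worst-case sizing for the example above: 10 servers on a 15 A circuit.
    servers, typical_a, worst_a, circuit_a = 10, 1.0, 2.0, 15.0

    typical_total = servers * typical_a     # 10 A -- fine most of the time
    worst_total = servers * worst_a         # 20 A -- trips the breaker if simultaneous

    print(f"Typical {typical_total:.0f} A / worst case {worst_total:.0f} A "
          f"on a {circuit_a:.0f} A circuit")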
     
