Hi all! I'm hoping to have a general discussion here about power consumption issues in the DC... I'm looking for tips/input on how to get the most computing power and RAM into the limited power allocation I have available.

I started off a few years ago with a single 1RU server where I just paid for the rack unit, but have since upgraded to 10RU. Now I have a problem: although I have physical space, I don't have more power available, and my colocation provider has no space or power left in the centre for me to expand further. I can't just expand into another DC because I have a large range of IPs (a /22, multiple /25s, and some smaller networks) routed to my systems; they're shared amongst all the servers and move around a bit. I have to optimise what I have now and/or wait until my provider has other clients move out or can supply more power to my allocated physical space.

I have 5A of power and am using 4.9A (about 1.2kW) with the following systems. All the servers are 1RU except for the IBM, which is 2RU.

1x 16 port (unmanaged/dumb) gigabit switch

a) 1x Dell R310, Intel Xeon X3440, 32GB RAM (4 DIMMs), 4x 2.5" 10K rpm SAS drives
b) 1x Dell R310, Intel i3 550, 8GB RAM (4 DIMMs), 4x 3.5" 7200 rpm SATA drives
c) 1x Dell R415, dual AMD Opteron 4274 HE, 32GB RAM (4 DIMMs), 2x 2.5" 10K rpm SAS drives, 2x 3.5" 7200 rpm SAS drives, dual PSUs
d) 1x IBM x3650, dual Xeon E5430, 24GB RAM (12 DIMMs), 2x 3.5" 7200 rpm SAS drives, 4x 3.5" 15K rpm SAS drives, dual PSUs
e) 1x Dell R210 II, Intel Xeon E3-1240v2, 32GB RAM (4 DIMMs), 2x 2.5" 10K rpm SAS drives

They're listed in the order they were installed. The RAM in the first system was recently upgraded from 12GB, which according to the graphs from my colocation provider pushed consumption up by about 0.7A. I was aware RAM in servers was a big power hog, but didn't think it was that extreme. All the systems except the IBM were purchased new.
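For what it's worth, the figures above suggest this is roughly a 240V feed (1.2kW / 4.9A ≈ 245V) — that voltage is my assumption, not something stated by the provider. The headroom arithmetic then looks like this:

```python
# Rough amp-budget check. Assumption: a ~240V feed (1.2kW / 4.9A ≈ 245V).
VOLTS = 240
BUDGET_A = 5.0
USED_W = 1176  # roughly what 4.9A at 240V works out to

used_a = USED_W / VOLTS
print(f"using {used_a:.2f}A of {BUDGET_A:.1f}A budget, "
      f"{BUDGET_A - used_a:.2f}A headroom")
# prints: using 4.90A of 5.0A budget, 0.10A headroom
```

So there's only about 24W of slack at the feed before tripping the allocation, which is why every DIMM and spindle counts here.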
The IBM is ancient by computing standards and it was probably a mistake to install it, as it uses more power than the first three systems combined. I suspect the two old Xeons (2007 era), the large number of DIMMs (12), and the 3.5" 15K SAS drives are the problem here.

The trouble is that the tools I can find on the Dell site to calculate power consumption are horribly outdated and don't show the current generation of servers at all. Customising the systems the way I have makes the tools less useful as well, so I can't really get accurate consumption data until the systems are installed.

I have two new Intel systems arriving soon, each with a Xeon E3-1240v2 and 32GB RAM. One will have 2x 7200rpm SATA drives and the other 2x SSDs (which, from what I understand, use significantly less power than spinning disks). I'll get the consumption of each system measured separately before they're installed so I can gauge whether it's worth moving to SSDs only in future.

I'm primarily doing virtualisation (KVM and OpenVZ, Linux and FreeBSD only), so getting as much RAM as possible into my systems is the priority. Due to the nature of the services offered I rarely have CPU or IO bottlenecks. If the two new systems leave me with some power capacity to spare (after removing the old IBM), I'm hoping to upgrade the R415 to 64 or 128GB of RAM before having to put in another whole system. It has 4x 8GB sticks now. If I upgrade it, I guess replacing them all with 16GB sticks would be a better option than simply adding another 4 sticks, but I'm not sure about power consumption with higher density sticks.

For example, it currently has: 32GB Memory (4x8GB), 1333MHz Dual Ranked LV RDIMMs, 2 Processors

I can get these Kingston modules:

16GB 1333MHz Reg ECC Quad Rank x8 Low Voltage Module
or:
16GB 1333MHz Reg ECC Quad Rank x4 Low Voltage Module

From what I understand the x8 uses less power.
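On the x8 vs x4 question: the usual reason x8 registered modules draw less power is simple chip count. An ECC rank is 72 bits wide, so x4 parts need 18 DRAM chips per rank where x8 parts need only 9 (using denser chips). A quick sketch of that arithmetic (the function is just for illustration, not from any vendor tool):

```python
# Chip count for a registered ECC DIMM: 64 data bits + 8 ECC bits per rank.
def chips_per_dimm(ranks, chip_width_bits):
    data_chips = 64 // chip_width_bits  # chips to cover the 64-bit data bus
    ecc_chips = 8 // chip_width_bits    # chips to cover the 8 ECC bits
    return ranks * (data_chips + ecc_chips)

print(chips_per_dimm(4, 8))  # quad-rank x8 -> 36 chips
print(chips_per_dimm(4, 4))  # quad-rank x4 -> 72 chips
```

Half the chips per module should mean a meaningfully lower draw, though the exact wattage depends on the DRAM dies used, so measuring before/after (as with the new systems) is still the safest check.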
Would 4 of those 16GB sticks use less power than the currently installed RDIMMs? Spending a few hundred more on sticks of RAM is worthwhile, as the hardware cost is dwarfed by my monthly colo bill anyway.