I'm looking into the cost of putting a Cassandra cluster into a colo facility. There would be 6-8 servers at the outset, with growth expected over time. One option is a series of Dell R320s (or similar); another is blades or similarly built machines that share power.
Looking at the details of an 8-node system, I see it has 4×1620 watt power supplies, for a total of 6480 watts. On a 208V feed that means I'm pulling more than 30A at peak (6480 W / 208 V ≈ 31.2 A), so I've maxed out my 42U rack's circuit in 6U of space. I realize this is 'peak load', but it seems a bit extreme.
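Spelled out, my back-of-the-envelope math looks like this (nameplate numbers only, assuming a power factor of 1 on the 208V single-phase feed):

```python
# Worst-case nameplate math for one 8-node chassis with 4 x 1620 W supplies.
psu_watts = 1620
psu_count = 4
voltage = 208

nameplate_watts = psu_watts * psu_count            # 6480 W
amps = nameplate_watts / voltage                   # ~31.2 A

chassis_per_rack = 42 // 6                         # 7 chassis would physically fit

print(f"nameplate: {nameplate_watts} W -> {amps:.1f} A at {voltage} V")
print(f"a full rack of these would be {chassis_per_rack * nameplate_watts / 1000:.1f} kW")
```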
Am I misunderstanding how this calculation works? I'm treating VA as equal to W, and I get that it won't actually pull this kind of load, but 30A is a lot of current. I don't have the luxury of buying one and using a Kill A Watt to measure it accurately. The specs don't make it sound like the supplies are redundant, but that's a tremendous amount of current.
Has anyone deployed blades or multi-node servers and measured the actual current draw? I'd love to get a Dell M1000e, but the prospect of trying to budget for 40A just makes me need to lie down.
EDIT: If I use a Kill A Watt to measure the input current for a system with n power supplies, do I sum the readings? Are they all pulling 1/n?
Yes, blades are dense. :)
You need to use a power budgeting tool to determine the maximum power draw of your particular hardware configuration. Your reseller should be helping you with this (since that's what I do :)
Multiple power supplies can have quite a few possible scenarios:

- Non-redundant: all supplies share the load, maximum power N×Wattage
- N+1 redundant: one supply is a spare, maximum power (N-1)×Wattage
- N+N redundant: half the supplies are spares, maximum power (N/2)×Wattage, plus a small constant C for the chassis itself
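As a rough illustration of what those scenarios mean for the box in the question (a sketch only, using the formulas above; the real figures have to come from the vendor's power budgeting tool, and the 100 W value for C is just a placeholder):

```python
# Maximum draw under the common redundancy schemes for 4 x 1620 W supplies.
n, wattage, c = 4, 1620, 100   # c: small chassis overhead, placeholder value

scenarios = {
    "non-redundant (N)": n * wattage,             # 6480 W
    "N+1 redundant":     (n - 1) * wattage,       # 4860 W
    "N+N redundant":     (n // 2) * wattage + c,  # 3240 W plus a bit
}

for name, watts in scenarios.items():
    print(f"{name:>18}: {watts} W  ({watts / 208:.1f} A at 208 V)")
```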
Your configuration of 4×1620W is probably N+N, so maximum draw is around 3240W plus a bit. But check the documentation! It's also likely that each of the above scenarios is software-configurable, so take note of that.

Oh, and by the way, W = VA × power factor, so just to note, the math is a little off too: the current is W / (PF × V), slightly more than W / V.
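To make the power-factor point concrete (a sketch; the 0.95 figure is an assumption for a modern supply with active PFC, not a number from Dell's spec sheet):

```python
# Real power (W) vs apparent power (VA): W = VA * PF, so the current you size
# the circuit for is VA / V = W / (PF * V), a bit more than W / V.
real_watts = 3240          # the N+N maximum from above
voltage = 208
power_factor = 0.95        # assumed; check the PSU documentation

apparent_va = real_watts / power_factor
amps = apparent_va / voltage

print(f"{real_watts} W at PF {power_factor} -> {apparent_va:.0f} VA -> {amps:.1f} A at {voltage} V")
```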
Good.
Yes, using 1 phase.
You are off in three things. THAT SAID: yes, blades are dense, and yes, you need special racks for that and a lot more power density than a normal cheapo colocation center is willing to handle. The three things:
1. This is not about your power - directly. It is about your heat. When a data center is planned, they simply do not foresee people blowing 20kW in a rack, so the cooling is not there. There are special racks for up to (IIRC) 30kW, but they COST, and basically have their own internal air conditioning so the heat goes out in a liquid, not into the room air. This will COST - and, brutally speaking, most data centers I have seen are not prepared for that at all.
2. Web-hosting colocation especially is - ah - sometimes SO low-power per rack that it is a joke, and we can only hope Intel really gets the power draw of computers down. I normally cannot fill a rack with 2 processors per U and expect the data center to handle it, and dual-socket 1U machines are not exactly high density.
3. Besides the fact that I could never make financial sense of this Dell option - SuperMicro has chassis with 2 nodes per rack unit (check their Twin / FatTwin cases) that cost a LOT less and do not require super-expensive extras - this IS costly. First, you will pull a LOT of power, which means a lot of additional cost and investment for cooling. Second, the power bill alone will be high (rough numbers below). Live with it, and hope for less power-hungry chips ;)
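To put very rough numbers on the heat and the bill (a sketch only; the per-rack cooling budget, electricity rate, and overhead factor below are assumptions for illustration, not quotes from any real facility):

```python
# Rough rack-density and running-cost estimate for the 8-node, 6U chassis.
chassis_watts = 3240        # the N+N maximum from the other answer
chassis_per_rack = 42 // 6  # 7 chassis if you physically filled the rack

rack_kw = chassis_per_rack * chassis_watts / 1000   # ~22.7 kW of heat per rack
typical_colo_kw = 5.0       # assumed budget for a "cheapo" colo rack

kwh_per_year = chassis_watts / 1000 * 24 * 365       # one chassis running flat out
price_per_kwh = 0.10        # assumed electricity rate, $/kWh
overhead = 1.6              # assumed cooling/distribution multiplier (PUE-like)

print(f"full rack: {rack_kw:.1f} kW vs ~{typical_colo_kw} kW a typical colo rack can cool")
print(f"one chassis: {kwh_per_year:,.0f} kWh/year, "
      f"~${kwh_per_year * price_per_kwh * overhead:,.0f}/year including cooling overhead")
```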