Okay, our datacenter doesn't charge us based on our usage, just on how many amps are delivered to our rack. I'm planning on putting in a new rack this upcoming year for some server expansion, based on this article. Each server has a crapload of drives and, in turn, uses a crapload of power.
This rack will include:
1 Cisco router
1 fiber switch
1 1U storage controller (a standard Dell 850 server)
11 3U storage servers, each with dual 750-watt power supplies
So... I'm trying to calculate how many amps I should theoretically need. I don't have an example server yet to see just how much a 3U, 45-drive server pulls. Since each one needs two power supplies, let's go with 1000 watts of pull -- plenty for the hard drives, plenty for the onboard stuff. 11 x 1000 watts is obviously 11,000 watts of power.
A = W / V. W = 11,000; V = 120; A ≈ 91.7.
This is where my brain kinda tripped out. 91 amps of power is sheer insanity. (The most I've had piped to a rack so far is 40, and it's only sitting at 60% capacity.)
And that's with only 1000 W used per server -- if the PSUs are pulling full current at 1500 watts per system, that's something to the tune of 140 amps.
.... Does this sound right, or am I just crazy? Surely I did something wrong; 140 amps of power is insanity.
Edit: The box is actually 4U, so the numbers are different than displayed here, but not by much.
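The arithmetic above can be sanity-checked in a few lines. A minimal sketch (Python just for illustration), assuming a purely resistive load -- real PSUs have a power factor slightly below 1 -- with a 208 V scenario thrown in since some datacenters offer it:

```python
# Sanity-check the amps-from-watts math: A = W / V.
def amps(watts: float, volts: float = 120.0) -> float:
    """Current draw for a given power at a given supply voltage."""
    return watts / volts

servers = 11

# Scenario 1: ~1000 W of actual pull per chassis
print(amps(servers * 1000))              # ~91.7 A

# Scenario 2: PSUs running flat out at 1500 W per chassis
print(amps(servers * 1500))              # ~137.5 A

# Same 11 kW load on a 208 V feed, if the DC offers one
print(amps(servers * 1000, volts=208))   # ~52.9 A
```

So the 91 A and ~140 A figures in the question are exactly what the formula gives; moving to 208 V cuts the current draw substantially for the same load.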
Your figures sound like they're on the right track in general. A dual-disk, dual-processor box like an HP DL380 pulls about 350-450 W on average, so add in 43 x 15-20 W for the disks and you're not far off a kW per chassis. This gives you two problems: you're going to need 4 x 32 A PDUs to support dual PSUs, and you've got a considerable amount of heat coming from the rear of the rack that needs scrubbing. I also suspect that your hosting company might bitch about this and insist on you splitting your kit between two or more racks instead.
That said, I can't think of any other way of getting that number of disks into that density. What make/model are the 3U boxes, by the way?
Edit - have you done the weight calculation yet? You should run that by them too; that sucker's going to be quite heavy.
Well, the intention of dual PSUs is to provide redundancy, not additional power as such, so your 750-watt units look undersized to me. The server configs you describe are extremely dense: 350-450 W for the basic server plus 45 x 10-15 W for the drives, and you're looking at a kilowatt per server. You should be putting in 1.2 kW PSUs that can run that with a bit of headroom.
You will also absolutely have to have some sort of phased spin-up for the drives; start-up power draw on drives is about 50% higher than peak power draw under operating load.
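To put rough numbers on why staggered spin-up matters, here's a sketch with assumed per-drive figures (17 W operating peak, +50% at spin-up, ~8 W once spun and idle, groups of 5) -- these are illustrative, not from any datasheet:

```python
# Assumed per-drive figures (illustrative only):
drives = 45
peak_w = 17.0            # operating peak per drive
spinup_w = peak_w * 1.5  # ~50% higher during spin-up
idle_w = 8.0             # per drive once spun up and idling

# All 45 drives spinning up at once -- drive load alone,
# before the motherboard, CPUs and fans are counted:
print(drives * spinup_w)  # ~1148 W

# Staggered in groups of 5: only one group spins up while the
# already-spun drives sit at idle.
group = 5
worst_case = (drives - group) * idle_w + group * spinup_w
print(worst_case)         # ~448 W
```

With these assumptions, sequencing the spin-up cuts the transient drive load by more than half, which is the difference between a PSU that survives boot and one that trips.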
As far as overall power draw is concerned, those numbers aren't all that insane -- that's what happens when you go dense. A rack with 4 fully populated Dell M1000e blade enclosures will draw somewhere around 16 (±5) kW, which would be about 130 A on a 120 V supply. The PSUs for those are rated to deliver 2360 watts at 12 V, so the max AC draw is around 2500 watts, and you need a minimum of 3 to power the chassis up. The recommended config has 6 PSUs to provide for AC circuit and multiple PSU failures.
The second thing you want to pay extra-special attention to when going dense is cooling design. At those densities you are crazy if you don't go for a hot/cold-aisle layout, and the racked kit needs to be designed for that. 20 kilowatts is a lot of heat in something as small as a 42U rack, and you want to be sure that you can pump it out efficiently with some level of redundancy. Unless you're putting this rack outside in the Arctic, you should make sure you have redundant (ideally hot-swappable) fans capable of shunting around 150 cubic feet per minute even when one fan has failed (assuming you are cooling 1 kW per server and can live with a 20-degree hot/cold differential).
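That ~150 CFM figure is consistent with the common airflow rule of thumb CFM ≈ 3.16 x watts / ΔT(°F), reading the 20-degree differential above as Fahrenheit. A quick sketch:

```python
# Rule-of-thumb airflow needed to carry away a heat load:
#   CFM ≈ 3.16 * watts / delta_T_degF
# (the 3.16 constant folds in the density and specific heat
# of air at roughly sea-level conditions)
def cfm_required(watts: float, delta_t_f: float) -> float:
    return 3.16 * watts / delta_t_f

# 1 kW per server, 20 degF hot/cold differential:
print(cfm_required(1000, 20))  # ~158 CFM per server
```

A tighter allowable differential or higher altitude pushes the required airflow up, so the redundancy point above (still hitting the number with one fan dead) is the real constraint.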
Finally, these really are going to be heavy systems. Your server unit is going to weigh something in the range of 160 lbs. With 11 of those, plus everything else you're planning, plus the rack itself, the whole thing will quite literally weigh a ton.
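A back-of-envelope check makes the "literally a ton" point concrete. Only the 160 lb per-server figure comes from the post; the rack and networking-gear weights below are assumptions for illustration:

```python
server_lb = 160   # per storage chassis (figure from the post)
servers = 11
rack_lb = 300     # assumed: empty 42U rack
other_lb = 100    # assumed: router, fiber switch, 1U controller, PDUs

total_lb = servers * server_lb + rack_lb + other_lb
print(total_lb)   # 2160 lb -- past a US ton (2000 lb)
```

Which is why floor-loading limits are worth running past the facility before the rack goes in.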
The Backblaze concept benefits from being one component in an integrated solution stack that lets them treat each server as a hot-swappable unit, with redundancy baked in higher up the stack in addition to the basic RAID/PSU/fan redundancy in the storage units themselves. That allows them to use fairly cheap components and not put any effort into hot-swap for failed individual components. That tends to be another feature of really dense systems, and you'd want to be sure you can live with that approach.
The figures sound sane to me. We have several racks running 150 amps at 208 volts; one beast of a rack is currently pulling 200+ amps at 208 volts. Also, as others have mentioned, check the weight of the rack -- it's going to be a heavy one; a couple of our storage racks are over 1000 kg. Mind you, the amps will fluctuate based on how much load is on your servers: anywhere from about 1 amp to possibly 8-10 amps, with inrush of 40 amps over a few ms -- and a few ms is enough time to do some nasty things if the service to your rack is undersized. Also, as if you needed more to think about, you will need power sequencing on the rack so the damn thing doesn't try to start all at once and really suck amps.
I see that your storage controller is a Dell. If your storage servers are Dells too then you might find the Dell Data Center Capacity Planner useful. It'll give guidance on power consumption, as well as weight!
Sounds sane. For comparison: in most racks I use dual 38 kVA PDUs (one diesel-backed, one UPS-backed; 400 V three-phase input, 32 A per phase). Remember that the power ratings on the PSUs are nominal ratings, and that the system is able to run with only one of its PSUs having juice.
The calculations are right, based on the maximum rated power of the PSUs.
You'll probably find that each server actually draws between 0.5 A and 3 A during its average usage. Inrush current may be as high as 40 A per server, but only for a really, really short time, like 20 ms.
I suspect you'd be looking closer to 20-35 A for the rack... but I could be mistaken.
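Scaling the per-server range above across the rack supports that estimate (a quick sketch):

```python
servers = 11
low_a, high_a = 0.5, 3.0   # measured average draw per server, from above

# Server load alone, before the router/switch/controller:
print(servers * low_a, servers * high_a)  # 5.5 A to 33.0 A
```

Add a few amps for the networking gear and the 1U controller and you land comfortably inside the 20-35 A guess -- far below the 91-140 A nameplate figure.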
On another note, I think it's interesting to see someone using the Backblaze method from that paper. I'd be interested to know how it turns out!
Here are some options to help you actually gauge your requirements.
On the cheap, you can hook up a Kill-A-Watt to each device and get some real-world measurements. (Be careful about measuring in a lab, though: a loaded server carries a much higher power load than an idle one.)
A server will often spike its load on initial boot. Keep that in mind. (If there's a power outage and everything comes on at once, you may spike over your limit.)
Alternatively, get a PDU like an APC 7901 (or similar device) which has an LED readout of the current amp draw. (The 7901 is only a 20-amp model.)
Our Cisco top-of-rack switches and fiber switches rarely go over 80 W. YMMV.
Your DC may only offer you 20 and 30 amp options, so it may be moot.
Your DC may also provide you with 208V power as an option.
A joke: I think everyone getting charged for delivered power vs. used power should plug in some space heaters to make sure they use everything they're paying for. :)
I hope this helps.
The nameplate figures on IT equipment are safety/regulatory figures, and you're going to be overbuilding if you use them to design things. Ask your vendor for the "ASHRAE Equipment Thermal Report" for the equipment in question; any major vendor will have one.
That report will give you much more realistic figures.