I'm a server guy, specifically a blade guy, and I know my high-end gear well, but I've got a new little challenge that I'm a bit lost on and that you may be able to help with.
I have a server spec that will do a specific job; I've looked at virtualising it, and for dull reasons that's not going to work (not just yet, anyway).
The machines have to have:
- a dual-core consumer processor with quite low power/heat (I'm thinking one of the newer i3s, perhaps?)
- no more than 4GB of regular desktop memory
- two PCIe slots: one capable of taking a single-slot x16 GPU (not a 480 or similar, more along the lines of a 9800GT or 240 etc., about 150W max), the other for a custom low-power DSP
- a built-in regular dual-channel sound card
- a single GigE PXE/iSCSI-bootable NIC
- I don't need any USB, keyboard, mouse, sound I/O, SATA/PATA or DVD/hard-disk at all.
- They'll be running either XP or W7 (32-bit).
Now I need to get as many of these into a data centre as possible. I also want them as inexpensive as possible (given the base specs), something I'm not normally bothered about and therefore no expert in. I'll need several hundred to thousands of these machines, and I have around an 8 kW-per-rack limit (this could go up a little).
I normally use HP blades, but even their BL2x220s work out very expensive and don't give me exactly what I need anyway. I looked at SGI/Rackable but they're all server-oriented with Xeons etc.
What are your thoughts? I know this isn't directly server related, but it is for professional reasons. Thanks for your help.
If you are building quite a few of them, you might find that designing your own case using something like http://www.protocase.com is the way to go. I believe this is the route Backblaze took: http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/
Find a consumer or low-end server motherboard that meets your needs and wrap a case around it. If your cards are short enough, you might be able to get them into 2U. (I don't think I've seen right-angle adapters for PCIe.)
You might also talk to your vendor: if you are a good customer and you need a few of these things, you might be able to get a custom option set up, especially if the vendor decides other customers will want something similar.
How about taking the approach Google takes (or at least took at one point)? Skip the case: just get a motherboard plus the needed components and some shelves with some sort of insulation. On a normal 42U four-post rack, I'd imagine you could get 4 or 5 mini-ITX-type boards on one shelf.
If it weren't for the sound card requirement, Dell M610x blades in an M1000e chassis might be an option. You get the aggregate benefit of more effective power/cooling from a (fairly) modern blade chassis, and the M610x will give you two full-length PCIe x16 slots that can support the power draw you need. They are full-height blades, though, so not all that dense, but your power budget would kill most denser solutions anyway, I think. The main drawbacks are the sound card issue, cost (these are decent dual-socket servers, after all), and overall density, which isn't great at 8 servers in 10U. The M610x platform is overkill for your CPU/RAM requirements, but you can configure them with a single low-end, low-power Xeon 5600 and 4GB of RAM rather than twin X5690s and 192GB, and with those components they are pretty skimpy on power consumption.
You can certainly get PXE/iSCSI boot from the on-board Broadcoms, and a diskless config is fine too. Since your overall power and I/O requirements are minimal, you can cut chassis costs by opting for 4 PSUs and just a single Gigabit pass-through module rather than a switch.
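For what it's worth, the diskless iSCSI boot side is usually just a matter of handing each NIC a root-path via DHCP (or chain-loading gPXE/iPXE and pointing sanboot at the target). A rough sketch, assuming an ISC dhcpd setup; the MAC, addresses and IQN below are made-up placeholders, not anything from a real environment:

    # dhcpd.conf fragment - one entry per blade, fixed address plus iSCSI root
    host blade01 {
      hardware ethernet 00:11:22:33:44:55;   # chassis-managed MAC (placeholder)
      fixed-address 10.0.0.101;
      # root-path format (RFC 4173): iscsi:<target-ip>:<protocol>:<port>:<LUN>:<target-iqn>
      option root-path "iscsi:10.0.0.10::::iqn.2010-11.san.local:blade01";
    }

The client-side equivalent, if you chain-load iPXE instead, is a one-line script: sanboot iscsi:10.0.0.10::::iqn.2010-11.san.local:blade01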
They do have an internal USB port, so it might be possible to meet the sound card requirement with a small USB audio device. Dimensions are limited to 15.9 mm wide x 57.15 mm long x 7.9 mm high, and there may be a limitation that the port only supports USB storage; I'm not sure.
I'd be surprised if you couldn't configure the stack at under 300 W per server (and maybe a lot less) under full load, even with your PCIe cards at full draw, so you should be able to get 24 of them into a rack with that 8 kW power budget.
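If it helps, the arithmetic behind that 24-per-rack figure is roughly as follows; the per-blade wattage and chassis figures are just my guesses, so treat this as a sanity check rather than a sizing tool:

    # Quick sanity check on servers per rack (all figures are rough guesses)
    rack_budget_w      = 8000   # stated 8 kW per-rack limit
    per_blade_w        = 300    # worst-case guess incl. share of chassis fans/PSU losses
    blades_per_chassis = 8      # full-height M610x blades in an M1000e
    chassis_height_u   = 10
    rack_height_u      = 42

    max_by_power = rack_budget_w // per_blade_w                               # 26 blades
    max_by_space = (rack_height_u // chassis_height_u) * blades_per_chassis   # 32 blades

    # Round down to whole chassis so you aren't half-populating an enclosure
    chassis = min(max_by_power, max_by_space) // blades_per_chassis           # 3 chassis
    print(chassis * blades_per_chassis, "servers per rack")                   # -> 24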
And even though you say you don't need KVM, having centralised chassis management/iDRAC and full plug-and-play at the server level (MAC addresses managed by the chassis) is surely a plus for something at this scale.
To be honest, though, if you are heading into the thousands then a more bespoke solution might be in order; it's definitely the best way to get a really power-efficient build that hits all the requirements without any fudging.
The Dell Precision R5400 is a rack-mounted workstation, but it uses Xeon processors and ECC RAM, so maybe you could make a couple of compromises. Given the highly specific nature of your requirements, you'll struggle to find a product that meets them exactly unless you build the machines yourself, which might be your best option. Tyan Tank 1U barebones units are well priced and support a wide range of processors.
Is there a specific reason why you don't use AMD CPUs?
Both offer similar performance, and the AMD CPUs and mainboards are cheaper.
Are the PCIe slots really needed? (It's almost impossible to find them at this density.)
If not, you should take a look at the Dell Fortuna: http://en.community.dell.com/dell-blogs/direct2dell/b/direct2dell/archive/2009/05/19/dell-launches-quot-fortuna-quot-via-nano-based-server-for-hyperscale-customers.aspx
Rackable/SGI also do customizations (chipset, CPU, etc.). If you're ordering thousands of these, that shouldn't be a problem; I'd talk to them about your specific needs.