I've used HP DL machines at work and found them to be blazingly fast, but very expensive - up to $15k.
I am curious, though: although the spec we typically used (e.g. dual AMD Opteron 2.6GHz, 8 or 16GB RAM) was good, it was not so far off the 'headline' (shop window) specs of many a desktop machine that I have used. For example, I am now using a commodity machine which has 4GB RAM and a 2.8GHz Intel dual-core CPU, and costs ~ $400.
However, the DL was clearly much, much faster. My reference is compiling a similar code base, which on the DL might take a couple of seconds and on the commodity hardware 10 seconds (assume the machines were doing nothing else, so minimal load and RAM usage).
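(For reference, a fairer comparison than a single run is to time several clean builds in a row on each box. This is only a rough sketch assuming a make-based project; the make and make clean commands are placeholders for whatever build system you actually use.)

```python
import subprocess
import time

# Placeholder commands -- substitute whatever your build system actually uses.
CLEAN_CMD = ["make", "clean"]
BUILD_CMD = ["make"]

RUNS = 5
timings = []

for _ in range(RUNS):
    subprocess.run(CLEAN_CMD, check=True, capture_output=True)
    start = time.perf_counter()
    subprocess.run(BUILD_CMD, check=True, capture_output=True)
    timings.append(time.perf_counter() - start)

print("per-run seconds:", [round(t, 2) for t in timings])
print("average seconds:", round(sum(timings) / RUNS, 2))
```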
So my question is: given similar headline specs (RAM & CPU), what is it about the DL's build and architecture that makes it so much faster than a commodity machine?
Or phrased more simply, given a set CPU and RAM, what are the other server architecture & component features that significantly influence its performance?
Well, something that compiles in a couple of seconds vs. 10 is hardly a good test.
The main difference is most likely going to be the disk subsystem in this case. Your commodity hardware probably has a commodity 7200rpm SATA drive in it, whereas the enterprise-class HP DL has multiple enterprise-class 10k/15k rpm SCSI or SAS drives in a RAID of some sort.
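If you want to check this, a quick synchronous-write test run on both machines will show the gap. This is just a rough sketch; the file name, block size and write count are arbitrary, nothing HP-specific.

```python
# Measure small synchronous write latency, which is where a battery-backed
# RAID controller cache really shines compared with a single consumer SATA disk.
import os
import time

PATH = "latency_test.bin"
WRITES = 200
BLOCK = b"\0" * 4096  # 4 KiB per write

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
start = time.perf_counter()
for _ in range(WRITES):
    os.write(fd, BLOCK)
    os.fsync(fd)          # force each write to stable storage
elapsed = time.perf_counter() - start
os.close(fd)
os.remove(PATH)

print(f"{WRITES} fsync'd 4 KiB writes in {elapsed:.2f}s "
      f"({WRITES / elapsed:.0f} synchronous writes/sec)")
```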
Also, how are you defining similar hardware? Unless you also have dual 2.6GHz Opterons in your commodity box, you're really comparing apples to oranges.
The biggest reason why companies like HP can make faster servers is that they build complete servers rather than combining components. By building motherboards with particular memory, CPUs and other devices in mind, they can tweak them for those particular components, rather than having to deal with everything out there, and can drive them at the edges of the specifications. They have significant in-house technical expertise in building the best server they can, compared with someone who just uses an off-the-shelf motherboard. They also build them with server usage in mind, so, as 3dinfluence commented, they use faster drives.
The reason they are expensive is that they have to cover the cost of the R&D that they do, plus HP DL servers are seriously well designed for data centre usage. The quick-fit rails are amazing. The built-in lights-out management is a godsend. They might be more expensive, but they're worth every penny.
Most of the other answers here have addressed the management features of an HP server, but not so much the reasons you would see a performance difference. HP-specific features like iLO and Intelligent Provisioning are wonderful, but they do not make your compile times any faster.
First off, make sure you are really comparing apples to apples on the CPU/RAM.
Check that the processors are identical models with the same clock speed, cache size, and miscellaneous features (Turbo Boost, Hyper-Threading, etc.). There can be vast variation in performance across processor models, even if they appear to have the same clock speed and number of cores!
Check the speed of your memory, not just the size. How that memory is arranged (e.g. NUMA in a dual socket system) will also affect performance.
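As a quick sanity check, something like the following dumps the exact CPU model and installed memory on a Linux box; it's just a rough sketch, and the Windows and DIMM-speed equivalents are noted in the comments.

```python
# Compare real part numbers, not headline specs.
# Windows equivalents: `wmic cpu get name`, `wmic memorychip get speed,capacity`.
# Actual DIMM speed on Linux needs `sudo dmidecode --type memory`.
import re

with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()

models = re.findall(r"model name\s*:\s*(.+)", cpuinfo)
print(f"{len(models)} logical CPUs: {models[0] if models else 'unknown'}")

with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("MemTotal"):
            print(line.strip())
            break
```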
The next thing that would likely affect the performance of a compile workload is the disk subsystem. Your typical workstation is probably using a single 7200rpm SATA disk (maybe an SSD). Servers tend to use higher-performance drives (10k-15k SAS) in a RAID array. The RAID controller usually has a dedicated cache that can boost performance, and a RAID array allows disk operations to be parallelized, which can boost performance further.
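To see the parallelism effect concretely, you could compare random-read throughput at different concurrency levels. This is a rough, POSIX-only sketch; the test file name, block size and thread counts are arbitrary, and the file should be much larger than RAM so the page cache doesn't hide the disk.

```python
# A single SATA spindle gains little from concurrent requests; an array with
# multiple spindles (or an SSD) services overlapping requests in parallel.
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH = "big_test_file.bin"   # pre-existing test file, much larger than RAM
BLOCK = 4096                 # 4 KiB random reads
READS_PER_THREAD = 500

def random_reads(n):
    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    try:
        for _ in range(n):
            offset = random.randrange(0, size - BLOCK)
            os.pread(fd, BLOCK, offset)  # the GIL is released during the syscall
    finally:
        os.close(fd)

for workers in (1, 4, 8):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(random_reads, READS_PER_THREAD) for _ in range(workers)]
        for f in futures:
            f.result()  # propagate any errors from the worker threads
    elapsed = time.perf_counter() - start
    print(f"{workers} threads: {workers * READS_PER_THREAD / elapsed:.0f} random reads/sec")
```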
The other thing to keep in mind is that servers tend to use a chipset that provides a lot more I/O throughput. A server chipset will often have several "wide" PCI Express links (x16, x8), whereas a desktop chipset will usually have mostly x1 PCIe links. A server chipset is designed to be used with multiple high-bandwidth 10GigE and Fibre Channel controllers, while desktops rarely have any peripheral cards in them except for maybe a video card. This could come into play for a network-intensive application. However, I sincerely doubt it would have any impact on the workload you described (compilation).
Note that none of the things I listed above are HP-specific. If you built a Supermicro server with the same specs as an HP server (CPU, RAM, disk, chipset), it would perform the same and would be significantly cheaper. It would be missing HP's proprietary value-added features, however.
I think to really get a meaningful answer as to why the HP server was faster, I would recommend doing the following:
Use a benchmarking tool like SiSoftware Sandra that can benchmark the various subsystems of a PC (CPU, RAM, Disk, etc). Compare the scores of each subsystem on the HP server to your PC. If you see one subsystem that scores significantly higher on the HP server, that probably explains the difference.
Use the Windows Performance Monitor to profile your system's workload while it is compiling, and figure out whether the limiting factor on your system is CPU, RAM, or disk (a rough script-based version of the same idea follows these suggestions).
Get an exact list of components in your server and PC and compare them. Identify differences.
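If you'd rather script that profiling step than click through Performance Monitor, something along these lines works on both Windows and Linux. It assumes the third-party psutil package (pip install psutil) and simply samples overall CPU, memory and disk activity while the build runs.

```python
# Sample CPU, memory and disk activity once a second and eyeball which
# resource is pegged during the compile.
import time
import psutil

SAMPLES = 30  # watch for ~30 seconds while the build is running

prev_disk = psutil.disk_io_counters()
for _ in range(SAMPLES):
    cpu = psutil.cpu_percent(interval=1)   # blocks for the 1 s sample
    mem = psutil.virtual_memory().percent
    disk = psutil.disk_io_counters()
    read_mb = (disk.read_bytes - prev_disk.read_bytes) / 1e6
    write_mb = (disk.write_bytes - prev_disk.write_bytes) / 1e6
    prev_disk = disk
    print(f"cpu {cpu:5.1f}%  mem {mem:5.1f}%  "
          f"disk {read_mb:6.1f} MB/s read, {write_mb:6.1f} MB/s write")
```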
iLO and reliability.
iLO lets you set up, maintain and fix huge farms of remote servers far more easily than any 'home-made' server without similar functionality. Seriously, it's that important.
Over the years I've deployed a few thousand HP and Compaq DL-class boxes, and I've never seen a single DOA unit, nor any failure other than PSUs and disks, in the several thousand machine-years they've been operating.
I can't be clearer than that. Although I'm a 99%-blade guy these days (blades aren't as reliable, but they solve a lot of other problems for me), I wouldn't hesitate to buy DLs again, as they let me sleep better.