It seems like there's a lot of disagreement about the right approach when it comes to installing rackmount servers. There have been threads discussing cable arms and other rackmount accessories, but I'm curious:
Do you leave an empty rack unit between your servers when you install them? Why or why not? Do you have any empirical evidence to support your ideas? Is anyone aware of a study which proves conclusively whether one approach is better or not?
If your servers use front-to-back flow-through cooling, as most rackmount servers do, leaving gaps can actually hurt cooling. You don't want the cold air to have any way to reach the hot aisle except through the server itself. If you need to leave gaps (for power concerns, floor-weight issues, etc.), you should use blanking panels so air can't pass between the servers.
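To put a rough number on why bypass air matters: a front-to-back server has to pull enough air through itself to carry its heat away at whatever temperature rise it was designed for. The sketch below is just the basic energy balance; the 500 W draw and 15 K rise are assumed figures for illustration, not from this answer.

```python
# Rough energy-balance sketch (illustrative numbers, not from the answer above):
# how much air a server must pull through itself to remove its heat at a given
# inlet-to-exhaust temperature rise.

RHO_AIR = 1.2     # kg/m^3, density of air near sea level
CP_AIR = 1005.0   # J/(kg*K), specific heat of air

def required_airflow_m3h(power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/h) needed to carry away power_w at a delta_t_k rise."""
    mass_flow_kg_s = power_w / (CP_AIR * delta_t_k)
    return mass_flow_kg_s / RHO_AIR * 3600

# Hypothetical 1U server drawing 500 W, designed for a 15 K front-to-back rise:
flow = required_airflow_m3h(500, 15)
print(f"{flow:.0f} m^3/h (~{flow * 0.589:.0f} CFM)")  # roughly 100 m^3/h, ~59 CFM
```

If an open U lets hot exhaust recirculate to the intakes, the inlet temperature climbs and that fixed airflow removes correspondingly less heat, which is exactly what blanking panels are there to prevent.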
I have never skipped rack units between rackmount devices in a cabinet. If a manufacturer instructed me to skip U's between devices I would, but I've never seen such a recommendation.
I would expect any device designed for rack mounting to exhaust its heat through either the front or rear panels. Some heat is going to be conducted through the rails and the top and bottom of the chassis, but I would expect that to be very small compared to the heat carried out the front and rear by the airflow.
In our data center we do not leave gaps. We have cool air coming up from the floor and gaps cause airflow problems. If we do have a gap for some reason we cover it with a blank plate. Adding blank plates immediately made the tops of our cold aisles colder and our hot aisles hotter.
I don't think I have the data or graphs anymore, but the difference was very clear as soon as we started making changes. Servers at the tops of the racks stopped overheating, and we stopped cooking power supplies (which we had been doing at a rate of about one a week). I know the changes started after our data center manager came back from a Sun green data center expo, where he sat in on some seminars about cooling and the like. Prior to this we had been using gaps, partially filled racks, and perforated floor tiles both in front of and behind the racks.
Even with the management arms in place, eliminating gaps has worked out better. All our server internal temperatures, everywhere in the room, are now well within spec. This was not the case before we standardized our cable management, eliminated the gaps, and corrected our floor tile placement. We'd like to do more to direct the hot air back to the CRAC units, but we can't get funding yet.
I don't skip Us. We rent, and Us cost money (rough math below).
There's no reason to for heat these days. All the cool air comes in the front and out the back; there are no vent holes in the tops any more.
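On the "Us cost money" point, a back-of-the-envelope sketch (the per-U price is a made-up number, not the poster's): leaving a 1U gap after every 1U server simply doubles the rack space you pay for.

```python
# Back-of-the-envelope rack-space cost (hypothetical colo pricing, for illustration).

COST_PER_U_MONTH = 25.0   # assumed USD per rack unit per month

def monthly_cost(servers_1u: int, gap_after_each: bool) -> float:
    """Rack units billed for a set of 1U servers, with or without a 1U gap each."""
    units = servers_1u * (2 if gap_after_each else 1)
    return units * COST_PER_U_MONTH

print(monthly_cost(20, gap_after_each=False))  # 20U -> 500.0 per month
print(monthly_cost(20, gap_after_each=True))   # 40U -> 1000.0 per month
```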
Google doesn't leave a U between servers, and I'd guess they are concerned with heat management. It's always interesting to watch how the big players do the job. Here is a video of one of their datacenters: http://www.youtube.com/watch?v=zRwPSFpLX8I&feature=player_embedded
Go directly to 4:21 to see their servers.
We have 3 1/2 racks' worth of cluster nodes and their storage in a colocation facility. The only places we've skipped U's are where we need to route network cabling to the central rack where the core cluster switch is located. We can afford to do so space-wise since the racks are already maxed out in terms of power (rough numbers sketched below), so it wouldn't be possible to cram more nodes into them :)
These machines run 24/7 at 100% CPU, some of them with up to 16 cores in a 1U box (4x quad-core Xeons), and I've yet to see any negative effects from not leaving spaces between most of them.
So long as your equipment has a well-designed air path, I don't see why it would matter.
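To illustrate the power ceiling mentioned above, here's a rough budget with assumed figures (a ~5 kW usable feed and ~350 W per loaded 1U node; the poster's real numbers aren't given): with dense 1U boxes, the feed usually runs out long before the 42 U do, so the leftover space can go to cable routing or gaps at no real cost.

```python
# Rough rack power budget (assumed figures, not the poster's actual numbers).

RACK_FEED_KW = 5.0    # hypothetical usable power per rack, e.g. a 208V/30A
                      # circuit derated to ~80% for continuous load
NODE_DRAW_KW = 0.35   # hypothetical draw of a fully loaded quad-socket 1U node
RACK_USABLE_U = 42

nodes_by_power = int(RACK_FEED_KW // NODE_DRAW_KW)  # 14 nodes before the feed is full
nodes_by_space = RACK_USABLE_U                      # 42 nodes if only space mattered

print(f"power allows {nodes_by_power} nodes; space would allow {nodes_by_space}")
```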
Don't leave space if you have cool air coming up from the floor, and use blanking panels in unused U space. If you just have a low-tech cooling system using a standard A/C unit, it's best to leave gaps to minimize hot spots when you have hot servers clumped together.
I have large gaps above my UPS (for installing a second battery in the future) and above my tape library (in case I need another one). Other than that I don't have gaps, and I use blanking panels to fill the empty spaces to preserve airflow.
I wouldn't leave gaps between servers, but I will for things like LAN switches - this allows me to put some 1U cable management bars above and below... but it's definitely not done for cooling.
Every third U, but that's due to the management arms and the need to work around them rather than heat. The fact that those servers each have 6 Cat5 cables going to them doesn't help. We do make heavy use of blanking panels, and air dams on top of the racks, to prevent recirculation from the hot aisle.
Also, one thing we have no lack of in our data center is space. It was designed for expansion back when 7-10U servers were standard. Now that we've gone with rack-dense ESX clusters, it's a ghost town in there.