From what I read and hear about datacenters, there are not many server rooms that use water cooling, and none of the largest datacenters use water cooling (correct me if I'm wrong). Also, it's relatively easy to buy ordinary PC components with water cooling, while water-cooled rack servers are nearly nonexistent.
On the other hand, using water can possibly (IMO):
Reduce the power consumption of large datacenters, especially if it is possible to create direct-cooled facilities (i.e. facilities located near a river or the sea).
Reduce noise, making it less painful for humans to work in datacenters.
Reduce space needed for the servers:
- At the server level, I imagine that in both rack and blade servers it is easier to route water-cooling tubes than to waste internal space letting air pass through,
- At the datacenter level, even if the aisles between racks still have to be kept for maintenance access, the empty space under the floor and at ceiling level used to move air could be removed.
So why are water cooling systems not widespread, either at the datacenter level or at the rack/blade server level?
Is it because:
Water cooling is hard to make redundant at the server level?
The direct cost of a water-cooled facility is too high compared to an ordinary datacenter?
It is difficult to maintain such system (regularly cleaning the water cooling system which uses water from a river is of course much more complicated and expensive than just vacuum cleaning the fans)?
Water + Electricity = Disaster
Water cooling allows for greater power density than air cooling; so figure out the cost savings of the extra density (likely none unless you're very space constrained). Then calculate the cost of the risk of a water disaster (say 1% * the cost of your facility). Then do a simple risk-reward comparison and see if it makes sense for your environment.
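To make that concrete, here is a minimal back-of-envelope sketch in Python; every number in it (facility cost, value of the extra density, the 1% incident probability) is a placeholder to swap for your own figures:

```python
# Back-of-envelope risk/reward comparison for water cooling (illustrative numbers only).
facility_cost = 10_000_000         # total cost of the facility, USD (assumed)
density_savings_per_year = 50_000  # value of the extra rack density, USD/year (assumed)
incident_probability = 0.01        # yearly chance of a serious water incident (assumed)
incident_cost = facility_cost      # worst case: a leak writes off the facility (assumed)

expected_risk_cost = incident_probability * incident_cost
print(f"Expected yearly risk cost: ${expected_risk_cost:,.0f}")
print(f"Yearly density savings:    ${density_savings_per_year:,.0f}")
print("Worth considering" if density_savings_per_year > expected_risk_cost
      else "Stick with air cooling")
```

With these placeholder numbers the expected risk cost ($100,000/year) dwarfs the density savings, which matches the "likely none" remark above.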
I will break my answer into several parts:
Physical properties of water compared to other coolants
First a few simple rules:
If you compare water and mineral oil versus air (for the same volume):
water is around 3500 times better than air at absorbing heat (for a quick check of this figure, see the sketch below),
oil is a poor electrical conductor in all conditions and is used to cool high-power transformers.
Now some comments about what I said above:
Comparisons are made at atmospheric pressure. Under these conditions water boils at 100°C, which is above the maximum temperature of processors, so when cooling with water the water stays liquid.
Cooling with organic compounds like mineral oil or freon (what is used in refrigerators) is a classical cooling method for some applications (power plants, military vehicles...), but long-term use of oil in direct contact with plastics has never been done in the IT sector, so its influence on the reliability of server parts is unknown (Green Revolution doesn't say a word about it).
Making your liquid move is important: relying on natural convection inside a still liquid to remove heat is inefficient, and directing a liquid correctly without pipes is difficult. For these reasons, immersion cooling is far from being the perfect solution to cooling issues.
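As a rough sanity check of the "3500 times" figure, here is a small sketch comparing volumetric heat capacities; the property values are approximate textbook numbers, not something taken from this answer:

```python
# Volumetric heat capacity: how much heat 1 m^3 of coolant absorbs per kelvin of warming.
# Property values are approximate textbook figures at ~25 degC and atmospheric pressure.
coolants = {
    # name:        (density kg/m^3, specific heat J/(kg*K))
    "air":         (1.2,            1005),
    "mineral oil": (850,            1900),
    "water":       (1000,           4180),
}

air_capacity = coolants["air"][0] * coolants["air"][1]
for name, (rho, cp) in coolants.items():
    capacity = rho * cp  # J/(m^3*K)
    print(f"{name:12s}: {capacity / 1e3:8.1f} kJ/(m^3*K)  (~{capacity / air_capacity:.0f}x air)")
```

This gives roughly 3500x air for water and around 1300x for mineral oil, per unit volume.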
Technical issues
Making air move is easy, and leaks are not a threat to safety (to efficiency, yes). But it requires a lot of space and consumes energy (around 15% of your desktop's consumption goes to its fans).
Making a liquid move is troublesome. You need pipes, cooling blocks (cold plates) attached to every component you want to cool, a tank, a pump and maybe a filter. Moreover, servicing such a system is difficult since you need to drain the liquid. But it requires less space and less energy.
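The space difference comes from how much fluid you have to move for the same heat load. Here is a hedged sketch using the standard relation Q = ṁ · cp · ΔT; the 10 kW rack load and the 10 K allowed temperature rise are assumed numbers:

```python
# Coolant flow needed to remove a given heat load: Q = mdot * cp * dT
heat_load_w = 10_000  # heat to remove from one rack, W (assumed)
delta_t_k = 10        # allowed coolant temperature rise, K (assumed)

# name: (specific heat J/(kg*K), density kg/m^3) -- approximate textbook values
fluids = {"air": (1005, 1.2), "water": (4180, 1000)}

for name, (cp, rho) in fluids.items():
    mass_flow = heat_load_w / (cp * delta_t_k)  # kg/s
    volume_flow = mass_flow / rho               # m^3/s
    print(f"{name:5s}: {mass_flow:5.2f} kg/s  = {volume_flow * 1000:7.2f} L/s")
```

Roughly 800 L/s of air versus about a quarter of a litre per second of water, which is why air needs large plenums while water only needs thin pipes.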
Another important point is that a lot of research and standardization has been done on how to design motherboards, desktops and servers around an air-based system with cooling fans, and the resulting designs are not well suited to liquid-based systems. More info at formfactors.org.
Risks
Remarks
Cooling air reduces its capacity to hold water (humidity), so there is a risk of condensation (bad for electronics). When you cool air, you therefore need to remove water, and that requires energy. A normal humidity level for humans is around 70% relative humidity, so after cooling you may need to put water back into the air for the people.
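To see when condensation actually starts, here is a small sketch using the Magnus approximation for the dew point; the constants are a standard approximation and the 25 °C / 70% example values are assumptions:

```python
import math

def dew_point_c(temp_c: float, relative_humidity_pct: float) -> float:
    """Approximate dew point via the Magnus formula (reasonable between ~0 and 60 degC)."""
    a, b = 17.27, 237.7
    gamma = (a * temp_c) / (b + temp_c) + math.log(relative_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Room air at 25 degC and 70% relative humidity:
print(f"dew point: {dew_point_c(25, 70):.1f} degC")
# Any surface colder than that (a chilled coil or pipe) will start to condense water.
```

At 25 °C and 70% humidity the dew point is around 19 °C, so any cooling surface below that temperature has to deal with condensate.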
Total cost of a datacenter
When you consider cooling in a datacenter you have to take into account every part of it:
The cost of a datacenter is driven by its density (number of servers per square metre) and its power consumption (some other factors also come into play, but not for this discussion). The total datacenter surface is divided between the space used by the servers themselves, by the cooling system, by the utilities (electricity...) and by service rooms. If you have more servers per rack, you need more cooling and therefore more space for cooling; this limits the actual density of your datacenter.
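A toy sketch of that trade-off, with entirely made-up numbers, just to show how the cooling share of the floor eats into density:

```python
# Toy model: how cooling floor space limits effective server density (all numbers assumed).
total_floor_m2 = 1000      # total datacenter floor area
servers_per_rack = 40      # servers in one rack
rack_footprint_m2 = 2.5    # rack plus its share of the service aisle
kw_per_server = 0.4        # average power draw per server
cooling_m2_per_kw = 0.1    # floor space the cooling plant needs per kW of IT load

# Each rack effectively "costs" its own footprint plus the cooling space its load requires.
space_per_rack = rack_footprint_m2 + servers_per_rack * kw_per_server * cooling_m2_per_kw
racks = int(total_floor_m2 // space_per_rack)
print(f"effective space per rack: {space_per_rack:.1f} m^2")
print(f"racks: {racks}, servers: {racks * servers_per_rack}")
```

In this toy model, doubling the power per rack also grows the cooling term, so denser racks do not translate one-for-one into more servers per square metre.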
Habits
A datacenter is something highly complex that requires a lot of reliability. Statistics on the causes of downtime in datacenters say that around 80% of downtime is caused by human error.
To achieve the best level of reliability, you need a lot of procedures and safety measures. So historically in datacenters, all the procedures were written for air-cooling systems, and water is restricted to its safest uses, if not banned from the datacenter outright; the whole point is that water should never be able to come into contact with the servers.
Up to now, no company has come up with a water cooling solution good enough to change that state of affairs.
Summary
While we do have a few water-cooled racks (HP ones actually, I don't know if they still make them), direct water cooling is a little old-school these days. Most new large data centres are being built with suction tunnels that you push your rack into; these pull the ambient air through and expel, or capture for reuse, the heat collected as the air moves through the equipment. This means no chilling at all and saves huge amounts of energy, complexity and maintenance, though it does limit systems to very specific racks/sizes and requires spare rack space to be 'blanked' at the front.
Water is a universal solvent. Given enough time, it will eat through EVERYTHING.
Water cooling would also add a considerable (and costly) level of complexity to a data center which you allude to in your post.
Fire suppression systems in most data centers do not use water, for a few very specific reasons: water damage can be greater than fire damage in a lot of cases, and because data centers are tasked with uptime (with backup generators for power, etc.), it's pretty hard to cut power to something (in the event of a fire) just so you can squirt water on it.
So can you imagine having some type of complex water cooling system in your data center that gives up the ghost in the event of a fire? Yikes.
Water should NOT be used for datacenter cooling, but a mineral oil that gets along very well with electricity can be. See http://www.datacenterknowledge.com/archives/2011/04/12/green-revolutions-immersion-cooling-in-action/
Even though the solution is new, the technology is quite old. However, making this type of change in existing datacenters is very difficult, as you need to replace the existing racks with a new type of rack ...
I think the short answer is that it adds considerable complexity. It's not so much an issue of space.
If you've got large quantities of water to deal with (piping, runoff, etc.) you're adding a lot of risk... water and electricity don't mix well (or they mix too well, depending on how you look at it).
The other issue with water is humidity. On a large scale, it's going to throw all your air conditioning systems for a loop. Then there's mineral buildup from evaporation, and no doubt tons of other things I didn't think of here.
The big disincentive to using water in data centers is the fact that most water cooling systems are primitive. They all need quick connects to attach each server to the water source in the rack, and those are a source of failure, especially as you may have thousands of them in a DC. They also make the servers more difficult to service, and in most cases you still need fans, so you are adding complexity.
On the human side, most facilities managers resist change. They are very skilled with air cooling, and a move to liquid would make those skills obsolete. Furthermore, every facilities OEM will resist change, as it would imply a complete product-line redo.
Change will only come with a) better liquid cooling designs and b) legislation to force change
They do, but you need custom-engineered components. OVH (one of the biggest datacenter companies in the world) has been using water cooling for more than 10 years.
Check out this link where you can see their racks: http://www.youtube.com/watch?v=wrrZxmfevoE
The main problem for classic companies is that you need to do some R&D to use such technology.
Water-cooled data centres are very efficient and save on energy costs, provided you have purified water. However, the dangers are greater when water and equipment are in close contact:
1) moisture/humidity levels,
2) water against electricity.
Water may actually not be the best fluid to use. As pointed out, it will dissolve almost anything over time. Water certainly has its uses in cooling applications, but all-around it is not the best. Mineral oil may also come into play, but it is not the best option to choose either.
Special heat transfer oils are available that are non-corrosive - unlike water - and were specifically designed to be used as heat transfer fluids. Paratherm already makes a wide variety of these.
The problem would be hooking everything up to a closed-loop heat exchanger, and we are talking about large numbers of connections.
The solution already exists, although it is not used in electronics environments; it comes from farm machinery: hydraulics. Quick-snap hose couplings are leak-proof, and if for any reason they are disconnected, both the male and female ends seal themselves off. At the very worst you would get no more than 1-2 small droplets upon disconnecting.
So we can eliminate that part. Designing proper copper blocks that fit every single chip or circuit that needs to be cooled is, however, a demanding task, since with liquid cooling every part that needs to shed excess heat must be covered. It would take a relatively high-pressure pump, pressure sensors and reducers to make sure every rack has the proper amount of liquid circulating and to prevent a failure; electronic shut-off valves would also be needed. This is nothing new, as these parts are already made, even if originally for other purposes. Many small fans have the advantage of redundancy, so multiple pump units would be desirable to avoid a single point of failure.
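Purely as an illustration of the kind of supervision logic described above (the pump names, pressure thresholds and sensor/valve actions are all hypothetical, not any real product's interface):

```python
# Hypothetical supervision loop for a rack with redundant coolant pumps.
from dataclasses import dataclass

@dataclass
class Pump:
    name: str
    running: bool = False

MIN_PRESSURE_BAR = 1.5  # assumed lower limit: below this, circulation is insufficient
MAX_PRESSURE_BAR = 3.0  # assumed upper limit: above this, hoses/couplings are at risk

def supervise(pressure_bar: float, pumps: list[Pump]) -> str:
    """Return the action for one control cycle, given the loop pressure reading."""
    if pressure_bar > MAX_PRESSURE_BAR:
        return "open relief valve"                       # protect pipes and quick-connects
    if pressure_bar < MIN_PRESSURE_BAR:
        standby = next((p for p in pumps if not p.running), None)
        if standby is not None:
            standby.running = True
            return f"start standby pump {standby.name}"  # redundancy covers a failed pump
        return "close shut-off valves and raise alarm"   # last resort: isolate the rack
    return "ok"

pumps = [Pump("P1", running=True), Pump("P2")]
print(supervise(1.2, pumps))  # -> start standby pump P2
```

The point, as noted above, is that none of these parts or this logic is new; comparable controls already exist, just built for other purposes.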
Apart from that, if it is a true closed-loop cycle, then moving a low-viscosity heat transfer fluid rather than a huge amount of air would naturally pay for itself.
It would pay off in multiple ways. First of all, air-conditioning costs and fan running costs would be reduced; never underestimate those costs. Even a small fan draws a few watts of power, and fans do fail over time. A hydraulic pump can run, given the low pressures involved in this application, literally for years 24/7, substituting for a huge number of fans. Next, server-grade chips can withstand abuse and run at very high temperatures compared to desktop parts; even so, keep them cooler and their expected lifespan will be longer, which is never to be underestimated given the price of these things. Air filtration to prevent dust and moisture would no longer be needed.
These factors by far outweigh the drawbacks of this kind of cooling technology. However, the initial investment is higher. Surely the solution can provide higher-density server setups, but at the moment the investment is simply not considered by datacenters: rebuilding an existing cooling solution would take time, and time is money.
Servicing would also be very easy, as bulky heatsinks would simply not be required, nor would fans. A reduced number of potential failure points (every single fan is one of them) is something to keep in mind, and redundant pumps can kick in without any interaction from operators. Fans also generate heat themselves: consider a unit with 20 fans, each yielding no more than 5 watts; that is another 100 watts of heat to get rid of somehow. Pumps and their drive motors also generate heat, but not inside the rack unit; they sit separated and isolated from the target system.
In case of a short circuit, say a power supply's active element shorting, this kind of liquid cooling can actually move enough heat to reduce the likelihood of fire spreading. Moving fresh air towards a fire is not the best idea, and plastic parts melt and are flammable; a heat transfer fluid will happily operate at temperatures where fans would melt away, potentially becoming yet another source of short circuits.
So would liquid cooling be dangerous? I think that, from a safety point of view, heaps of small fans are far more dangerous. From a lifespan point of view, liquid cooling is by far preferable in my opinion. The only drawbacks are staff training and the initial investment. Apart from that, it is a far more viable solution that pays off well even in the medium term.