Server room cooling: where to spend money wisely

Four things inspired me to write this post:
1. The publication “Why is it important to maintain temperature conditions in the server room? How server room cooling is usually arranged”, in which the author took on the honorable and very difficult mission of explaining why servers need cooling.
2. The errors I found in that post.
3. My own experience.
4. The near-zero number of informative articles on Habr about data center infrastructure (by which I mean the server room as a small data center). Although Beeline and the guys from TsODy.rf are doing a great job in this regard.

So, what this post will be about.

First, a small excursion into the theory of server room cooling. Second, I will try to address the main misconceptions in cooling planning. And third, an analysis of where it is still worth investing money and what can be skipped.

Today there are two global data center cooling strategies:

1. Free cooling. This is when the servers are cooled directly by outside air with minimal preparation (usually basic filtering, plus heating in winter).
2. Controlled cooling, let's call it that. This is when you condition the air for contaminants, humidity and temperature before feeding it to the servers. This also includes various forms of indirect free cooling (using outside air to cool a heat exchanger through which the data center air circulates).

The advantages of the first strategy are obvious: low implementation costs, low maintenance costs, laughably small electricity bills. The disadvantages are just as clear: uncontrolled humidity and dustiness of the air, which inevitably leads to failures of server components. This approach has its followers, usually very large technology companies. Why does it work for them and not for everyone else? There are three reasons:

1. A network of fully redundant sites. If one site fails, another picks up the load.
2. The need to stay on the cutting edge of technology. A server running on bad air will fail in about a year, but over that year these companies replace a third of their server fleet anyway. There is no point in preserving hardware that will be scrapped in a year.
3. Scale and electricity bills. Cooling is the largest item on the electricity bill. A 1% reduction in cooling costs saves them several million dollars, to say nothing of a 30-50% reduction. So they are willing to put up with some inconvenience.

The second strategy implies greater reliability and a long service life of the cooled equipment. The most traditional example is the banking industry, along with every other company that does not swap servers like gloves. The disadvantages of this strategy are price, price and price: construction, maintenance, electricity.

It is clear that most companies are after the "functional, no frills" option. But simple is not always easy: sometimes it is simple and done right, and sometimes quite the opposite (that came out sounding like a boxer).

Let's move on to more practical things. When people talk about server room cooling, they primarily mean temperature control. That is true, but not enough. The three pillars of proper cooling are temperature, air volume and humidity. The second tier is airflow control: how to deliver cold air to where the server will take it in, how to capture the hot air from the server's exhaust and direct it back to the air conditioner, and how to do all this so that hot and cold air do not mix.

Temperature is the simple part. There are the server manufacturer's recommendations and there are the ASHRAE recommendations. I consider 22-24 °C a normal temperature for most server rooms.

If everyone remembers about temperature, practically no one building a server room thinks about air volume. Look at a server's technical specifications. In addition to power consumption, dimensions and so on, there is a parameter usually measured in CFM (cubic feet per minute): the volume of air the server pumps through itself. That is, your server needs air of a certain temperature and in a certain volume. In bold caps: "in a certain volume". This brings us straight to the question of using household split systems in the server room. Here's the thing: they will not cope with the required volume. The heat output of a person is incomparably small compared to a server, and household air conditioners are designed specifically to create a comfortable climate for people. Their small fans (like the forelimbs of a tyrannosaurus) cannot move the amount of air needed to cool a server. As a result, we get a picture where the server pushes air through itself, the air conditioner cannot take it in, and hot air mixes with cold. You have probably been in a server room where the air conditioner puts out +16 °C while the room sits at +28 °C. I have. Maybe your server room is like that right now?
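To get a feel for the volumes involved, here is a rough back-of-the-envelope calculation linking heat load, temperature rise and airflow. This is a sketch using standard air properties, not a sizing method; for real planning, use the CFM figure from the server's datasheet. The 5 kW load and 10 K rise below are assumed example numbers.

```python
# Rough estimate of the airflow a heat load requires.
# Assumes dry air at roughly room conditions; example numbers only.
RHO = 1.2    # air density, kg/m^3
CP = 1005.0  # specific heat of air, J/(kg*K)

def required_airflow_m3h(heat_watts: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/h) needed to remove heat_watts
    with an intake-to-exhaust temperature rise of delta_t_k kelvin."""
    m3_per_s = heat_watts / (RHO * CP * delta_t_k)
    return m3_per_s * 3600.0

def m3h_to_cfm(m3h: float) -> float:
    """Convert m^3/h to cubic feet per minute."""
    return m3h * 0.5886

# Example: a 5 kW rack with a 10 K intake-to-exhaust rise
flow = required_airflow_m3h(5000, 10)
print(f"{flow:.0f} m^3/h, i.e. {m3h_to_cfm(flow):.0f} CFM")
```

Roughly 1,500 m³/h for a single 5 kW rack: several times the airflow of a typical household split, which is exactly the mismatch described above.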

And, so as not to get up twice:

1. Household splits are designed for 8/5 operation, while a server room runs 24/7. A split will exhaust its service life in a year and a half.
2. Splits do not know how to deliver air of the right temperature to the server; they only know how to blow air of the right temperature out of themselves, and what actually reaches the server does not concern them (the bastards).
3. Their intake and exhaust are too close together, which means hot and cold air will inevitably mix (and here, see point 2).
4. It is very difficult to make splits follow external temperature sensors (and here again, see point 2).
In general, do not use household splits. Just don't. In the long run, a good precision air conditioner will turn out cheaper than a split.

Regarding humidity control. There is one incorrect claim in the article mentioned at the beginning. Humidity certainly needs to be controlled, but the air usually needs to be humidified, not dried. The thing is, a server room has a closed air loop (at least it should). The amount of moisture in the air at stage 0 (server start-up) is within certain limits. During cooling, most of that moisture condenses on the air conditioner's heat exchanger (the temperature difference is too great) and goes down the drain. The air becomes too dry, which means static on the boards and a drop in the heat capacity of the air. Therefore, a good way to spend money is a capable humidifier and a water treatment system for it.
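The condensation mechanism is easy to check with a dew point calculation: whenever the coil is colder than the dew point of the room air, moisture drops out. Below is a sketch using the Magnus approximation; the 24 °C / 45 % RH room conditions are assumed example values.

```python
import math

# Magnus approximation constants for dew point over water
# (reasonable for roughly 0..60 C).
A, B = 17.62, 243.12

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Dew point in Celsius for a given air temperature and relative humidity."""
    gamma = math.log(rh_percent / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

# Example: room air at 24 C and 45 % RH
td = dew_point_c(24.0, 45.0)
print(f"dew point: {td:.1f} C")
# Any coil surface colder than td will condense moisture out of the air,
# which then leaves via the drain -- drying the closed loop over time.
```

Typical cooling coils run well below that dew point, so every pass through the air conditioner strips moisture; that is why the humidifier ends up being a necessity rather than a luxury.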

Now, airflow control. In the vast majority of cases, fan blocks in cabinets are completely useless: they draw air from bottom to top, while servers pull it from front to back. What you should do instead is cut the fan blocks from the budget and install blanking panels in the cabinet's empty units. Even if you have to board things up, close every opening through which air from the rear of the cabinet can reach the front. Passive airflow control methods work better than active ones in most cases. And they are cheaper.
Climate monitoring. A very important point. Without monitoring, you will never know what is not working as intended. Both temperature and humidity need to be monitored. Humidity can be monitored at the point farthest from the humidifier, since this indicator is roughly the same at any point in the room. Temperature, however, must be monitored at the front door of each cabinet. If you do not distribute cold air from under a raised floor, one sensor per cabinet is enough. If you do distribute air through a raised floor (obviously using proper air conditioners by then), the right strategy is to monitor the air at different heights (for example, 0.5 m and 1.5 m). It is also worth mentioning that you should never, under any circumstances, put cabinets with glass or solid doors in a server room. Air must pass freely through the cabinet and the server.
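The sensor layout above maps directly onto a simple check: intake temperature per cabinet, per height. Here is a minimal sketch; the rack names, readings and the 18-27 °C envelope (a common ASHRAE-style recommended range) are assumptions for illustration, not values from this article.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    cabinet: str
    height_m: float  # sensor height on the cabinet front door
    temp_c: float

# Example intake-temperature envelope (assumed, ASHRAE-style).
T_MIN, T_MAX = 18.0, 27.0

def out_of_range(readings: list[Reading]) -> list[Reading]:
    """Return readings whose intake temperature leaves the envelope."""
    return [r for r in readings if not (T_MIN <= r.temp_c <= T_MAX)]

readings = [
    Reading("rack-01", 0.5, 21.5),
    Reading("rack-01", 1.5, 28.2),  # hot air recirculating at the top?
    Reading("rack-02", 0.5, 22.0),
]
for r in out_of_range(readings):
    print(f"ALERT {r.cabinet} @ {r.height_m} m: {r.temp_c} C")
```

Note how the two heights on rack-01 tell different stories: the floor-level sensor looks fine while the 1.5 m one is hot, which is exactly the recirculation problem a single sensor would miss.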

As a summary:
1. Do not use household splits - they do everything wrong.
2. Control humidity.
3. And air flow.
4. Install plugs on unused cabinet units.
5. Use cabinets with perforated front and rear doors. If yours don't have them, remove the doors entirely. Or take a drill to them.
6. Place the monitoring sensors correctly: measure temperature at the front of the cabinet, humidity anywhere in the room.
7. Remove the heating radiator from the server room. Radiators not only heat, they sometimes leak water too.
8. Remove the windows. Windows are heat gains and the easiest way into the room, bypassing the armored server room door and five security posts.
9. Provide proper water-, vapor- and thermal insulation of the room.
10. Tools are secondary. There is a huge number of cooling and monitoring solutions. The main thing is to understand what matters most to you today; a tool for it will be found.
11. Accept the fact that today IT is not only about "patching KDE under FreeBSD", VMs and DBs, but also about seemingly remote things like power, cooling, physical security and architecture.

Good luck building the right infrastructure.
