Data Center for a Technopark: from bare concrete to Tier Facility certification
During the construction of the technical support center of one of the largest Russian technology parks, I was responsible for the engineering infrastructure. The facility became the seventh in Russia to receive Tier Facility certification from the reputable international Uptime Institute. In this photo post I will tell what it cost us, what solutions we used, and how the data center was tested for compliance with international standards.
The technical support center (TSC) of the Zhiguli Valley technopark is a two-story building. It houses six machine rooms with a total area of 843 sq. m and holds 326 racks with an average power of 7 kW each (the maximum load per rack is 20 kW). We, the specialists of LANIT-Integration, first saw the building at the stage of a bare concrete frame, with no utilities in place. We had to create the entire engineering and network infrastructure, as well as part of the computing infrastructure for automating the engineering systems.
First, I will talk a little about the project as a whole, and then show all of its "stuffing".
So, the technical support center of the Zhiguli Valley had to accommodate the servers, storage, and other equipment needed by the residents of the Samara Region innovation center. Such equipment, of course, requires special operating conditions: power supply, temperature, humidity, and so on, so it was economically sensible to install all of it in one place.
We examined the building, studied the documentation available to the customer, and proceeded to develop the design documentation (which had to pass a state expert review). The biggest difficulty was that we had only three months to develop the design and working documentation. In parallel with the working documentation, I had to prepare part of the documentation in English for review by the Uptime Institute.
As part of the project, we had to:
- make the data center energy efficient;
- build it in a short time;
- pass the Uptime Institute exam and obtain Tier III certification.
Since the density of engineering systems in a data center is very high and the deadlines were tight, I had to plan the work and the supply of materials and equipment in detail.
Again, because of the tight deadlines, we decided to use high-tech prefabricated modular structures: they let us avoid additional construction work, fire-resistant partitions, and heavy waterproofing. Such a module protects the room from water, fire, dust, and other factors dangerous to IT equipment, while its assembly takes only one to two weeks.
After the expert review of the design, we started developing the working documentation and ordering equipment. Speaking of equipment: since the project fell on a crisis year and there were difficulties with financing, we had to fit all the work into the financial plan and, first of all, purchase the expensive bulky equipment for which the manufacturer had fixed the price.
Changes in equipment delivery plans required even more careful planning. To keep 100-150 people working at the site without downtime, the project team paid a lot of attention to change management and risk management procedures. The stake was made on communication: any information about problems and delays was passed almost immediately along the whole chain from the team leads to the project manager and the engineers at the company's head office. The project team and procurement managers worked in Moscow, while the main crew worked directly at the technopark. Engineers repeatedly had to travel to the site to clarify and adjust design decisions, and it is almost 1,000 km by road from our Moscow office to the technopark. On site we set up a construction headquarters, where meetings with the customer and the work supervisors were held regularly, after which all the updated information was passed on to the engineering team.
Most of the blue-collar workers were recruited locally (up to 150 people); the narrowly specialized experts and engineers had to be sent on long business trips from Moscow.
In the end, everything worked out. I have saved the story of the main test, the Uptime Institute certification, for the very end. Now let's see how everything is arranged.
To get into the data center, you need an RFID pass and must pass identification at the entrance. All doors in the building are under access control, and entry to certain rooms is granted only after two-factor identification of the employee.
Structured Cabling Systems
For the data network we used HP FlexFabric and Juniper QFabric switches. The Panduit intelligent structured cabling system (SCS) with the PanView iQ physical infrastructure management system makes it possible to remotely monitor and automatically document the state of the physical layer of the network, as well as to intelligently manage the patching field.
It is possible to verify that equipment is connected correctly and to detect unauthorized connections to SCS ports. Pre-terminated cables make it possible to reconfigure the SCS in those areas where it is required.
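To show the idea behind such physical-layer monitoring, here is a minimal sketch: it compares a documented patching map against what the intelligent SCS reports. This is not the PanView iQ API; the port names and data structures are invented for illustration.

```python
# Hypothetical sketch of physical-layer monitoring: compare the documented
# patching map against the connections actually detected on SCS ports.
# This is NOT the PanView iQ API; names and data are illustrative only.

expected = {           # patch panel port -> documented far-end device port
    "PP1-A01": "SRV-017:eth0",
    "PP1-A02": "SRV-018:eth0",
    "PP1-A03": None,   # documented as unused
}

observed = {           # what the intelligent SCS currently reports
    "PP1-A01": "SRV-017:eth0",
    "PP1-A02": None,               # cable unplugged
    "PP1-A03": "LAPTOP-XX:eth0",   # unauthorized connection
}

def audit(expected, observed):
    """Yield (port, problem) pairs for every mismatch between plan and reality."""
    for port, planned in expected.items():
        actual = observed.get(port)
        if planned == actual:
            continue
        if planned is None:
            yield port, f"unauthorized connection to {actual}"
        elif actual is None:
            yield port, f"expected link to {planned} is missing"
        else:
            yield port, f"patched to {actual} instead of {planned}"

for port, problem in audit(expected, observed):
    print(f"{port}: {problem}")
```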
Data Center Power Supply
The TSC's own transformer substation houses four independent transformers, which feed the main input distribution boards (VRU-1 through VRU-4) of the data center.
The total allocated power at the data center is 4.7 MW.
Eight independent feeders run from the transformer substation to the TSC's main switchgear and, through the guaranteed power supply system based on diesel-rotary uninterruptible power supplies (DRUPS), feed eight main distribution boards of the uninterruptible power supply and four auxiliary boards of the guaranteed power supply system. The power system is built according to the 2N scheme: the cables enter the building from two different sides of the data center into two physically separated switchboard rooms.
The 2N scheme provides power supply redundancy: any feeder can be taken out of service for maintenance while the data center remains fully operational.
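As a rough illustration of what 2N means in numbers, here is a minimal sketch. The per-feeder capacities are hypothetical; the only figure taken from the project is the 4.7 MW of allocated power.

```python
# Minimal sketch of a 2N check: either of the two independent power paths
# must carry the full load on its own. Feeder capacities are hypothetical;
# the 4.7 MW allocated power figure is the only number from the project.

ALLOCATED_POWER_MW = 4.7

paths = {
    "A": 4 * 1.25,   # e.g. four feeders of 1.25 MW each on side A
    "B": 4 * 1.25,   # mirrored capacity on side B
}

def is_2n(paths, load_mw):
    """True if the load survives the complete loss of any single path."""
    return all(
        sum(cap for name, cap in paths.items() if name != lost) >= load_mw
        for lost in paths
    )

print(is_2n(paths, ALLOCATED_POWER_MW))  # True: either side alone covers 4.7 MW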
For uninterruptible and guaranteed power we installed Piller diesel-dynamic uninterruptible power supplies. Each unit provides 1 MW of uninterruptible and 650 kW of guaranteed power, which is enough to keep the data center fully operational.
In the event of a utility outage, the rotor keeps spinning by inertia and the unit continues to generate power for several more seconds. This short interval is enough for the diesel engine to start and reach operating speed, so the equipment does not stop even for a second.
The DRUPS units are arranged in an N+1 scheme. At this output power the diesel engine burns over 350 liters of fuel per hour, which required an additional external fuel storage holding more than 100 tons of diesel fuel.
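For a sense of scale, here is a back-of-the-envelope estimate of fuel autonomy. The 350 l/h consumption and the 100 t storage are from the text; the diesel density and the number of engines running at once are my assumptions.

```python
# Rough fuel-autonomy estimate for the external diesel storage.
# 350 l/h per engine and 100 t of storage are from the text;
# the diesel density and the number of running engines are assumptions.

STORAGE_TONNES = 100
DIESEL_DENSITY_KG_PER_L = 0.84      # assumed typical value
CONSUMPTION_L_PER_H = 350           # per engine, at high load

storage_liters = STORAGE_TONNES * 1000 / DIESEL_DENSITY_KG_PER_L

for engines_running in (1, 2, 4):
    hours = storage_liters / (CONSUMPTION_L_PER_H * engines_running)
    print(f"{engines_running} engine(s): ~{hours:.0f} h of autonomy")
# ~119,000 l of fuel gives roughly 340 h for one engine, ~85 h for four.
```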
The data center building has two physically separated switchboard rooms. Each houses two ASU panels of the uninterruptible power supply system and two of the guaranteed power supply system.
The dispatching system monitors the status of all circuit breakers. Utility metering devices are also installed, which makes it possible to track electricity consumption on each supply direction. To keep the auxiliary systems running without interruption, a cabinet with a static electronic bypass is installed.
The distribution system is also monitored by the dispatching system, and each line is equipped with a utility meter.
Distribution cabinets are located in the small halls; from them, lines run to each server cabinet, in which two power distribution units are installed.
In the large halls, busbar trunking is used to distribute power. Each breaker in the tap-off boxes is also monitored by the dispatching system, which gives a constantly up-to-date picture of how the data center power supply system is operating.
For power distribution directly inside the server cabinets, power distribution units (PDUs) are installed.
Hall Cooling
On the first floor, in the TSC's equipment and machine rooms, the equipment is cooled by Emerson in-row precision air conditioners (purchased earlier by the customer), with water chillers and drycoolers preparing the chilled water for them.
To cool the halls on the second floor (which carry the main load), we used air conditioners with an air-to-air recuperator.
The innovative cooling system from AST Modular comprises 44 Natural Free Cooling outdoor cooling modules serving the four machine rooms on the second floor.
The system consists of two separate circuits. In one, air from the machine room circulates; in the other, cool outdoor air is supplied through the process ventilation system. Both flows pass through a recuperative heat exchanger, where heat is transferred between the indoor and outdoor air. As a result, the equipment can be cooled with outside air for up to 90% of the year, which cuts the cost of conventional active cooling with water chillers and drycoolers by 20-30%. In summer, the air is cooled by chilled water from the chillers.
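To illustrate when the system can rely on outside air and when it has to fall back on the chillers, here is a minimal sketch of the decision. The setpoints and the heat-exchanger effectiveness are my assumptions, not project data; only the general logic (free cooling most of the year, chilled water in summer) comes from the article.

```python
# Minimal sketch of the free-cooling decision: use outside air whenever it is
# cold enough to absorb the hall's heat through the recuperative heat exchanger,
# otherwise fall back to chilled water from the chillers.
# The setpoints and heat-exchanger effectiveness are assumptions, not project data.

SUPPLY_AIR_SETPOINT_C = 22.0   # target temperature of air returned to the hall
HX_EFFECTIVENESS = 0.7         # assumed effectiveness of the air-to-air recuperator
RETURN_AIR_C = 35.0            # assumed hot-aisle return temperature

def cooling_mode(outdoor_c):
    """Choose the cooling mode for the current outdoor temperature."""
    # Temperature the recuperator can deliver given the outdoor air available.
    achievable = RETURN_AIR_C - HX_EFFECTIVENESS * (RETURN_AIR_C - outdoor_c)
    if achievable <= SUPPLY_AIR_SETPOINT_C:
        return "free cooling (outside air only)"
    return "chilled water from chillers"

for t in (-20, 5, 16, 30):
    print(f"{t:>4} °C outside -> {cooling_mode(t)}")
```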
By the start of the project we already had experience with such cooling systems: we used Natural Fresh Aircooling in VimpelCom's high-tech modular data center in Yaroslavl. But every new project brings its own complications.
To coordinate the operation of the MNFC air conditioners with the supply-air units (which condition the outdoor air and feed it into the technical corridor for the MNFC units), we had to work out the relationship between the volume of air supplied and the volume drawn off by the MNFC units. The air entering the technical corridor must not exceed a certain limit (so as not to choke the air-conditioner fans), yet must be sufficient (so that the air in the corridor does not become rarefied).
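Here is a minimal sketch of the balance check involved. All figures are hypothetical; the real interlock was implemented in the building automation, not in Python.

```python
# Hypothetical sketch of the supply/exhaust balance for the technical corridor.
# The supply units must deliver at least as much air as the MNFC units draw
# (so the corridor is not depressurized) but not so much that the corridor is
# over-pressurized and the MNFC fans are "choked". All figures are illustrative.

MNFC_UNIT_DEMAND_M3H = 30_000           # assumed airflow of one MNFC module
MAX_OVERPRESSURE_MARGIN = 1.10          # allow at most 10% surplus supply

def required_supply(active_mnfc_units):
    """Acceptable supply-air range for a given number of running MNFC modules."""
    demand = active_mnfc_units * MNFC_UNIT_DEMAND_M3H
    return demand, demand * MAX_OVERPRESSURE_MARGIN

def check_balance(supply_m3h, active_mnfc_units):
    low, high = required_supply(active_mnfc_units)
    if supply_m3h < low:
        return "corridor depressurized: increase supply fan speed"
    if supply_m3h > high:
        return "corridor over-pressurized: reduce supply fan speed"
    return "balance OK"

print(check_balance(supply_m3h=310_000, active_mnfc_units=11))
```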
Ground floor refrigeration center and rooftop equipment
The chillers, pump groups, heat exchangers, and other equipment were placed in the rooms of the refrigeration center.
Water chillers and drycoolers supply chilled water to the Natural Free Cooling units on the second floor during the hot season and to the air conditioners on the first floor all year round. In winter, the water is cooled through a heat exchanger in the refrigeration center and the dry cooling towers on the roof, which makes it possible to keep the chillers off and save energy.
The project required installing a large amount of equipment on the roof of the TSC, so we practically had to build a third floor in the form of a hundred-ton frame and mount about 60 tons of equipment on it: cooling towers up to 14 m long and ventilation units taller than a person.
Of course, the cooling towers made us sweat: try lifting such a whopper onto a roof! Because of the equipment's dimensions we had to weld a special lifting beam and use a 160-ton crane with a long boom, and even that crane sagged at times.
Fire safety
For fire safety, the site is equipped with fire alarm systems, very early fire detection (an aspiration system with sensors analyzing the composition of the air), automatic gas fire suppression, and a gas exhaust system that removes the extinguishing agent residue after the suppression system has discharged.
The fire protection system works as follows: fire detectors in the machine rooms, equipment rooms, and auxiliary rooms send signals to the central automatic fire alarm station, which in turn starts the automatic gas fire suppression, the fire automation, and the notification system, depending on the fire zone.
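A minimal sketch of such zone logic is below. Requiring confirmation from two detectors before releasing gas is a common practice that I assume here for illustration; the article does not describe the site's actual trigger conditions.

```python
# Minimal sketch of zone-based fire logic. Requiring two triggered detectors
# before releasing gas is a common practice assumed here for illustration;
# the actual trigger conditions of the site are not described in the article.

from collections import defaultdict

class FireAlarmStation:
    def __init__(self, detectors_to_confirm=2):
        self.detectors_to_confirm = detectors_to_confirm
        self.triggered = defaultdict(set)   # zone -> set of triggered detector ids

    def detector_alarm(self, zone, detector_id):
        self.triggered[zone].add(detector_id)
        self.notify_staff(zone)                       # warning system starts at once
        if len(self.triggered[zone]) >= self.detectors_to_confirm:
            self.release_gas(zone)                    # suppression only in that zone

    def notify_staff(self, zone):
        print(f"ALARM in zone {zone}: evacuation notification started")

    def release_gas(self, zone):
        print(f"Zone {zone}: gas suppression released")

station = FireAlarmStation()
station.detector_alarm("machine room 3", "ASD-07")    # aspiration detector
station.detector_alarm("machine room 3", "SD-112")    # second detector confirms
```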
The main fire alarm equipment is ESMI (Schneider Electric). It covers all the fire safety tasks, and its status is displayed at the operator's workstation in the control room.
Automation and data center dispatching system
In most data centers, only local automation with a minimal set of monitoring functions is installed; here, all the subsystems are integrated into a single technical solution with a common monitoring system. The dispatching system tracks the status of more than 10 systems and displays more than 20 thousand parameters. Want to know the temperature and humidity in the machine rooms, or the power consumption of each IT rack? No problem.
The monitoring and dispatching solution combines a SCADA system with a data center monitoring and management system. The SCADA system (Supervisory Control And Data Acquisition) lets you view data and control the automation of the installed equipment: chillers, air conditioners, diesel-rotary UPS, ventilation, pumps, and so on.
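As an illustration of what supervisory polling looks like in principle, here is a generic sketch. It is not the actual SCADA configuration; the device names, parameters, and thresholds are hypothetical.

```python
# Generic illustration of supervisory polling, not the site's actual SCADA.
# Device names, parameters, and thresholds are hypothetical.

import random

DEVICES = {
    "chiller-1":  {"param": "chilled water temp, C", "limits": (5, 12)},
    "drups-2":    {"param": "output voltage, V",     "limits": (390, 420)},
    "crac-2f-07": {"param": "supply air temp, C",    "limits": (18, 24)},
}

def read_value(device):
    # Stand-in for a field-bus read (Modbus/BACnet/SNMP in a real system).
    low, high = DEVICES[device]["limits"]
    return random.uniform(low - 2, high + 2)

def poll_once():
    for device, meta in DEVICES.items():
        value = read_value(device)
        low, high = meta["limits"]
        status = "OK" if low <= value <= high else "ALARM"
        print(f"{device:>11} {meta['param']:<24} {value:6.1f}  {status}")

poll_once()          # in a real dispatching system this runs continuously
```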
The data center resource accounting system keeps a complete inventory of all the equipment installed in the racks. It displays alarm messages in real time and shows equipment status on a detailed floor plan. Thanks to this level of detail, we can see any device located in the data center.
The system can simulate emergency situations and give recommendations for resolving them; it also helps plan and optimize the use of the data center's technical resources.
It also monitors and controls the ventilation systems (including the temperature of the air they supply, since they take part in the process of cooling the IT equipment). All parameters of the air conditioners, ventilation units, and cooling systems are taken into account.
The system monitors and controls the status of the equipment (including the various valves) and additionally protects the equipment from critical operating conditions.
It monitors the status of all the main protective devices in the distribution cabinets, as well as the power consumption on these lines.
The system also keeps track of the equipment installed in the IT racks and stores information about its ownership. Data from the fire protection and building security systems flows into it as well.
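To show what such rack-level accounting looks like as data, here is a minimal sketch of an inventory record; the fields and values are invented for illustration and do not reproduce the actual system.

```python
# Hypothetical sketch of a rack inventory record, the kind of data a resource
# accounting system keeps for every device. Fields and values are illustrative.

from dataclasses import dataclass

@dataclass
class RackDevice:
    rack: str            # e.g. hall / row / rack number
    unit: int            # position in the rack, in U
    height_u: int
    model: str
    owner: str           # which resident the equipment belongs to
    power_kw: float

inventory = [
    RackDevice("2F-A-12", 1,  2, "HP FlexFabric 5900", "technopark core", 0.4),
    RackDevice("2F-A-12", 5, 10, "blade chassis",      "resident #42",    6.5),
]

def rack_load(inventory, rack):
    """Total declared power of a rack, to compare against the 7 kW average / 20 kW max."""
    return sum(d.power_kw for d in inventory if d.rack == rack)

print(f"Rack 2F-A-12 load: {rack_load(inventory, '2F-A-12'):.1f} kW")
```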
The building is managed with solutions built on Schneider Electric equipment. The automated system for dispatching, monitoring, and managing the engineering systems provides the dispatcher with all the necessary information at his workstation.
The systems are interconnected through the technological segment of the data network. The software runs on virtual machines, and data backup is provided.
For security, video surveillance, and access control, a centralized Schneider Electric solution was also deployed in the data center. It feeds the necessary information to the data center dispatching system.
All the equipment I have described is assembled in a fault-tolerant configuration and housed in modular Smart Shelter rooms, which reliably protect the TSC equipment from fire, moisture, vibration, and other external influences. This decision helped shorten the project: on average, assembling one room took two weeks.
Through fire, water and... the Uptime Institute exam
To pass the exam for compliance with Tier III Design and Tier III Facility, we had to prepare all the documentation in accordance with the Uptime Institute's requirements, have it certified, and then install and commission all the systems exactly according to that documentation. In the final stage we put the data center systems through the Tier III Facility compliance tests, and that is about 50 tests!
Preparing for the tests was not easy: while certification of the documentation is straightforward enough, it was extremely hard to find information on how certification of the facility itself is carried out (at the time, the Zhiguli Valley data center turned out to be only the seventh certified data center in Russia).
A lot of effort went into the internal audit (load tests had already been carried out for the customer). For two weeks we checked everything, item by item, against the project documentation and the test procedures received from the Uptime Institute, and still, of course, we were nervous during the certification.
When the Uptime Institute specialists arrived at the data center and began inspecting the halls, I drew an important conclusion for the future: the team should have a single leader who takes over all communication with the "examiners". Who that person will be should be decided at the team-formation stage. We tried to do exactly that, and yet during the tests some of our employees tried to take the initiative. Noticing these side discussions, the Uptime Institute representatives immediately asked additional questions.
The first day was introductory: the inspector checked the data center against the design documentation.
The next day, for the Uptime Institute's key tests, we had to load the data center in full, both thermally and in terms of power consumption. The load was simulated with specially installed heat guns rated at 30-100 kW.
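For a quick sense of the scale of that simulated load: the design average of 7 kW per rack and the rack count are from the article, while the mix of heat guns below is an assumption.

```python
# Rough estimate of the simulated load during Facility testing.
# Rack count and average power are from the article; the heat-gun mix is assumed.

RACKS = 326
AVG_RACK_KW = 7
design_load_kw = RACKS * AVG_RACK_KW        # 2282 kW of IT load to imitate

for gun_kw in (30, 100):
    print(f"{gun_kw} kW guns: about {design_load_kw / gun_kw:.0f} units needed")
# i.e. roughly 23 of the largest guns, or ~76 of the smallest, for the full load.
```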
In some tests the equipment went into critical modes, and there were concerns that the systems would not hold up. To protect the equipment from possible consequences (better safe than sorry), we invited engineers from the equipment manufacturers.
Everything was monitored in real time from the control room by one of the Uptime Institute inspectors, Fred Dickerman. The automated dispatching system and numerous sensors made it possible to track the current state of the equipment and the temperature changes in the machine rooms.
The certification results showed that we had chosen the right technical solutions from the start and that everything had been installed and commissioned correctly. According to the experts, this data center was the only one in Russia to pass the Uptime Institute audit without a single remark.
Want to be part of our team? We have vacancies for you!
And finally, a video about our project in the Zhiguli Valley.