How we built the engineering infrastructure for the Fiztekh data center

    The customer for this project was the Moscow Institute of Physics and Technology (MIPT). Carrying out a large, high-profile project for the legendary PhysTech is a great honor, but also a serious responsibility.

    The work took place in the Institute's data center in Dolgoprudny, Moscow Region, the heart of all of MIPT's information systems. The computing power concentrated here is used both for scientific and educational work (modeling, calculations) and for administrative purposes (mail and communications, accounting, etc.).




    The site


    Over time, the data center's capacity became insufficient. In addition, many faculties and departments ran their own servers, which they maintained themselves. The Institute's management decided to consolidate computing power in a modernized and expanded data center, designed to house more powerful (and therefore more energy-hungry) IT equipment. The new data center had to meet modern requirements for capacity, reliability and fault tolerance.

    Our task was to deliver the engineering infrastructure systems, namely:
    • the power distribution and uninterruptible power supply system;
    • the air conditioning system;
    • the structured cabling system;
    • an automated dispatch and control system for all of the above.

    Our proposed technical solution was judged the best of the bids, we won the tender, and we could get down to business.

    General construction preparation


    We had to start by dismantling and disposing of obsolete equipment, which meant almost the entire old air conditioning, power supply and cabling (SCS) systems. Of the old equipment, only four air conditioners remained.

    On the roof of the building, we dismantled two cooling towers that had previously been used for heat rejection. We then designed, fabricated and installed rooftop steel frames to carry the new outdoor units of the cooling system.

    Next we prepared the machine room itself and the room for the hydraulic distribution unit (tanks, pumps, heat exchangers) for the installation of the new equipment.

    To protect the equipment from leaks in case of plumbing or heating accidents in the rooms above, we provided a drainage system: the stretch ceiling is installed at a slope, so any water collecting above it runs off along the plane of the ceiling and is carried away by a drain.



    Uninterruptible power supply and power distribution


    The new data center is designed for a total load of 180 kW. Computing and engineering equipment are powered separately. Computing equipment (16 server cabinets and 2 patch cabinets) accounts for 141 kW.

    For the computing equipment we implemented a 2N (N + N) level of power redundancy, using two APC by Schneider Electric Symmetra PX 160 kW modular UPSs.

    The redundancy level for the engineering systems (the main consumers here are the chiller circulation pumps and the water circuit pumps) is N + 1. They are powered by an APC by Schneider Electric MGE Galaxy 3500 20 kW modular UPS.
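
    To see how these figures fit together, here is a back-of-the-envelope check in Python. The numbers are taken from this article; the script itself is only an illustration, not part of the delivered system.

    # Illustrative power-budget check using the figures quoted in this article.
    TOTAL_DESIGN_LOAD_KW = 180.0   # design load of the whole data center
    IT_LOAD_KW = 141.0             # 16 server cabinets + 2 patch cabinets
    # Engineering share (~39 kW); only part of it (the pumps) is carried by the Galaxy 3500 UPS.
    ENG_LOAD_KW = TOTAL_DESIGN_LOAD_KW - IT_LOAD_KW
    SYMMETRA_PX_KW = 160.0         # rating of each UPS in the 2N pair

    # 2N redundancy means either Symmetra PX alone must be able to carry the full IT load.
    assert IT_LOAD_KW <= SYMMETRA_PX_KW, "a single Symmetra PX cannot carry the IT load alone"
    print(f"Engineering load: {ENG_LOAD_KW:.0f} kW")
    print(f"Headroom on a single Symmetra PX: {SYMMETRA_PX_KW - IT_LOAD_KW:.0f} kW")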

    In the event of a mains failure, the UPS batteries provide at least 15 minutes of runtime, which is enough, with margin, to start the backup power source and bring it onto the load.
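
    As a rough, illustrative estimate (ours, not a figure from the project documentation), the usable battery energy implied by that runtime is easy to work out:

    # Rough estimate of the battery energy needed for the quoted autonomy.
    # Real sizing also accounts for inverter efficiency, battery aging and discharge-rate effects.
    it_load_kw = 141.0   # protected IT load
    runtime_min = 15.0   # required autonomy
    energy_kwh = it_load_kw * runtime_min / 60.0
    print(f"~{energy_kwh:.0f} kWh of usable battery energy to ride through 15 minutes")  # ~35 kWh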

    The entire uninterruptible power supply system is designed so that it can be serviced and upgraded on the fly, without taking the whole complex out of operation.





    Air conditioning and ventilation


    An air temperature of 20-25 °C and relative humidity of 40-65% is the microclimate that must be maintained at all times in the machine room and the UPS room. This protects the equipment not only from overheating but also from failures caused by condensation or static discharge.
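
    As a minimal sketch of those setpoints in code (the thresholds come from this article; the function is ours and purely illustrative), a monitoring check might look like this:

    # Check a temperature/humidity reading against the window quoted above.
    TEMP_RANGE_C = (20.0, 25.0)
    HUMIDITY_RANGE_PCT = (40.0, 65.0)

    def climate_ok(temp_c: float, rh_pct: float) -> bool:
        """Return True if the reading is inside the allowed microclimate window."""
        return (TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]
                and HUMIDITY_RANGE_PCT[0] <= rh_pct <= HUMIDITY_RANGE_PCT[1])

    print(climate_ok(22.5, 50.0))  # True
    print(climate_ok(27.0, 50.0))  # False: too warm, risk of overheating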

    We decided to build the new air conditioning system with two circuits. Water is used as the coolant in the internal circuit, while a 40% ethylene glycol solution circulates in the external circuit. This scheme avoids two problems: the coolant freezing in pipes outside the building, and hazardous ethylene glycol being present inside the machine room.

    Let's start with the internal circuit. It consists of two subsystems:
    • an in-row air conditioning system for the server racks,
    • an air conditioning system for the UPS room.

    The machine room holds 18 server racks arranged in two rows, with their rear sides facing each other. Between them we built a "hot aisle", isolated from the rest of the room by doors and panels. Eight APC InRow RC in-row air conditioners draw hot air from this aisle and discharge cooled air into the room outside it. From there, driven by static pressure, the air moves to the front of the racks and passes through them again.
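
    To give a feel for the heat load this containment handles, here is a back-of-the-envelope breakdown based on the article's figures. The even split across racks and coolers is our simplifying assumption; real loads are never perfectly uniform.

    # Back-of-the-envelope heat budget for the hot aisle.
    it_load_kw = 141.0   # total IT load from the article
    server_racks = 16    # server cabinets (the patch cabinets dissipate very little)
    inrow_units = 8      # APC InRow RC coolers serving the aisle

    print(f"~{it_load_kw / server_racks:.1f} kW of heat per rack")   # ~8.8 kW
    print(f"~{it_load_kw / inrow_units:.1f} kW per in-row cooler")   # ~17.6 kW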

    Two Carrier fan coil units were installed in the UPS room; they supply cooled air and extract warm air. A third fan coil unit is held in reserve.

    The external circuit of the air conditioning system is served by chillers. Two Uniflair by Schneider Electric chillers (one primary, one standby) with a cooling capacity of 185 kW each were specially configured by the manufacturer for this project and installed on the roof on the purpose-built steel frames.

    At outdoor air temperatures of +5 °C and below, the chillers switch to free cooling mode: the coolant is cooled by outside air, which reduces energy consumption.
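
    The switchover logic can be pictured with a simple sketch. The +5 °C threshold comes from the article; the function and the hysteresis value are our illustrative assumptions, not the chillers' actual firmware.

    # Illustrative free-cooling switchover with a small dead band so the
    # chiller does not oscillate between modes around the threshold.
    FREECOOL_ON_C = 5.0   # at or below this outdoor temperature, free cooling is used
    HYSTERESIS_C = 1.0    # assumed dead band

    def select_mode(outdoor_c: float, current_mode: str) -> str:
        if outdoor_c <= FREECOOL_ON_C:
            return "free_cooling"          # coolant cooled by outside air
        if outdoor_c >= FREECOOL_ON_C + HYSTERESIS_C:
            return "mechanical_cooling"    # compressors do the work
        return current_mode                # inside the dead band: keep the current mode

    print(select_mode(3.0, "mechanical_cooling"))  # free_cooling
    print(select_mode(5.5, "free_cooling"))        # free_cooling (still in the dead band)
    print(select_mode(8.0, "free_cooling"))        # mechanical_cooling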

    To keep the air conditioners working at low temperatures, a winter start-up kit is provided, as well as heating of the drain outlets where condensate is discharged outside the building.

    Decontamination devices for the hydraulic circuit are installed. Shut-off valves are fitted wherever system components may need to be hydraulically isolated from the network for maintenance and repair.





    Structured cabling system


    For the newly installed cabinets, a new structured cabling system for transmitting digital and analog data was created, consisting of copper and fiber optic parts. Its architecture and performance parameters comply with a number of international ANSI standards and the Russian GOST R 53346-2008.

    The copper subsystem is built on Category 6A F/FTP cable. One 24-port patch panel with cable organizers is installed in each server cabinet; 24 F/FTP cables run from each new cabinet to each cross-connect cabinet. The subsystem is based on the Huber+Suhner LiSA Solutions modular cabling system, which only went on sale at the end of 2013; this is the first installation of the system in Russia!

    The optical subsystem: each new cabinet is connected to the main optical cross-connects by two pre-terminated 12-fiber multimode cables and is fitted with an optical cassette. The fiber optic equipment is also made by Huber+Suhner.





    Automated Dispatch System


    The system provides an operator workstation from which the engineering systems can be monitored and remotely controlled in real time. It can alert personnel to emergencies (for example, leaks), keeps an archive of process data and can generate reports. The system is based on Delta Controls modular controllers and has a three-tier architecture.

    Sensors and actuators form the lower tier of the system. Here primary data is collected from sensors (temperature, pressure, flow, electrical parameters) and the equipment is directly actuated (valves, gate valves, relays).

    The middle tier is the controllers, which receive data from the lower tier and pass it on to the upper tier. The controllers also generate control signals for the actuators according to their programmed logic.

    The upper tier is responsible for final data processing and interaction with users. At this level all data is aggregated and processed, and every event in the system, including user actions, is logged. The upper tier comprises the server hardware and the software for polling, storing and visualizing data (SCADA). The user interface presents equipment parameters and controls in a clear, intuitive form. Visualization is built with ORCAview software.
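
    To make the three-tier idea more tangible, here is a highly simplified sketch of what an upper-tier poll/log/alert cycle does conceptually. It is plain Python against an imaginary controller interface: this is not the Delta Controls or ORCAview API, and every name in it is a placeholder.

    # Conceptual sketch of an upper-tier polling and alarm cycle.
    # "poll_controller" stands in for reading a middle-tier controller;
    # the whole interface is invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        temp_c: float
        humidity_pct: float
        leak_detected: bool

    def poll_controller() -> Reading:
        """Placeholder for querying current values from a middle-tier controller."""
        return Reading(temp_c=23.1, humidity_pct=48.0, leak_detected=False)

    def archive(reading: Reading) -> None:
        """Placeholder for writing the reading to the historical archive."""
        print(f"archived: {reading}")

    def alert(message: str) -> None:
        """Placeholder for notifying the operator."""
        print(f"ALERT: {message}")

    def cycle() -> None:
        r = poll_controller()
        archive(r)                                    # every value is logged
        if r.leak_detected:
            alert("leak detected")                    # emergency notification
        if not (20.0 <= r.temp_c <= 25.0):
            alert(f"temperature out of range: {r.temp_c} C")
        if not (40.0 <= r.humidity_pct <= 65.0):
            alert(f"humidity out of range: {r.humidity_pct} %")

    cycle()  # in the real system this runs continuously, not once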



    Warranty and after-sales service


    We committed to five years of warranty and after-sales service. The agreement covers not only the new equipment installed under the project but also the four APC air conditioners the customer already had.

    The warranty service terms include visits by a service engineer for diagnostics, the supply of components and materials, and repair and restoration work. After-sales service includes scheduled preventive maintenance at least twice a year, with the necessary components and consumables.



    Results


    In parallel with the design work, equipment delivery and installation, detailed documentation was prepared in accordance with GOST requirements, describing the system and the rules and standards of its operation.

    Among the prepared documents was the "Test Program and Procedure", on the basis of which the system was tested. Testing took place under conditions as close to real operation as possible: for example, to test the cooling system in the absence of real servers, a heat gun was brought into the machine room. All tests passed, verifying the performance of all subsystems and confirming that they fully meet the requirements.

    After handover, the customer populated the data center with servers, communications and other necessary equipment and put it into full operation. Operation under real conditions has shown that the data center meets all the requirements for capacity, reliability and fault tolerance.

    The energy-saving technologies used in the project give the data center high energy efficiency:
    • the isolated "hot" aisle is one of the most cost-effective ways known to cool server equipment;
    • the Uniflair by Schneider Electric chillers with free cooling save up to 30% of annual energy consumption.

    According to preliminary estimates, the Power Usage Effectiveness (PUE) of the MIPT data center after the modernization is 1.5, which indicates good energy efficiency for the site.
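
    For reference, PUE is the ratio of the total power drawn by the facility to the power delivered to the IT equipment: PUE = P_total / P_IT. A PUE of 1.5 therefore means that for every kilowatt consumed by the servers, roughly another half a kilowatt goes to cooling, power distribution losses and other overhead.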

    The MIPT data center meets the Tier 3 reliability level of the international TIA-942 standard for data center infrastructure, which corresponds to an availability of 99.982%.



    Softline team
