A big photo tour of the 1cloud Moscow cloud
This is a photo tour of the DataSpace data center in Moscow, which houses our equipment. We show the data center's security systems, the server racks, and the hardware itself.
Warning: there are a lot of photos under the cut.
At 1cloud, we have been providing virtual infrastructure rental services since 2012. The project started in St. Petersburg, but over time we have expanded geographically. Today our hardware is located in four data centers: Xelent (St. Petersburg), DataSpace (Moscow), Ahost (Almaty, Kazakhstan) and beCloud (Minsk, Belarus).
All of these data centers house roughly the same equipment. Below, we show how it is arranged in the Moscow facility.
Data Center Security
Let's start the tour at the outer perimeter of the DataSpace data center. The building is surrounded by a five-meter fine-mesh fence topped with rotating spikes and protected against digging underneath. Entrances to the grounds are closed off with full-height turnstiles, with security posts next to them.
To get "over the fence", you need to present a contactless key and pass identification at the checkpoint. If an employee has forgotten his ID (passport), he will have to go after him home.
The next "frontier" - armored airlock cabins in the reception area. In the booths are access card readers and palm print scanners.
Between the booths you can see two Tier III plaques. DataSpace is the first data center in Russia to pass all three stages of Tier III certification from the Uptime Institute.
The photo above shows a retina scanner. These are installed at the entrance to every room housing critical engineering infrastructure. In addition to the scanners, access is also controlled by electronic keys.
Video surveillance covers the entire data center. More than a hundred cameras are installed across the complex, and their recordings are kept for at least five years.
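For a sense of scale, here is a rough, purely illustrative estimate of such a video archive. The camera count and retention period come from the paragraph above; the 1 Mbit/s average bitrate is an assumption made up for the example.

```python
def archive_size_tb(cameras: int, avg_mbit_per_s: float, years: float) -> float:
    """Approximate archive size (decimal TB) for continuous recording from all cameras."""
    seconds = years * 365 * 24 * 3600
    total_bits = cameras * avg_mbit_per_s * 1e6 * seconds
    return total_bits / 8 / 1e12  # bits -> bytes -> terabytes

# 100 cameras and 5 years come from the text; 1 Mbit/s per camera is an assumed average.
print(round(archive_size_tb(cameras=100, avg_mbit_per_s=1.0, years=5)))  # ~1971 TB
```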
The data center can also provide a dedicated, physically protected space (a cage).
Our equipment sits in standard DataSpace racks, which we rent in their entirety. We have a large amount of equipment, so only our specialists have access to it.
Power and Cooling
One of the most noticeable details in the hall is the grounding: a copper bus connected to an external ground loop.
Per the data center's requirements, all equipment that needs a separate ground line for normal operation is grounded in accordance with GOST 12.1.030-81 (the occupational safety standards system). Connecting equipment to the power supply network and to the protective earthing buses is done exclusively by data center staff. All wires are labeled.
Chaotic cable routing obstructs airflow, which can cause equipment to overheat and fail. To prevent overheating (and to meet the corporate quality standards for the work performed), cables at DataSpace are laid in dedicated trays. As a bonus, effective cable management helps cut the energy consumption of the air conditioning systems.
As standard, each rack is fed by two independent protected power feeds. Power cables run under the 60 cm raised floor. The floor tiles can be rearranged, which makes it possible to reposition racks of equipment if necessary.
The machine rooms are organized into cold aisles: equipment draws cooling air from the cold aisle and exhausts it toward the hot aisle. The cold aisle is isolated from the hot one to keep the air streams from mixing and, again, to reduce air conditioning costs.
Incidentally, the energy efficiency metric of the DataSpace data center, known as PUE (Power Usage Effectiveness), is 1.5. PUE is the ratio of the total power consumed by the data center to the power consumed by the hardware in the racks; for comparison, the average PUE in Europe is 1.8. Besides cable management and cold aisles, free cooling helps achieve this figure: in winter the data center shuts off its chillers and cools the machine rooms with cold outside air, significantly cutting energy waste and electricity bills.
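For illustration, here is a minimal sketch of the PUE calculation described above; the wattage figures are placeholders chosen only to reproduce the 1.5 and 1.8 ratios, not actual DataSpace measurements.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical loads, picked only to reproduce the ratios mentioned in the text.
print(pue(total_facility_kw=1500, it_load_kw=1000))  # 1.5 - the DataSpace figure
print(pue(total_facility_kw=1800, it_load_kw=1000))  # 1.8 - the average European figure
```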
Racks and Equipment
The data center has twelve server rooms of 250 square meters each. They are identical and are located on the second, third and fourth floors. Each room holds about 88 racks.
Each customer has its own keys to its equipment racks. DataSpace staff keep a duplicate, so for simple hardware operations such as replacing a disk, re-patching a connection or reconfiguring a device, you do not have to send your own specialist. You can ask the on-site engineers to do it; this is standard practice and is provided as part of the data center's customer support.
The photo below shows a lifting table used to move servers, switches and other equipment. You cannot do without such "carts": a single chassis full of disks can weigh tens of kilograms. When a large hardware delivery arrives, data center staff can lend out these carts (and other tools).
Next, the hardware itself. Let's start with the NetApp E2860 storage system. It supports a maximum of 60 disks; so far 30 slots are occupied, populated with 8 TB drives. We also have an earlier storage model, the NetApp E2760.
These are fairly slow systems, so they are used as cold storage: that is where the backups live.
The controllers and power supply units are located at the back of the NetApp E2860.
Another of the installed storage systems is the NetApp FAS2554 with a DS4246 expansion shelf. It holds 24 SATA disks of 4 TB each. This storage is also reserved for backups and "slow" datastores.
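As a back-of-the-envelope check, the raw capacity of the two shelves described above is simple multiplication; these are figures before any RAID or formatting overhead, so usable space will be noticeably lower.

```python
def raw_capacity_tb(disk_count: int, disk_size_tb: float) -> float:
    """Raw shelf capacity in decimal terabytes, before RAID and filesystem overhead."""
    return disk_count * disk_size_tb

print(raw_capacity_tb(30, 8))  # NetApp E2860 as currently populated: 240 TB raw
print(raw_capacity_tb(24, 4))  # FAS2554 with the DS4246 shelf: 96 TB raw
```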
Cisco UCS FI 6296 fabric interconnects. These are the central devices responsible for connecting all the servers in a rack to the 10GbE and Fibre Channel networks. Each unit has 48 fixed ports plus slots for three expansion modules, each of which adds another 16 ports. The ports are unified: they also support FCoE (Fibre Channel over Ethernet).
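A quick sanity check of the port count mentioned above: 48 fixed ports plus three 16-port expansion modules.

```python
fixed_ports = 48
expansion_modules = 3
ports_per_module = 16

# Total unified ports on a fully expanded Cisco UCS FI 6296
print(fixed_ports + expansion_modules * ports_per_module)  # 96
```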
Connected to the Cisco UCS FI 6296 are Cisco UCS 5108 blade chassis with B200 M3, B200 M4 and B480 M5 blade servers. In particular, our B200 M3 blades carry two 10-core Intel Xeon E5-2690 v2 processors, and the B200 M4 blades carry two 20-core Intel Xeon E5-2698 v4 processors.
Here you can see the NetApp E2860 installed at the top of the rack and two Cisco UCS 5108 chassis at the bottom.
This is the NetApp AFF300 storage system, optimized specifically for SSDs. Above it is a DS224C shelf with twenty-four 7.6 TB SSDs.
In total, the NetApp AFF300 can address up to 30 PB of storage on fast drives. The system is driven by two 16-core Intel Broadwell-DE processors running at 1.70 GHz. Each controller carries four 32 GB RDIMM ECC memory modules (128 GB in total): 120 GB is allocated to the cache, and the remaining 8 GB serves as NVRAM (non-volatile memory).
The photo above shows a NetApp FAS8040 storage system with DS2246 expansion shelves. These are hybrid shelves with 1.2 TB SAS disks. FAS8000-series arrays also feature multi-core Intel CPUs and NVRAM, and they guarantee a system availability of at least 99.999%.
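To put 99.999% availability into perspective, here is a small sketch converting an availability figure into the maximum downtime it allows per year; this is a generic calculation, not a DataSpace or NetApp SLA formula.

```python
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def max_downtime_minutes(availability_percent: float) -> float:
    """Maximum yearly downtime (in minutes) implied by a given availability level."""
    return (1 - availability_percent / 100) * HOURS_PER_YEAR * 60

print(round(max_downtime_minutes(99.999), 2))  # ~5.26 minutes of downtime per year
```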
This storage system also uses SSDs, which act as a Flash Pool (an SSD cache in front of the hard-disk aggregates).
This is what the NetApp FAS8040 and the DS2246 expansion shelves look like from the back, with the interconnect cabling in place.
And finally, our racks also contain NetApp CN1610 cluster switches. They are used to join the FAS8040, FAS8080 and AFF300 controllers into a single storage cluster (Clustered ONTAP).
That concludes our photo tour of the 1cloud cloud.
If you have questions about the structure and architecture of our cloud, you can email us at support@1cloud.ru. Our experts will be happy to help and advise.