Rutube 2009-2015: the history of our hardware
Seven years have passed since Rutube became part of Gazprom-Media Holding and a new stage in the project's development began. In this article we will talk about what the project looked like when we received it at the end of 2008 and how it has changed over these 7 years in terms of hardware. Under the cut you will find a fascinating story and lots and lots of pictures (careful, traffic!), so click on Fichu (our office cat) and let's go!
Start
At the end of 2008, Gazprom-Media Holding acquired Rutube: its code and its infrastructure. The technical team, which at that time consisted of a technical director, a system administrator, and a support specialist ("The computer is asking me to press 'any key', where is that?"), got at its disposal several racks of equipment in the M10, COMSTAR-Direct, and Kurchatov Institute data centers.
The racks looked something like this:
We recall the M10 data center with a certain longing: quick-release rails could be installed there only with pliers and some light tapping with a hammer. On the other hand, Supermicro bolt-mounted rails held perfectly in those racks, and the racks themselves could withstand being filled to the brim with UPS units.
And the placement of the racks in the COMSTAR-Direct data center was something else: the rear door could not open fully because it hit the wall, so to reach the rails on the hinge side of a rack we had to take the door off entirely. Even this valuable experience left us with a kind of nostalgia!
The equipment consisted of HP ProLiant DL140 G3 and HP ProLiant DL320 G5 servers, as well as Supermicro servers based on the PDSMU and X7SBi motherboards. Switching was handled by Allied Telesis and D-Link gear.
By the way, some of this equipment has already been decommissioned and sold, and some is still up for sale: feel free to get in touch!
Development
Almost immediately it became clear that the existing capacity was not enough for the project's growth, and we decided to purchase several dozen Supermicro servers based on the X7DWU motherboard. Cisco Catalyst 3750 switches served as the network component. Starting in early 2009, we installed this equipment in the new Synterra data center and in M10.
Content storage began to migrate to an industrial storage system. The choice fell on NetApp: FAS3140 controllers with DS14 disk shelves. Later, the storage was expanded with FAS3170 and FAS3270 series controllers using the more modern DS4243 shelves.
By the summer of 2009 an "unexpected" problem had emerged: since no one was specifically responsible for maintaining the data centers, everyone who installed hardware there or patched a connection felt like a guest rather than a host. Hence the jungle of cables and the randomly scattered servers.
We decided to assign responsibility for this area (a hundred servers, a dozen racks and switches) to a dedicated employee. Since then the infrastructure has grown to five hundred servers and several dozen switches and racks, and that single employee has grown into a department of three people.
In parallel, we purchased new network equipment; the choice fell on Juniper (EX8208, EX4200, EX3200, and EX2200 switches and an MX480 router). In the fall of 2009, once the new equipment arrived, we carried out large-scale work in the Synterra data center to put things in order and commission the new gear with minimal service interruption.
We installed the new network equipment and ran in the elements of the new structured cabling system (SCS); at that time we were still punching down patch panels ourselves.
We strung up a garland of temporary patch cords to minimize service interruptions.
As a result, we arrived at this kind of order. The End-of-Row scheme works, but it has obvious drawbacks; a few years later, as the amount of equipment grew, we switched to a Top-of-Rack scheme.
The final migration to the new equipment took place on November 4, the Day of National Unity.
At the end of 2009 we launched a site at the M9 data center. The main goal was to gain access to the hundreds of carriers present at the "Nine" (even now there is no real alternative to this facility in Moscow). Here we installed a Juniper MX480 router, Juniper EX4200 and EX2200 switches, and brand-new Dell PowerEdge R410 servers.
Juniper MX480
Juniper EX2200, EX4200
Back then the 52U racks at M9 seemed bottomless; now we barely fit into them.
At that time we took delivery of servers not at the data center but at the office, where they were tested and given their initial configuration before being shipped off to the data center.
A cozy, spacious server room with no windows and no air conditioning, which, as a bonus, also housed a procurement manager who constantly offered to have lunch with a "stick" for company.
Starting in 2010 we grew actively: new projects, new equipment, new racks in the data centers. By mid-2011 colleagues noticed that the employee in charge of hardware and data centers no longer appeared at the office even on payday (fortunately, salaries go straight to the card). We missed you!
A minute of fame (I realize I am writing this more for myself than for Habr)!
But nobody was planning to slow down. In the new M77 data center we launched a new project (NTVPLUS.TV) and started building a second core for RUTUBE.RU, so that if the main data center went down, Rutube would keep working.
A small batch of Sun Fire X4170 x64 servers.
Juniper EX8216, EX4200, EX2200 switches and a bit of NetApp.
The next contest: manage to crimp 100,500 patch cords before the project launch.
The SCS is complete and the data center is launched.
And here a NetApp FAS3170 with DS4243 shelves is gradually being filled with content.
Meanwhile, our system administrators are finishing the setup of the Sun Fire X4170 x64 servers.
And our "cable guy" finishes restoring beauty (a.k.a. order).
The year 2011 began with further expansion of the second core in the M77 data center: we received a new batch of Dell PowerEdge R410 servers and, as part of a new project with a technology partner, servers based on the Quanta platform.
10G switches appeared more and more often in the network infrastructure; the Extreme Summit X650-24x was the first of them, followed by the more interesting Extreme Summit X670-48x.
This is exactly what we were missing in childhood to build our own cardboard house.
Barely catching our breath after finishing the work in the M77 data center, we moved on to the Synterra data center, where we had to commission a Juniper EX8216 in place of the EX8208 (more line cards were needed to connect carriers and servers).
At the same time, we began installing our first DWDM system (an active one), connecting the three main data centers, M9, Synterra, and M77, over dark fiber. Here we were helped by a domestic manufacturer, T8.
Juniper EX8216 and DWDM
In 2012, a department responsible for data centers and hardware appeared (that is, instead of one employee there were now two). Before that, of course, the work was not done by one person alone: the network and system administrators helped actively. Since then the department has been trying to balance order, unification, and beauty against the operational work that comes with project development.
The project today
A new stage of development began in 2014, when we started replacing the storage systems and optimizing the server infrastructure by launching new caching servers, and (already in 2015) replaced all the core network equipment, since the old gear no longer met our needs.
The NetApp storage served us faithfully for 5 years. During that time we realized that maintaining and expanding it cost disproportionately more than any other subsystem. We began looking for a more rational solution, and the search ended with the phased rollout of a storage system of our own design (the transition started in early 2014 and finished in the fall of 2015). The storage now consists of 12-disk servers (Supermicro, Quanta) and software written by our developers. This turned out to be a great solution for us; the NetApp gear has since been taken off support, and we use part of it as storage for various internal technological needs.
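Our actual storage software is in-house and is not described here, but to give a general sense of the idea behind spreading content across a pool of 12-disk storage nodes, below is a minimal sketch using consistent hashing. The node names, replica count, and hashing scheme are purely illustrative assumptions, not our real implementation.

```python
# Minimal sketch of content placement over storage nodes (illustrative only;
# node names, replica count and hashing scheme are assumptions, not our real code).
import hashlib
from bisect import bisect_right

class HashRing:
    def __init__(self, nodes, vnodes=100, replicas=2):
        self.replicas = replicas
        # Each node gets many virtual points on the ring for an even spread.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def locate(self, content_id: str):
        """Return the storage nodes that should hold this piece of content."""
        start = bisect_right(self.keys, self._hash(content_id)) % len(self.ring)
        chosen, i = [], start
        while len(chosen) < self.replicas:
            node = self.ring[i % len(self.ring)][1]
            if node not in chosen:
                chosen.append(node)
            i += 1
        return chosen

ring = HashRing([f"storage{n:02d}" for n in range(1, 13)])
print(ring.locate("video/123456.mp4"))  # e.g. two replica nodes for this file
```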
At the beginning of 2014 we decided to modernize the caching system, which at that time consisted of about a hundred servers with 4 gigabit interfaces each and a hybrid disk subsystem (SAS + SSD).
We decided to split off the servers delivering "hot" (actively watched) content into a separate cluster. These were Supermicro servers on the X9DRD-EF motherboard with two Intel Xeon E5-2660 v2 processors, 128 GB of RAM, a 480 GB SSD, and 4 Intel X520-DA2 network cards. We established experimentally that such a server delivers 65-70 Gbit/s without any problems (the maximum we saw was 77 Gbit/s).
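As a rough sanity check (our own back-of-the-envelope arithmetic, not a benchmark from the original setup): each Intel X520-DA2 is a dual-port 10GbE card, so four of them give 80 Gbit/s of raw NIC capacity per server, which means 65-77 Gbit/s of real traffic is already close to the practical ceiling.

```python
# Back-of-the-envelope NIC capacity for a "hot" cache node (illustrative arithmetic only).
cards = 4                # Intel X520-DA2 cards per server
ports_per_card = 2       # each X520-DA2 is dual-port
port_speed_gbit = 10     # 10GbE per port

raw_capacity = cards * ports_per_card * port_speed_gbit   # 80 Gbit/s of raw NIC bandwidth
observed_peak = 77                                        # Gbit/s maximum observed

print(f"raw NIC capacity: {raw_capacity} Gbit/s")
print(f"peak utilisation: {observed_peak / raw_capacity:.0%}")  # ~96% of line rate
```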
In mid-2014 we replaced the active DWDM with a passive one. This allowed us to greatly increase its capacity and to start spreading carriers connected at one data center across other sites, reducing our dependence on the failure of any particular border device.
By the end of 2014 we launched a new cluster for "cold" content, which replaced the remaining old servers with their aggregated 4 Gbit/s links. Once again we chose Supermicro and the X9DRD-EF motherboard, this time with two Intel Xeon E5-2620 v2 processors, 128 GB of RAM, 12 × 960 GB SSDs, and 2 Intel X520-DA2 network cards. Each node of this cluster can sustain a load of up to 35 Gbit/s.
Naturally, it is not just a matter of well-chosen hardware: credit also goes to the excellent self-written segmentation modules from our miracle of a system architect and the video balancer built by the development team. Work on finding out the platform's maximum capabilities continues; there are still free slots for SSDs and network cards.
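The balancer itself is in-house software and is not shown here; purely as an illustration of the idea, below is a minimal sketch of a load-aware chooser that sends a viewer to the least-loaded cache node that already holds the requested content. The node structure, field names, and thresholds are assumptions made for the example.

```python
# Illustrative sketch of a load-aware video balancer (not our production code;
# node structure, fields and thresholds are assumptions for the example).
from dataclasses import dataclass, field

@dataclass
class CacheNode:
    name: str
    capacity_gbit: float                     # e.g. ~70 for a "hot" node, ~35 for a "cold" one
    current_gbit: float = 0.0
    content: set = field(default_factory=set)

    @property
    def load(self) -> float:
        return self.current_gbit / self.capacity_gbit

def pick_node(nodes, content_id, max_load=0.9):
    """Prefer the least-loaded node that already caches the content."""
    candidates = [n for n in nodes if content_id in n.content and n.load < max_load]
    if not candidates:                                        # cache miss or all holders busy:
        candidates = [n for n in nodes if n.load < max_load]  # fall back to any available node
    return min(candidates, key=lambda n: n.load) if candidates else None

nodes = [
    CacheNode("hot-01", 70, 40, {"video/1.mp4"}),
    CacheNode("hot-02", 70, 20, {"video/1.mp4", "video/2.mp4"}),
    CacheNode("cold-01", 35, 5, {"video/2.mp4"}),
]
print(pick_node(nodes, "video/1.mp4").name)  # -> hot-02 (has the content, lower load)
```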
The year 2015 was marked by the replacement of all the core network equipment, including the move from hardware load balancers to software ones (Linux on x86). The Juniper EX8216 switches, most of the EX4200s, and the Extreme Summit X650-24x and X670-48x gave way to Cisco ASR 9912 routers and Cisco Nexus 9508, Nexus 3172PQ, and Nexus 3048 switches. The evolution of our network subsystem is, of course, a topic for a separate large article.
After the replacement of the old servers and the network gear, the racks once again do not look as tidy as we would like. In the foreseeable future we will finish putting things in order and publish a colorful article with photos of the shape we are in as we enter 2016.