Advantages of dedicated servers over cloud solutions, using the Tuffle.com server architecture as an example

We launched the first version of Tuffle on a Selectel cloud server and hosted it there for a while. To us, "cloud server" meant "pay only for the resources you actually consume and forget about scaling problems and lack of performance." But the problems made themselves felt... Since this article is not about Selectel, we will simply list the reasons why we had to look for another solution.

1. Under high traffic the server would often simply go down. We had to check the status of the site constantly so that we could react to problems quickly.

2. Limited control. Our dedicated server was at the same time a virtual one. As a result, every non-standard operation had to go through the ticket system, which is terribly inconvenient.

3. As traffic grew, the price of the server grew in proportion. That seems obvious enough, except that at a certain point the cost became completely unreasonable and cast doubt on the cost-effectiveness of cloud solutions.


And one day we were told that a DDoS attack was coming from our servers. We checked everything and the alarm turned out to be false, but an unpleasant aftertaste remained.

For all of the above reasons, we decided to set up our own server infrastructure. This is where the fun begins.

Truly dedicated servers

The solution was this:

1. Get a few "bare" standalone servers
2. Develop a horizontal scaling scheme
3. Give each server its own tasks, sometimes several at once

First of all, we rushed to find out where to get such servers with a good network channel and a price that didn't bite. In the end we settled on Hetzner and bought 8 servers at once. And you know, it turned out cheaper than what we had at Selectel.

Server requirements

To begin with, we drew up a table listing what we needed each server for, what characteristics it should have, and so on.



The vertical arrows mark the characteristics that were most important for a particular server.

But it turned out that Hetzner doesn't let you configure the hardware yourself, so all 8 of our servers ended up exactly the same, namely the EX40. By the way, it is worth noting that customers outside Europe can subtract 19% from the price, which is VAT.

We talked about scalability above. There are three main areas where the ability to scale quickly is important:

1. Application servers. Under high load they may not cope.
2. Database servers
3. Content servers

The rest is more or less stable.

Master-slave replication works well for these areas. For scaling to work, it needs to be understood, so we take a marker and draw on the whiteboard.



A user opens the site and hits the front server, which can either handle the request itself (in our case it is called the master) or pass it on to its companion (a slave server), which is no different from the master. So if we find ourselves in a situation where the resources of two servers are not enough, we just add another slave server.
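To make this concrete, here is a minimal sketch of how such a front server could be set up in Nginx (the addresses and ports below are made up for illustration, not our actual config). The client IP is hashed, so the same visitor always lands on the same application server, and scaling out is just one more server line:

```nginx
# Sketch only: pool of application servers behind the front server.
upstream app_backend {
    ip_hash;                  # pin each client IP to one application server
    server 10.0.0.1:8080;     # "master" application server
    server 10.0.0.2:8080;     # "slave" application server; add more lines to scale
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;          # hand the request to the pool
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```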

The databases follow a similar pattern.
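In MySQL, master-slave replication comes down to a few lines of configuration. The fragment below is only an illustrative sketch with assumed server IDs, not our real settings:

```ini
# master's my.cnf (sketch)
[mysqld]
server-id = 1
log_bin   = mysql-bin        # record changes in the binary log for the slaves

# slave's my.cnf (sketch)
[mysqld]
server-id = 2
relay_log = mysql-relay-bin
read_only = 1                # the slave only serves read queries
```

After that the slave is pointed at the master with CHANGE MASTER TO ... and started with START SLAVE.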

Content works a little differently. The application must know which server to upload images and videos to and which server to read them from, so here we have several master servers, and the application knows where each specific file lives. Expanding is just as easy: add a server and start uploading pictures to it, while every existing file is still read from the server it lives on.
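As an illustration of the idea (a simplified sketch, not the real Tuffle code; the class and host names are made up), the application can store, next to each file record, the id of the content server it was uploaded to:

```php
<?php
// Sketch: map of content servers known to the application (hosts are made up).
class ContentStorage
{
    private $hosts = array(
        1 => 'http://content1.example.com',
        2 => 'http://content2.example.com',   // added later when we scaled out
    );

    // New uploads go to the most recently added server.
    public function hostIdForUpload()
    {
        return max(array_keys($this->hosts));
    }

    // Reads go back to whichever server the file was originally stored on;
    // $hostId is saved in the database together with the file record.
    public function urlFor($hostId, $path)
    {
        return $this->hosts[$hostId] . '/' . ltrim($path, '/');
    }
}
```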

The project also uses Mongo; it even has its own separate server, where it feels great.

Server software

The technology stack is nothing sophisticated; we work according to the classic scheme, proven over the years. Nginx sits at the front: it serves static files and passes dynamic requests on to Apache. The code is written in PHP using the Zend Framework. The database is MySQL. Temporary storage, and in some cases permanent storage, is MongoDB.

How does the front server decide where to send a user? Nginx does this itself: it balances between the application servers based on the client's IP address, so a request from one IP will always go to the same server. Internal request balancing is handled by haproxy, which we had to install on each server; among other things, haproxy distributes requests to MySQL round-robin.
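For the MySQL part, a haproxy section like the one below is enough. This is only a sketch with made-up addresses, not our production configuration:

```haproxy
# Sketch: local haproxy endpoint that spreads MySQL queries across the servers.
listen mysql-cluster
    bind 127.0.0.1:3306       # the application connects to localhost
    mode tcp                  # MySQL is plain TCP as far as haproxy is concerned
    balance roundrobin
    server db1 10.0.1.1:3306 check
    server db2 10.0.1.2:3306 check
```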

Of course, we have listed only the foundation; a lot of other software is installed for video conversion (FFmpeg), background tasks, and so on.

Data

Still, we work with heavy content, so data backup is a very hot topic for us. We made the backups incremental, but that is not all: for our own safety, we store them on separate "third" servers. Plutov has already written on his blog about how backups are configured in Tuffle.

Monitoring

There are a lot of servers, so Munin helps us monitor them. It makes it easy to track all the parameters, both per server and across the board, and it notifies us about critical thresholds.
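Adding a machine to the picture is just a few lines in munin.conf on the monitoring host (the group and node names below are assumed for illustration); each server also runs munin-node with the monitoring host allowed in its munin-node.conf:

```
# munin.conf on the monitoring server: one entry per monitored node (sketch)
[tuffle;app1]
    address 10.0.0.1
    use_node_name yes

[tuffle;db1]
    address 10.0.1.1
    use_node_name yes
```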



Conclusion

For us, the move away from cloud solutions has been a positive one. In about six months our servers have never gone down, and Apache has never died. We wish you the same.
