The architecture of a highload project, using a web consultant service as an example

    Our team does remote server administration, and not long ago representatives of the WebConsult service approached us with the task of building an easily scalable server architecture that can withstand heavy loads. We thought this might interest Habrahabr users who deal with the administration of highload projects. The project turned out to be fast-growing, and the existing infrastructure was already running at its limit, so we had to launch the new one on an accelerated schedule.

    To do this quickly, we distributed the tasks among all of our administrators as far as possible. For coordination we used Redmine, a project and task management system, where we also kept in touch with the project's developers, who quickly answered our questions and made the necessary changes to the code.
    To understand the task before us, let's look at how the project works. WebConsult is a system that lets site visitors contact a store or company manager and ask questions in real time. The managers, for their part, consult in one of two ways: through a panel open in the browser or through any familiar Jabber client. Visitors to sites where the system is installed see a consultant call button, which is loaded from the WebConsult server (which means the speed of the service is very important). Since the button is loaded several million times a day, it creates a significant load on the web servers. The other major part of the load comes from the operators themselves, most of whom consult through the web client.

    Balancing

    Balancing scheme

    Let's move on to the architecture we built. At the base of everything is DNS-level balancing (round-robin), which distributes traffic across two front servers (this is also convenient when one of the servers is briefly unavailable: modern browsers automatically retry the request against the second one). The fronts, in turn, use nginx to proxy traffic to the HTTP servers: web1, web2, web3, and so on.
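
    For clarity, here is a minimal sketch of what the proxying part of a front's nginx config might look like (the upstream name and hostnames are our placeholders, not the project's actual ones):

        upstream backend {
            # the pool of HTTP servers the fronts proxy to
            server web1.internal;
            server web2.internal;
            server web3.internal;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://backend;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }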

    Virtualization

    For each individual task we tried to create a separate OpenVZ container, both to isolate the logical parts of the project from each other and to be able to quickly move a container to a separate physical server. Why we chose this particular virtualization, we described in one of our previous articles.
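
    As a rough illustration, creating such a per-task container with the standard OpenVZ tools looks something like this (the CTID, OS template, hostname, and address are placeholders):

        # create a container from an OS template, assign it a name and an IP, and start it
        vzctl create 101 --ostemplate centos-6-x86_64
        vzctl set 101 --hostname web1.internal --ipadd 10.0.0.11 --save
        vzctl start 101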

    NGINX Web Server

    Loading and serving static files

    To save traffic and optimize the load, all static content passing through is cached by nginx. The previous architecture used NFS to store files and source code, but we decided to abandon it because it created additional load. Instead, we began to catch requests for static files (consultant avatars, company logos) and direct their uploads to a single server. nginx is configured so that if it does not find an image locally, it looks for it on the other web servers via try_files $uri @servers, with the list of web servers specified in the named location @servers. The static files are also synchronized with the other servers several times a day, which makes them available locally.
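
    A sketch of that fallback logic in nginx terms (the paths, upstream name, and hostnames here are our assumptions):

        location /uploads/ {
            root /var/www/webconsult;
            # serve the file locally if present, otherwise ask the other web servers
            try_files $uri @servers;
        }

        location @servers {
            proxy_pass http://static_backends;
        }

        upstream static_backends {
            server web2.internal;
            server web3.internal;
        }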

    Jabber

    An important part of the WebConsult project is working with clients on the site through Jabber. The XMPP server is Openfire with a number of self-written modules that integrate it with the project's internal structure and relay messages from Jabber to HTTP and back. For the customers' convenience, we needed to make it possible to connect to the Jabber server not only through the direct domain jabber.consultsystems.ru but also through the main domain consultsystems.ru, which required port proxying. We decided to replace the previously used rinetd with a more advanced solution and applied HAProxy, which forwards ports 5222 and 5223 in TCP mode.
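
    A minimal haproxy.cfg sketch for this kind of TCP forwarding (the backend address is a placeholder; when no port is given on the server line, HAProxy connects to the same port the client used, so each port is forwarded to its counterpart):

        listen xmpp
            bind :5222,:5223
            mode tcp
            timeout client 1h
            timeout server 1h
            # no port on the server line: 5222 goes to 5222, 5223 to 5223
            server openfire 10.0.0.20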

    For developers

    Since WebConsult is a project that is constantly being refined and expanded, we needed a solution that would let developers see their changes on the server in their own sandbox, so that customers notice them only after everything has been tested and the final deployment is complete. For this, a separate OpenVZ container with a minimal amount of allocated resources was created, running a version of the site for testing and debugging. Git was chosen as the version control system, as a familiar and modern solution. As a result, developers can introduce new features, test them on a separate server, and, once everything is ready, deploy the updated version of the project to production, which virtually eliminates failures and errors in the working copy of the project. For convenience, we also wrote a small plugin for Redmine.
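
    The day-to-day flow, under remote names we have assumed for illustration, might look like this:

        # push changes to the sandbox container and test them there
        git push sandbox master

        # once everything is verified, deploy to production
        git push production master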

    Database

    As the database server, we chose Percona Server, a higher-performance build of the MySQL server.
    Memcached is used for the central storage of user sessions. Access to it is open only to the web servers, restricted by iptables rules.
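
    The restriction itself comes down to a few iptables rules on the memcached host, roughly like this (the addresses are placeholders for the web servers):

        # allow the web servers to reach memcached, drop everyone else
        iptables -A INPUT -p tcp --dport 11211 -s 10.0.0.11 -j ACCEPT
        iptables -A INPUT -p tcp --dport 11211 -s 10.0.0.12 -j ACCEPT
        iptables -A INPUT -p tcp --dport 11211 -j DROP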

    Scaling

    As a result, we have a structure that can easily be scaled horizontally: new web servers are added by "cloning" an existing OpenVZ container and transferring it to a new machine. As for the scalability of the database, at the moment one powerful server copes with everything, and replication will be applied in the future. Some of the "heavy" functions for which MySQL is currently used will also be moved to NoSQL solutions such as Redis, which will significantly reduce the load on the servers. Another very important task at the moment is moving chat to WebSockets, in order to stop making regular HTTP requests and instead keep a persistent connection to the server; work in this direction is already underway.
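
    Cloning a container into a new web server, or moving one to fresh hardware, is a short sequence with the standard OpenVZ tools (the CTIDs, paths, and hostnames are placeholders):

        # clone an existing container into a new one
        vzctl stop 101
        cp -a /vz/private/101 /vz/private/102
        cp /etc/vz/conf/101.conf /etc/vz/conf/102.conf
        vzctl set 102 --hostname web4.internal --ipadd 10.0.0.14 --save
        vzctl start 101 && vzctl start 102

        # or transfer a container to a new physical machine without stopping it
        vzmigrate --online newnode.internal 102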

    Thank you for your attention. If you have questions or suggestions, we will be happy to discuss them in the comments.
