Southbridge in Chelyabinsk and Bitrix in Kubernetes

    Sysadminka system administrator meetups are held in Chelyabinsk, and at the most recent one I gave a talk about our solution for running 1C-Bitrix applications in Kubernetes.


    Bitrix, Kubernetes, Ceph - a great mix?


    I’ll tell you how we put together a working solution from all this.


    Go!



    The meetup took place on April 18 in Chelyabinsk. You can read about our meetups on Timepad and watch them on YouTube.


    If you want to join us, with a talk or as a listener, you are welcome: write to vadim.isakanov@gmail.com or on Telegram at t.me/vadimisakanov.


    My report


    Video of the "Bitrix in Kubernetes" talk


    Slides


    The "Bitrix in Kubernetes, Southbridge 1.0" solution


    I will describe our solution in a "Kubernetes for dummies" format, as I did at the meetup. But I assume you know the words Bitrix, Docker, Kubernetes and Ceph at least at the level of the Wikipedia articles.


    What is already out there about Bitrix in Kubernetes?


    There is very little information on the Internet about running Bitrix applications in Kubernetes.
    I found only these materials:


    A talk by Alexander Serbul (1C-Bitrix) and Anton Tuzlukov (Qsoft):



    I recommend listening to it.


    A solution developed by the user serkyron on Habr.
    And I also found this solution.


    Aaand... that is actually all.


    Fair warning: we did not check the quality of the solutions linked above :-)
    By the way, while preparing our solution I talked with Alexander Serbul; his talk did not exist yet, so my slides still contain the line "Bitrix does not use Kubernetes".


    There are, however, already plenty of ready-made Docker images for running Bitrix in Docker: https://hub.docker.com/search?q=bitrix&type=image


    Is this enough to build a complete Bitrix solution in Kubernetes?
    No. There are quite a few problems that have to be solved.


    What are the problems with Bitrix in Kubernetes?


    First - the ready-made Docker Hub images are not suitable for Kubernetes


    If we want a microservice architecture (and in Kubernetes we usually do), the application needs to be split into containers, with each container performing one small function (and doing it well). Why only one? In short: the simpler, the more reliable.
    For more detail, please see this article and video: https://habr.com/en/company/southbridge/blog/426637/


    The Docker Hub images are mostly built on the "all in one" principle, so we had to reinvent the wheel anyway and build our own images from scratch.


    Second - the site code is edited from the admin panel


    Create a new section on the site - and the code changes (a directory named after the new section is added).


    Change a component's properties from the admin panel - and the code changes again.


    Out of the box, Kubernetes does not know how to work with this: containers are expected to be stateless.


    The reason: each pod in the cluster handles only part of the traffic. If the code is changed in only one pod, the pods end up with different code, the site behaves inconsistently, and different users are shown different versions of the site. You can't live like that.


    Third - you need to solve the issue of deployment


    If we have a monolith and one "classic" server, everything is very simple: we deploy the new code base, migrate the database, and switch traffic to the new version of the code. The switch happens instantly.
    If we have a site in Kubernetes, it is split into microservices and there are many containers with code. We have to build containers with the new version of the code, roll them out in place of the old ones, execute the database migration correctly, and ideally do all of this invisibly to visitors. Fortunately, Kubernetes helps here by supporting a whole range of deployment strategies.


    Fourth - you need to solve the problem of storing static files


    If your site weighs "only" 10 gigabytes and you ship all of it in containers, you end up with 10-gigabyte containers that take forever to deploy.
    The "heaviest" parts of the site need to be stored outside the containers, and the question is how to do that correctly.


    What is not in our solution


    The Bitrix code is not cut up entirely into micro-functions / microservices (with registration as one service, the online-store module as another, and so on). Each container stores the entire code base.


    We also do not keep the database in Kubernetes (I have nevertheless implemented database-in-Kubernetes setups for development environments, but not for production).


    Site administrators will still notice that the site runs in Kubernetes: the "system check" function does not work correctly, and to edit the site code from the admin panel you first have to click an "I want to edit the code" button.


    The problems are identified, the need for a microservice approach is clear, and so is the goal: to get a working system for running Bitrix applications in Kubernetes while preserving both the capabilities of Bitrix and the advantages of Kubernetes. Let's get to the implementation.


    Architecture


    Many "worker" pods with a web server (workers).
    One cron pod for cron tasks (strictly just one).
    One upgrade pod for editing site code from the admin panel (also strictly just one).



    We need to answer the following questions:


    • Where do we store sessions?
    • Where do we store the cache?
    • Where do we store static files, so that gigabytes of static content do not end up in a pile of containers?
    • How will the database work?

    Docker image


    We start by building a Docker image.


    The ideal option is a single universal image from which we get the worker pods, the cron pods, and the upgrade pods alike.


    We built exactly such an image.


    It includes nginx, apache/php-fpm (selectable at build time), msmtp for sending mail, and cron.


    When the image is built, the full code base of the site is copied into the /app directory (except for the parts that we place in a separate shared storage).


    Microservices and services


    Worker pods:


    • a container with nginx + apache/php-fpm + msmtp
    • msmtp could not be split out into a separate microservice: Bitrix starts complaining that it cannot send mail directly
    • each container holds the full code base
    • changing the code inside the containers is forbidden

    Cron pod:


    • a container with apache, php, and cron
    • the full code base included
    • changing the code inside the container is forbidden

    Upgrade pod:


    • a container with nginx + apache/php-fpm + msmtp
    • no ban on changing the code inside the container

    session storage


    Bitrix Cache Storage


    More importantly, we store the passwords for connecting to everything, from the database to mail, in Kubernetes secrets. As a bonus, the passwords are visible only to those we grant access to the secrets, not to everyone who has access to the project's code base.
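
    As a hedged illustration (the secret, key, and image names below are made up, not the ones from our actual chart), a password can go into a Kubernetes secret and be exposed to a pod as an environment variable roughly like this:

    apiVersion: v1
    kind: Secret
    metadata:
      name: bitrix-credentials            # illustrative name
    type: Opaque
    stringData:
      DB_PASSWORD: "change-me"            # database password, kept out of the code base
      SMTP_PASSWORD: "change-me"          # msmtp password for outgoing mail
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: worker-example                # in reality this would be a Deployment's pod template
    spec:
      containers:
        - name: worker
          image: registry.example.com/bitrix:latest   # illustrative image name
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: bitrix-credentials
                  key: DB_PASSWORD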


    Static Storage


    You can use anything: ceph, nfs (though nfs is not recommended for production), network storage from "cloud" providers, and so on.


    The storage needs to be mounted into the containers at the site's /upload/ directory and at the other directories that hold static content.
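
    A minimal sketch of what this can look like with a PersistentVolumeClaim (the storage class, size, and paths are illustrative assumptions, not taken from our chart):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: bitrix-upload
    spec:
      accessModes:
        - ReadWriteMany              # many worker pods read and write the same files
      storageClassName: cephfs       # or NFS / a cloud provider's network storage
      resources:
        requests:
          storage: 50Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: worker-example           # in reality part of the worker Deployment's pod template
    spec:
      containers:
        - name: worker
          image: registry.example.com/bitrix:latest   # illustrative image name
          volumeMounts:
            - name: upload
              mountPath: /app/upload                  # the site's upload directory inside the container
      volumes:
        - name: upload
          persistentVolumeClaim:
            claimName: bitrix-upload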


    Database


    For simplicity, we recommend moving the database outside of Kubernetes. A database in Kubernetes is a separate and complex task; it would make the whole setup much more complicated.


    Session Storage


    We use memcached :)


    It handles session storage well, can be clustered, and is supported natively as session.save_path in php. This scheme has been battle-tested many times in the classic monolithic architecture, when we built clusters with large numbers of web servers. We use helm for deployment.


    $ helm install stable/memcached --name session

    php.ini - this is where, inside the image, the settings for storing sessions in memcached are set


    We used Environment Variables to pass the memcached host into the pods: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/ .
    This lets us use the same code in the dev, stage, test, and prod environments (their memcached hostnames differ, so each environment has to be given its own hostname for sessions).
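
    A hedged sketch of how the memcached host can be passed in (the variable name and service name are assumptions; the real ones depend on the chart and the helm release):

    apiVersion: v1
    kind: Pod
    metadata:
      name: worker-example
    spec:
      containers:
        - name: worker
          image: registry.example.com/bitrix:latest      # illustrative image name
          env:
            # with "helm install stable/memcached --name session" the service is
            # typically reachable as "session-memcached" inside the cluster
            - name: MEMCACHED_SESSION_HOST
              value: "session-memcached"
            # inside the image, php.ini can then reference the variable, for example:
            #   session.save_handler = memcached
            #   session.save_path    = "${MEMCACHED_SESSION_HOST}:11211"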


    Bitrix Cache Storage


    We need fault-tolerant storage that all pods can write to and read from.


    We also use memcached.
    This solution is recommended by Bitrix themselves.


    $ helm install stable/memcached --name cache

    bitrix/.settings_extra.php - this is where Bitrix is told where we store the cache


    We also use Environment Variables.


    Cron tasks


    There are different approaches to running cron tasks in Kubernetes:


    • a separate deployment with a pod
    • a CronJob that runs the cron task (if it is a web app - with wget https://$host$cronjobname, or kubectl exec inside one of the worker pods, etc.)
    • and so on

    You can argue about which one is the most correct, but in this case we chose the option "a separate deployment with pods for cron tasks".


    How to do it (a rough sketch of such a deployment follows the list below):


    • add the cron entries via a ConfigMap or via the config/addcron file
    • in a single instance, run a container identical to a worker pod and allow cron tasks to run in it
    • the same code base is used; thanks to the unified image, building the container is simple
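
    Roughly like this, as a hedged sketch (the image name and the ENABLE_CRON switch are illustrative assumptions about how the entrypoint enables cron, not our actual manifest):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cron
    spec:
      replicas: 1                        # strictly one pod, so every cron task runs exactly once
      selector:
        matchLabels:
          app: cron
      template:
        metadata:
          labels:
            app: cron
        spec:
          containers:
            - name: cron
              image: registry.example.com/bitrix:latest   # the same universal image as the workers (illustrative name)
              env:
                - name: ENABLE_CRON       # hypothetical switch telling the entrypoint to start cron
                  value: "1"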

    What we gain:


    • working cron tasks in an environment identical to the developers' environment (docker)
    • cron tasks do not need to be "rewritten" for Kubernetes; they run in the same form and on the same code base as before
    • cron entries can be added by any team member with commit rights to the production branch, not only by admins

    Southbridge K8SDeploy module and editing code from the admin panel


    Remember the upgrade pod?
    How do we direct traffic to it?
    Hooray, we wrote a php module for this :) It is a small classic Bitrix module. It is not publicly available yet, but we plan to open-source it.
    The module is installed like a regular Bitrix module:



    And it looks like this:



    It lets you set a cookie that identifies the site administrator, so that Kubernetes can route that administrator's traffic to the upgrade pod.
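
    How exactly the traffic is switched depends on the ingress controller. As one hedged possibility (not necessarily what our module and chart do), the nginx ingress controller can route by cookie using a canary ingress placed in front of the upgrade service; the cookie name below is made up:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: upgrade                      # lives alongside the main ingress that points at the workers
      annotations:
        nginx.ingress.kubernetes.io/canary: "true"
        # requests carrying this cookie with the value "always" go to the upgrade service
        nginx.ingress.kubernetes.io/canary-by-cookie: "bitrix_admin_edit"
    spec:
      ingressClassName: nginx
      rules:
        - host: example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: upgrade
                    port:
                      number: 80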


    When the changes are finished, you do a git push: the code changes go to git, then the system builds an image with the new version of the code and "rolls" it out across the cluster, replacing the old pods.


    Yes, it is a bit of a crutch, but it preserves the microservice architecture and does not take away from Bitrix users their beloved ability to fix code from the admin panel. In the end it is just one option; the code-editing problem can be solved differently.


    Helm chart


    To deploy applications to Kubernetes we usually use the Helm package manager.
    For our Bitrix-in-Kubernetes solution, Sergey Bondarev, our lead system administrator, wrote a dedicated Helm chart.


    It creates the worker, upgrade, and cron pods, configures ingresses and services, and passes variables from Kubernetes secrets into the pods.


    We store the code in Gitlab, and we also run the Helm deployment from Gitlab.


    In short, it looks like this


    $ helm upgrade --install project .helm --set image=registrygitlab.local/k8s/bitrix -f .helm/values.yaml --wait --timeout 300 --debug --tiller-namespace=production
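
    To give an idea of the shape of the configuration, here is a purely illustrative guess at what .helm/values.yaml might contain; the real chart's keys are certainly different:

    image: registrygitlab.local/k8s/bitrix   # overridden from the pipeline with --set image=...
    workers:
      replicas: 4                            # number of worker pods
    ingress:
      host: example.com
    memcached:
      sessionHost: session-memcached         # where PHP stores sessions
      cacheHost: cache-memcached             # where Bitrix stores its cache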

    Helm also makes it possible to do a "seamless" rollback if something suddenly goes wrong during a deploy. It is nice when you are not frantically fixing the code over ftp because prod is down, and Kubernetes does it automatically and without downtime.


    Deploy


    Yes, we are fans of Gitlab and Gitlab CI, and we use them :)
    On a commit to the project repository, Gitlab launches a pipeline that deploys a new version of the environment.


    Stages (a rough sketch of such a pipeline follows the list):


    • build (build a new Docker image)
    • test (run the tests)
    • clean up (remove the test environment)
    • push (push the image to the Docker registry)
    • deploy (deploy the application to Kubernetes via Helm).
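
    A very rough sketch of such a pipeline (job names, variables, and flags are illustrative; the real .gitlab-ci.yml is more involved, and the test/cleanup/push jobs are omitted here):

    stages:
      - build
      - test
      - cleanup
      - push
      - deploy

    build:
      stage: build
      script:
        # build the image, tagged with the commit SHA
        - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .

    deploy:
      stage: deploy
      environment:
        name: production
      script:
        # roll the new image out to the cluster via the Helm chart
        - helm upgrade --install project .helm --set image=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA -f .helm/values.yaml --wait --timeout 300 --tiller-namespace=production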


    Hooray, it is ready to be rolled out!
    Or to take questions, if you have any.


    So, what have we done?


    From a technical point of view:


    • dockerized Bitrix;
    • "cut" Bitrix into containers, each of which performs a minimum of functions;
    • made the containers stateless;
    • solved the problem of updating Bitrix in Kubernetes;
    • kept all Bitrix functions working (well, almost all);
    • got deployment to Kubernetes and rollback between versions working.

    From a business perspective:


    • fault tolerance;
    • Kubernetes tooling (easy integration with Gitlab CI, seamless deployment, etc.);
    • passwords kept in secrets (visible only to those who are explicitly granted access to them);
    • it is convenient to create additional environments (for development, tests, etc.) inside a single infrastructure.
