Docker + Laravel = ❤

    laravel-in-docker


    In this article I will talk about my experience of wrapping a Laravel application into Docker containers, so that frontend and backend developers can work with it locally, and launching it in production is as simple as possible. CI will also automatically run static code analyzers and phpunit tests and build the images.


    "What is the difficulty?" - you can say, and you will be partly right. The fact is that quite a lot of discussions in the Russian-speaking and English-speaking community are devoted to this topic, and I would conditionally divide almost all the studied threads into the following categories:


    • "I use docker for local development. I put laradock and I don’t know the troubles." Cool, but what about the automation and production launch?
    • "I collect one container (monolith) on the base fedora:latest(~ 230 Mb), put all the services in it (nginx, bd, cache, etc), start everything with the supervisor inside." Too great, easy to run, but what about the ideology of "one container - one process"? How are things going with balancing and process control? What is the size of the image?
    • "Here you have the pieces of configs, we season with extracts from sh-scripts, add magical env-values, use it." Thank you, but how about at least one living example that I could fork and play a full game?

    Everything you read below is subjective experience that does not claim to be the ultimate truth. If you have additions or have spotted inaccuracies - welcome to the comments.


    For the impatient - here is a link to the repository, which you can clone to run a Laravel application with a single command. It is also not difficult to run it on, say, Rancher, linking the containers correctly, or to use the production docker-compose.yml as a starting point.

    Theoretical part


    What tools will we use, and what should we pay attention to? First of all, we will need the following installed on the host:


    • docker - version 18.06.1-ce was used at the time of writing
    • docker-compose - does an excellent job of linking containers and storing the necessary environment values; version 1.22.0
    • make - you might be surprised, but it fits perfectly into the context of working with Docker

    On Debian-like systems you can install docker with the one-liner curl -fsSL get.docker.com | sudo sh, but docker-compose is better installed via pip, since the most recent versions live in its repositories (apt, as a rule, lags far behind).
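    For reference, a minimal installation sketch for a Debian-like host (assuming pip is already available):


    $ curl -fsSL get.docker.com | sudo sh             # official convenience script for docker
    $ sudo pip install docker-compose                 # latest docker-compose from PyPI
    $ docker --version && docker-compose --version    # sanity check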

    This completes the list of dependencies. What you use to work with the source code - phpstorm, netbeans or good old vim - is entirely up to you.


    Next is an improvised Q&A in the context of (I am not afraid of this word) image design:


    • Q: Base image - which one is better to choose?


    • A: The one that is "thinner", without anything extra. Based on alpine (~5 MB) you can build everything your heart desires, but most likely you will have to fiddle with building services from source. As an alternative - jessie-slim (~30 MB). Or use whichever base is used most often on your projects.
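    For illustration only (the package names are assumptions, not the project's actual Dockerfile), a minimal alpine-based PHP image could start like this:


    FROM alpine:3.8

    # php and php-fpm straight from the alpine repositories;
    # anything missing there would have to be built from source
    RUN apk add --no-cache php7 php7-fpm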


    • Q: Why is image weight important?


    • A: Less traffic, lower probability of an error while downloading (less data - lower probability), less disk space consumed. The rule "Heavy is good, heavy is reliable" (© "Snatch") does not work very well here.


    • Q: But my friend %friend_name% says that a "monolithic" image with all the dependencies is the best way.


    • A: Let's just count. The application has 3 dependencies - PG, Redis, PHP. And you want to test how it behaves with different combinations of versions of these dependencies: PG 9.6 and 10, Redis 3.2 and 4.0, PHP 7.0 and 7.2. If each dependency is a separate image, you will need 6 of them, and you do not even have to build them - everything is ready and lives on hub.docker.com. If, for ideological reasons, all the dependencies are "packed" into one container, will you have to rebuild it by hand... 8 times? Now add the condition that you also want to play with opcache. With decomposition it is simply a matter of changing the tags of the images used (see the sketch below). A monolith may be easier to run and maintain, but it is a road to nowhere.
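    A sketch of what "just changing tags" means in docker-compose.yml (the service names and tags here are illustrative):


    version: "3.2"

    services:
      postgres:
        image: postgres:10-alpine   # swap to postgres:9.6-alpine to test another combination
      redis:
        image: redis:4.0-alpine     # or redis:3.2-alpine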


    • Q: Why is a container supervisor evil?


    • A: Because of PID 1. You do not want a pile of problems with zombie processes, and you do want to be able to flexibly "add capacity" where it is needed - so try to run one process per container. Peculiar exceptions are nginx with its workers and php-fpm, which tend to spawn processes, but this has to be tolerated (moreover, they respond to SIGTERM quite well, correctly "killing" their workers). By running all the daemons under a supervisor you most likely doom yourself to problems. Although in some cases it is hard to do without it, those are exceptions.



    Having decided on the main approaches, let's move on to our application. It should be able to:


    • web|api - serve static content using nginx and generate dynamic content using fpm
    • scheduler - run the native task scheduler
    • queue - process jobs from the queues
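    As a sketch, the commands behind these roles might look as follows (the service names are assumptions):


    $ docker-compose up -d nginx app                             # web|api: nginx + php-fpm
    $ docker-compose run --rm app php /app/artisan queue:work    # queue: process jobs from the queues
    $ docker-compose run --rm app php /app/artisan schedule:run  # scheduler: invoked once a minute (e.g. by cron)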

    This is the basic set, which can be extended if necessary. Now let's move on to the images we have to build for our application to "take off" (their code names are given in brackets):


    • PHP + PHP-FPM (app) - the environment in which our code will be executed. Since the PHP and FPM versions will be the same for us, we build them into one image. This makes the configs easier to manage, and the package set will be identical. Of course, FPM and the application processes will run in different containers.
    • nginx (nginx) - in order not to bother with delivering configs and optional modules for nginx, we will build a separate image for it. Since it is a separate service, it has its own docker file and its own build context.
    • Application sources (sources) - the sources will be delivered using a separate image, mounted as a volume into the container with the app. The base image is alpine; inside there are only the source code with installed dependencies and the assets built with webpack (build artifacts).
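    A rough sketch of such a sources image (the real Dockerfile lives in docker/sources and may differ):


    FROM alpine:3.8

    # nothing but the code, vendor/ and the built assets end up inside;
    # the directory is exposed as a volume and mounted into the app container
    COPY . /app
    VOLUME ["/app"]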

    The rest of the development services are launched in containers pulled from hub.docker.com; in production they run on separate servers, clustered together. All that remains is to tell the application (through the environment) at which addresses/ports and with which credentials to reach them. Using service discovery for this would be even cooler, but that is a topic for another time.
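    For Laravel this boils down to a handful of environment values, for example (the hostnames here are the compose service names and are assumptions):


    DB_HOST=postgres
    DB_PORT=5432
    REDIS_HOST=redis
    REDIS_PORT=6379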


    Having dealt with the theoretical part, I suggest moving on to the next one.


    Practical part


    I propose to organize the files in the repository as follows:


    .
    ├── docker  # Directory with the docker files for the required services
    │   ├── app
    │   │   ├── Dockerfile
    │   │   └── ...
    │   ├── nginx
    │   │   ├── Dockerfile
    │   │   └── ...
    │   └── sources
    │       ├── Dockerfile
    │       └── ...
    ├── src  # Application sources
    │   ├── app
    │   ├── bootstrap
    │   ├── config
    │   ├── artisan
    │   └── ...
    ├── docker-compose.yml  # Compose config for local development
    ├── Makefile
    ├── CHANGELOG.md
    └── README.md

    You can view the structure and the files by following this link.

    To build a service, you can use the command:


    $ docker build \
      --tag %local_image_name% \
      -f ./docker/%service_directory%/Dockerfile ./docker/%service_directory%

    The only difference is the image with the source code - for it, the build context (the last argument) must be set to ./src.
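    That is, the sources image is built roughly like this:


    $ docker build \
      --tag %local_image_name% \
      -f ./docker/sources/Dockerfile ./src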


    For naming images in the local registry, I recommend using the names that docker-compose uses by default, namely: %root_directory_name%_%service_name%. If the project directory is called my-awesome-project and the service is named redis, then the (local) image name my-awesome-project_redis is the better choice.


    To speed up the build process, you can tell Docker to use the cache of a previously built image; the launch option --cache-from %full_registry_name% exists for this. This way the Docker daemon, when executing a given instruction in the Dockerfile, checks whether it has changed, and if not (the hashes match) it skips the instruction and reuses the ready layer from the image you told it to use as a cache. This speeds up a rebuild quite noticeably, especially if nothing has changed :)
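    A sketch of a cached build (the registry name is a placeholder); note that the image used as a cache has to be pulled first:


    $ docker pull %full_registry_name%:latest || true
    $ docker build \
      --cache-from %full_registry_name%:latest \
      --tag %local_image_name% \
      -f ./docker/app/Dockerfile ./docker/app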

    Pay attention to the ENTRYPOINT launch scripts of the application containers.

    The image of the environment for running the application (app) was built with the fact in mind that it will be used not only in production but also locally, where developers need to interact with it efficiently. Installing and removing composer dependencies, running unit tests, tailing logs and using familiar aliases (art for php /app/artisan, c for composer) should all work without any discomfort. Moreover, it will also be used to run unit tests and static code analyzers (phpstan in our case) on CI. That is why its Dockerfile, for example, contains a line that installs xdebug, but the module itself is not enabled (it is turned on only on CI).
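    A sketch of this idea, assuming the official php:7.2-fpm-alpine base rather than the project's actual one:


    FROM php:7.2-fpm-alpine

    # build and install xdebug, but deliberately do NOT enable it here;
    # the extension is switched on only in the CI environment
    RUN apk add --no-cache --virtual .build-deps $PHPIZE_DEPS \
        && pecl install xdebug \
        && apk del .build-deps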


    Also, the hirak/prestissimo package is installed globally for composer, which significantly speeds up the installation of all dependencies.
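    A possible way to add it at image build time (a sketch, assuming composer is already present in the image):


    RUN composer global require hirak/prestissimo --no-interaction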

    In production, the contents of the /src directory from the sources image are mounted into the /app directory inside it. For development we "forward" the local directory with the application source code instead ( -v "$(pwd)/src:/app:rw" ).


    And here lies one difficulty - the access rights to files created from inside the container. The thing is that, by default, processes inside the container run as root (root:root), the files created by these processes (cache, logs, sessions, etc.) are owned by root as well, and as a result you cannot do anything with them locally without running sudo chown -R $(id -u):$(id -g) /path/to/sources.


    One of the solutions is to use fixuid, but it is frankly "so-so". A better way, it seemed to me, is to forward the local USER_ID and GROUP_ID into the container and start the processes with these values. Substituting the default values 1000:1000 (the defaults for the first local user) got rid of the $(id -u):$(id -g) call, and if necessary they can always be overridden ($ USER_ID=666 docker-compose up -d) or put into the .env file for docker-compose.
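    A sketch of the corresponding docker-compose.yml fragment (defaults use the ${VAR:-default} substitution syntax):


    services:
      app:
        user: "${USER_ID:-1000}:${GROUP_ID:-1000}"  # run processes with the host user's UID/GID
        volumes:
          - ./src:/app:rw                           # local sources instead of the sources image

    If the defaults do not match your user, something like USER_ID=$(id -u) GROUP_ID=$(id -g) docker-compose up -d does the trick.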


    Also, when running php-fpm locally, do not forget to disable opcache for it - otherwise a pile of "what the hell is this?!" moments is guaranteed.


    For "direct" connections to redis and postgres I exposed additional ports "outside" (16379 and 15432 respectively), so there is no problem at all to "connect and see what is actually going on".
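    In docker-compose.yml this is just an extra port mapping per service (a sketch):


    services:
      redis:
        ports:
          - "16379:6379"   # redis-cli -p 16379
      postgres:
        ports:
          - "15432:5432"   # psql -h 127.0.0.1 -p 15432 -U <user>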


    I keep the container with the code name app running (--command keep-alive.sh) in order to have convenient access to the application.
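    The keep-alive script can be as trivial as an endless sleep loop (a sketch, not necessarily the project's actual script); after that, docker-compose exec app sh gives you a shell at any time:


    #!/usr/bin/env sh
    # keep the container alive so it can be exec'ed into at any moment
    while true; do
        sleep 3600
    done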


    Here are some examples of solving "everyday" problems with docker-compose:


    Operation                       Executable command
    Install a composer package      $ docker-compose exec app composer require package/name
    Run phpunit                     $ docker-compose exec app php ./vendor/bin/phpunit --no-coverage
    Install all node dependencies   $ docker-compose run --rm node npm install
    Install a node package          $ docker-compose run --rm node npm i package_name
    Start live asset rebuilding     $ docker-compose run --rm node npm run watch

    All startup details can be found in the docker-compose.yml file .


    Bring make to life!


    Typing the same commands over and over gets boring already the second time around, and since programmers are lazy creatures by nature, let's take care of "automating" them. Keeping a set of sh scripts is an option, but not as attractive as a single Makefile, especially since its applicability in modern development is greatly underestimated.


    You can find a complete Russian-language manual on it at this link.

    Let's see what running make in the root of the repository looks like:


    [user@host ~/projects/app] $ make
      help            Show this help
      app-pull        Application - pull latest Docker image (from remote registry)
      app             Application - build Docker image locally
      app-push        Application - tag and push Docker image into remote registry
      sources-pull    Sources - pull latest Docker image (from remote registry)
      sources         Sources - build Docker image locally
      sources-push    Sources - tag and push Docker image into remote registry
      nginx-pull      Nginx - pull latest Docker image (from remote registry)
      nginx           Nginx - build Docker image locally
      nginx-push      Nginx - tag and push Docker image into remote registry
      pull            Pull all Docker images (from remote registry)
      build           Build all Docker images
      push            Tag and push all Docker images into remote registry
      login           Log in to a remote Docker registry
      clean           Remove images from local registry
      --------------- ---------------
      up              Start all containers (in background) for development
      down            Stop all started for development containers
      restart         Restart all started for development containers
      shell           Start shell into application container
      install         Install application dependencies into application container
      watch           Start watching assets for changes (node)
      init            Make full application initialization (install, seed, build assets)
      test            Execute application tests
      Allowed for overriding next properties:
        PULL_TAG - Tag for pulling images before building own
                  ('latest' by default)
        PUBLISH_TAGS - Tags list for building and pushing into remote registry
                       (delimiter - single space, 'latest' by default)
      Usage example:
        make PULL_TAG='v1.2.3' PUBLISH_TAGS='latest v1.2.3 test-tag' app-push

    It handles target dependencies very well. For example, to run watch (docker-compose run --rm node npm run watch) the application must be "up" - all you have to do is specify the up target as a dependency, and you no longer need to worry about forgetting something before calling watch - make will do everything for you. The same applies to running tests and static analyzers, for example before committing changes - just run make test and all the magic will happen for you!
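    A minimal sketch of this dependency trick (target names follow the listing above, the recipes are simplified; recipe lines are indented with tabs):


    .PHONY: up watch test

    up: ## Start all containers (in background) for development
    	docker-compose up -d

    watch: up ## Start watching assets for changes (node)
    	docker-compose run --rm node npm run watch

    test: up ## Execute application tests
    	docker-compose exec app php ./vendor/bin/phpunit --no-coverage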


    Is it even worth saying that you no longer have to worry about building the images, pulling them, the --cache-from instructions and all the rest?


    You can view the contents of the Makefile at this link.


    Automation part


    Let's move on to the final part of this article - automating the process of updating images in the Docker Registry. Although GitLab CI is used in my example, I think it is quite possible to transfer the idea to any other integration service.


    First of all, let's define and name the image tags we use:


    Tag name          Purpose
    latest            Images built from the master branch.
                      The code is in its most "fresh" state, but not yet ready for a release.
    some-branch-name  Images built on the some-branch-name branch.
                      This way we can "roll out" to any environment the changes implemented within a specific branch before merging it into master - it is enough to pull the images with this tag.
                      And yes - the changes can affect both the code and the images of any of the services!
    vX.X.X            An actual release of the application (use it to deploy a specific version)
    stable            An alias for the tag of the most recent release (use it to deploy the latest stable version)

    A release is made by pushing a git tag in the vX.X.X format.
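    In other words, a release boils down to:


    $ git tag v1.2.3
    $ git push origin v1.2.3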


    To speed up the build, caching of the ./src/vendor and ./src/node_modules directories plus --cache-from for docker build is used, and the pipeline consists of the following stages (a sketch of the layout follows the table):


    Stage name  Purpose
    prepare     The preparatory stage - building the images of all services except the sources image
    test        Testing the application (running phpunit and the static code analyzers) using the images built at the prepare stage
    build       Installing all composer dependencies (--no-dev), building the assets with webpack and building the sources image including the produced artifacts (vendor/*, app.js, app.css)
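    A rough sketch of the corresponding .gitlab-ci.yml skeleton (not the actual config - see the link below for the real one):


    stages:
      - prepare
      - test
      - build

    cache:
      key: "$CI_COMMIT_REF_SLUG"
      paths:
        - src/vendor/
        - src/node_modules/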

    pipelines screenshot


    A build on the master branch on the fly, pushing images with the latest and master tags

    On average, all the build steps take about 4 minutes, which is a pretty good result (parallel execution of jobs is our everything).


    You can view the contents of the build configuration (.gitlab-ci.yml) via this link.


    Instead of a conclusion


    As you can see, organizing work with a PHP application (Laravel in this example) using Docker is not that difficult. As a test, you can fork the repository and, after replacing all occurrences of tarampampam/laravel-in-docker with your own, try everything "live" yourself.


    To start it locally, you only need to execute 2 commands:


    $ git clone https://gitlab.com/tarampampam/laravel-in-docker.git ./laravel-in-docker && cd $_
    $ make init

    Then open http://127.0.0.1:9999 in your favorite browser.


    …Taking this opportunity


    At the moment I am working as a TL on the avtocod project, and we are looking for talented PHP developers and system administrators (the development office is located in Yekaterinburg). If you consider yourself one or the other - write a letter to our HR with the text "I want to join the development team, resume: %link_in_resume%" to hr@avtocod.ru; we help with relocation.

