Valid SSL domain names for local Docker containers


    Using Docker in the development process has long been the de facto standard: launching an application with all of its dependencies with a single command is a more and more familiar routine. If an application exposes a web interface or an HTTP API, the front-line container most likely forwards its port to a unique (among the other applications you develop in parallel) port on the host, which we then hit to interact with the application inside the container.

    And it works fine until you have a whole zoo of applications: switching between them starts to cause inconvenience, since you have to remember the scheme and the port for each one, and keep a record somewhere of which ports you once allocated to which application so that collisions do not creep in over time.

    And then you also want to test things over https, and you have to either install your own root certificate or always call curl --insecure ..., and when several people work on the applications, the number of hassles starts to grow exponentially.

    Facing this problem once again, the thought "Enough of this!" flashed through my head, and the result of a couple of days off spent on it is a service that solves the problem at the root, which is discussed below. For the impatient, traditionally, a link .

    The reverse proxy will save the world

    Ideally, we need some domain zone in which every sub-domain always resolves to localhost. A quick search turned up domains such as *.vcap.me and others, but how do you attach a valid SSL certificate to them? After tinkering with my own root certificate, I managed to get curl running without errors, but not all browsers accepted it correctly, and some kept throwing errors. Besides, I really did not want to "fiddle" with SSL like that.

    "Well, let's go on the other side!" - and a domain with the name was immediately acquired, delegated to CloudFlare, the required resolution was configured (all sub-domains are resolved

    $ dig | grep -v '^;\|^$'
        190 IN  A

    After that, certbot was launched in a container: given a CloudFlare API key as input, it confirms domain ownership via DNS records and produces a valid SSL certificate as output:

    $ docker run \
      --entrypoint="" \
      -v "$(pwd)/cf-config.conf:/cf-credentials:ro" \
      -v "$(pwd)/cert:/out:rw" \
      -v "/etc/passwd:/etc/passwd:ro" \
      -v "/etc/group:/etc/group:ro" \
      certbot/dns-cloudflare:latest sh -c \
        "certbot certonly \
        --dns-cloudflare \
        --dns-cloudflare-credentials '/cf-credentials' \
        -d '*' \
        --non-interactive \
        --agree-tos \
        --email '$CF_EMAIL' \
        --server '' \
        && cp -f /etc/letsencrypt/live/* /out \
        && chown '$(id -u):$(id -g)' /out/*"

    The file ./cf-config.conf contains the CloudFlare authorization data (more details can be found in the certbot documentation), and $CF_EMAIL is an environment variable with your email.
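    For reference, the credentials file for the certbot dns-cloudflare plugin is a small ini-style file; a minimal sketch (the email and key values are placeholders, of course):

    ```ini
    # Sample ./cf-config.conf for the certbot dns-cloudflare plugin.
    # The global API key variant is shown; a scoped API token
    # (dns_cloudflare_api_token) can be used instead.
    dns_cloudflare_email = you@example.com
    dns_cloudflare_api_key = 0123456789abcdef0123456789abcdef01234
    ```

    Keep the file permissions tight (e.g. chmod 600), since it grants full control over your DNS zone.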

    Ok, now we have a valid SSL certificate (albeit for 3 months, and only for single-level subdomains). It remains to somehow learn to proxy every request that arrives at localhost into the desired container.
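    Since the certificate lives for only about three months, it is handy to be able to check its expiry date. A minimal sketch with openssl (the throwaway self-signed certificate below is only an assumption for demonstration; in real use you would point openssl at the downloaded fullchain.pem):

    ```shell
    # Generate a throwaway self-signed wildcard certificate just so there
    # is something to inspect (replace with the real fullchain.pem).
    openssl req -x509 -newkey rsa:2048 -nodes \
      -keyout /tmp/demo.key -out /tmp/demo.crt \
      -days 90 -subj "/CN=*.example.test" 2>/dev/null

    # Print the expiry date of the certificate.
    openssl x509 -enddate -noout -in /tmp/demo.crt
    ```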

    And here Traefik comes to our aid (spoiler: it is beautiful). Run it locally with the docker socket mounted into its container as a volume, and it can proxy requests to any container that carries the right docker label. So no extra configuration is needed beyond launching it: just set the desired label (and the docker network, although when running without docker-compose even that is optional, though highly desirable) on the container that we want to reach by domain name with valid SSL!
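    For docker-compose users, the same wiring might look roughly like this (the service name, network name, and hostname in the rule are illustrative assumptions; the label syntax is Traefik 1.x, matching the traefik.port label used below):

    ```yaml
    version: '3'

    services:
      app:
        image: nginx:alpine
        networks:
          - localhost-tools-network
        labels:
          # Traefik 1.x: route requests for this hostname to the container.
          # The hostname here is a placeholder, not the project's real zone.
          - "traefik.frontend.rule=Host:app.example.test"
          - "traefik.port=80"
          - "traefik.docker.network=localhost-tools-network"

    networks:
      # The network the reverse proxy is attached to, created beforehand.
      localhost-tools-network:
        external: true
    ```

    The external network lets the proxy (started separately) and your application containers see each other without any per-application proxy configuration.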

    Having gone all this way, a docker container with this very pre-configured Traefik and the wildcard SSL certificate saw the light of day (yes, the certificate is public).

    SSL private key in a public container?

    Yes, but I do not think it is a problem, since the certificate covers a domain zone that always resolves to localhost. MitM in this case makes little sense in principle.

    What to do when the certificate expires?

    Just pull a fresh image and restart the container. The project has CI configured that automatically (once a month, for now) renews the certificate and publishes a fresh image.

    I want to try!

    Nothing could be easier. First of all, make sure that local ports 80 and 443 are free, and run:

    # Create a docker network for our reverse proxy
    $ docker network create localhost-tools-network
    # Launch the reverse proxy itself
    $ docker run -d --rm \
        -v /var/run/docker.sock:/var/run/docker.sock \
        --network localhost-tools-network \
        --name \
        -p 80:80 -p 443:443 \
    # Launch nginx, telling it to respond on ""
    $ docker run -d --rm \
        --network localhost-tools-network \
        --label "" \
        --label "traefik.port=80" \

    And now we can test:

    $ curl -sS | grep Welcome
    <title>Welcome to nginx!</title>
    <h1>Welcome to nginx!</h1>
    $ curl -sS | grep Welcome
    <title>Welcome to nginx!</title>
    <h1>Welcome to nginx!</h1>

    As you can see, it works :)

    Where does the documentation live?

    Everything, as you might guess, lives at . Moreover, the front page is responsive, and it can check whether the reverse proxy daemon is running locally and display a list of running containers available for interaction (if any).

    How much does it cost?

    Nothing at all. Having built this thing for myself and my team, I came to realize that it could be useful to other developers and ops folks. Moreover, only the domain name costs money; everything else can be used free of charge.

    P.S. The service is still in beta, so if you find any flaws, typos, etc., just drop me a private message. The programming and website development hubs are chosen because this approach is likely to be most useful in those fields.
