Packaging ASP.NET Core Applications with Docker

    ASP.NET Core applications are truly cross-platform: they can run on *nix systems and, accordingly, in Docker. Let's see how they can be packaged for deployment on Linux and used together with Nginx. Details under the cut!



    Note: we continue our series of publications of full versions of articles from Hacker magazine. The author's spelling and punctuation have been preserved.


    About Docker


    Almost everyone has heard about microservice architecture. The very concept of splitting an application into parts is not exactly new, but everything new is the well-forgotten and reworked old.


    To describe the architecture in a few words: a web application is split into separate, self-contained parts called services. Services do not interact with each other directly and do not share databases. This is done so that each service can be changed without consequences for the others. Services are packaged into containers, and among containers Docker rules the roost.


    To describe what Docker is, the term "virtual machine" is very often used as a simplification. There is definitely a similarity, but the comparison is not accurate. The easiest way to understand the difference is to look at the following images from the official Docker documentation:




    Containers use the kernel of the host operating system and share it among themselves, while virtual machines use hardware resources through a hypervisor.
    A Docker image is a read-only object that, in essence, stores a template for building a container. A container is the environment in which the code executes. Images are stored in registries. For example, the official Docker Hub registry allows you to keep only one image private for free. Still, it is free, so you should thank them even for that.


    INFO


    Docker is not the only containerization technology. Besides it, there are others. For example:


    rkt (pronounced 'rocket') from CoreOS


    LXD (pronounced 'lex-dee') from Ubuntu


    Windows Containers (you will never guess who these are from).


    Now that we are familiar with the theory, let's move on to practice.


    There is not much point in going through the Docker installation, since it is available for many operating systems. I will only note that you can download it for your platform from the Docker Store. If you install Docker on Windows, virtualization must be enabled in the BIOS and in the OS. You can read how to enable it in Windows 10 in the following article: Installing Hyper-V in Windows 10.


    Creating a project with Docker support


    Docker is, of course, a Linux-based product, but when necessary you can use it while developing on Mac or Windows. When creating a project in Visual Studio, to add Docker support simply tick the Enable Docker Support checkbox.


    Docker support can also be added to an existing project. It is added the same way other new components are: via the context menu, Add > Docker Support.


    If Docker is installed and running on your machine, a console window will open automatically and the following command will run:


    docker pull microsoft/aspnetcore:2.0

    which starts downloading the image. This image is essentially a base on which your own image will be built. ASP.NET Core 2.1 uses a different base image: microsoft/dotnet:sdk.


    The following files will be automatically created in the solution directory:
    .dockerignore (excludes files and directories from the Docker image), docker-compose.yml (configures running several services), docker-compose.override.yml (an auxiliary docker-compose configuration), and docker-compose.dcproj (a project file for Visual Studio).
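    For illustration, a generated .dockerignore usually excludes build output and tooling directories. The exact contents depend on the Visual Studio version; a typical sketch might look like this:

```
**/bin
**/obj
**/.vs
**/.git
docker-compose*
```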


    A Dockerfile will be created in the project directory. It is with the help of this file that we build our own image. By default (if the project is called DockerServiceDemo) it may look something like this:


    FROM microsoft/aspnetcore:2.0 AS base
    WORKDIR /app
    EXPOSE 80
    FROM microsoft/aspnetcore-build:2.0 AS build
    WORKDIR /src
    COPY DockerServiceDemo/DockerServiceDemo.csproj DockerServiceDemo/
    RUN dotnet restore DockerServiceDemo/DockerServiceDemo.csproj
    COPY . .
    WORKDIR /src/DockerServiceDemo
    RUN dotnet build DockerServiceDemo.csproj -c Release -o /app
    FROM build AS publish
    RUN dotnet publish DockerServiceDemo.csproj -c Release -o /app
    FROM base AS final
    WORKDIR /app
    COPY --from=publish /app .
    ENTRYPOINT ["dotnet", "DockerServiceDemo.dll"]

    The initial configuration for .NET Core 2.0 will not let you build an image right away with the docker build command: it is set up to be run through the docker-compose file from the directory above. To make a standalone build succeed, the Dockerfile can be brought to the following form:


    FROM microsoft/aspnetcore:2.0 AS base
    WORKDIR /app
    EXPOSE 80
    FROM microsoft/aspnetcore-build:2.0 AS build
    WORKDIR /src
    COPY DockerServiceDemo.csproj DockerServiceDemo.csproj
    RUN dotnet restore DockerServiceDemo.csproj
    COPY . .
    WORKDIR /src
    RUN dotnet build DockerServiceDemo.csproj -c Release -o /app
    FROM build AS publish
    RUN dotnet publish DockerServiceDemo.csproj -c Release -o /app
    FROM base AS final
    WORKDIR /app
    COPY --from=publish /app .
    ENTRYPOINT ["dotnet", "DockerServiceDemo.dll"]

    All I did was remove the extra DockerServiceDemo directory from the paths.


    If you are using Visual Studio Code, you will have to generate these files manually, although VS Code does offer helper functionality in the form of the Docker extension. Here is a link to the manual on working with Docker in VS Code: Working with Docker. Yes, the article is in English, but it has pictures.


    The "three chords" of Docker


    For day-to-day work with Docker it is enough to remember just a few commands.


    The most important command is, of course, building an image. To do this, use bash/CMD/PowerShell to go to the directory containing the Dockerfile and execute:


    docker build -t your_image_name . 

    Here the name of your image follows the -t parameter. Note the dot at the end of the command, after the space: it means that the current directory is used as the build context. You can mark an image with any tag (a number or a name) by putting a colon after the name and specifying the tag. If no tag is specified, the tag latest is applied by default. For an image to be pushed to a registry, its name must include the name of your account, like this:


    docker build -t docker_account_name/image_name:your_tag . 

    Here docker_account_name is the name of your Docker Hub account.


    If you created an image with a local name that does not include an account, you can re-tag it after the build using the following command:


    docker tag image_name docker_account_name/image_name:your_tag

    To push the image to the hub, run the following command:


    docker push docker_account_name/image_name:your_tag

    Before that, you must log in to your Docker account. On Windows this is done from the application UI, while on *nix it is done with the command:


    docker login

    In fact, three commands are not quite enough: you also need to be able to check that the container works. The command to start a container looks like this:


    docker run -it -p 5000:80 image_name

    The -it option creates a pseudo-TTY, and your container will respond to requests. After running the command, the service will be available at http://localhost:5000/.


    The -p 5000:80 option maps port 5000 of the host to port 80 of the container.


    In addition, there are the following commands:


    docker ps -a

    Shows a list of containers. Since the -a switch is added, all containers are displayed, not just the ones currently running.


    docker rm container_name

    Removes the container named container_name (rm is short for remove).


    docker logs container_name

    Displays the container's logs.


    docker rmi image_name

    Deletes the image named image_name.


    Running the container behind a reverse proxy server


    The thing is that ASP.NET Core applications use their own built-in web server, Kestrel, and this server is not recommended for facing production traffic directly. Why? There are several reasons.
    If several applications share an IP address and a port, Kestrel cannot distribute the traffic between them. In addition, a reverse proxy server provides an extra layer of security, simplifies load balancing and SSL configuration, and integrates better into existing infrastructure. For most developers, the additional security is the most important reason to use a reverse proxy.


    To begin, restore the initial Dockerfile configuration. Then we will deal with the docker-compose.yml file and try to run our service on its own. The yml format is pronounced "yamel", and the abbreviation stands either for "Yet Another Markup Language" or for "YAML Ain't Markup Language": either yet another markup language, or not a markup language at all. Somehow nothing is certain.


    My default docker-compose file looks like this:


    version: '3.4'
    services:
      dockerservicedemo:
        image: ${DOCKER_REGISTRY}dockerservicedemo
        build:
          context: .
          dockerfile: DockerServiceDemo/Dockerfile

    The docker-compose.override.yml file adds several settings to the configuration:
    version: '3.4'


    services:
      dockerservicedemo:
        environment:
          - ASPNETCORE_ENVIRONMENT=Development
        ports:
          - "80"

    We can build the solution with the docker-compose build command and launch our container with docker-compose up. Is everything working? Then let's move on to the next step and create the nginx.conf file. The configuration will be approximately as follows:


    worker_processes 4;
    events { worker_connections 1024; }
    http {
        sendfile on;
        upstream app_servers {
            server dockerservicedemo:80;
        }
        server {
        listen 80;
        location / {
            proxy_pass         http://app_servers;
            proxy_http_version 1.1;
            proxy_set_header   Upgrade $http_upgrade;
            proxy_set_header   Connection keep-alive;
            proxy_set_header   Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Proto $scheme;
          }
        }
    }

    Here we specify that nginx listens on port 80 (listen 80;) and forwards incoming requests to port 80 of the dockerservicedemo container. In addition, we tell nginx which headers to pass along.


    We can use http between nginx and the application while accessing the website itself via https. When an https request passes through an http proxy, a lot of https-related information is not passed on to the backend, and the client's external IP address is lost as well. To pass this information along in headers, we need to change our ASP.NET project and add the following code at the beginning of the Configure method in Startup.cs:


    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
    });

    Most proxy servers use the X-Forwarded-For and X-Forwarded-Proto headers, and these are exactly the headers specified in the nginx configuration.


    Now we include the nginx image and the nginx.conf file in the docker-compose configuration. Careful: in YAML, spaces matter:


    version: '3.4'
    services:
      dockerservicedemo:
        image: ${DOCKER_REGISTRY}dockerservicedemo
        build:
          context: .
          dockerfile: DockerServiceDemo/Dockerfile
        ports:
          - 5000:80
      proxy:
        image: nginx:latest
        volumes:
          - ./DockerServiceDemo/nginx.conf:/etc/nginx/nginx.conf
        ports:
          - 80:80

    Here we add nginx as a proxy to our configuration, and we "attach" the external settings file to this image: we mount it into the container's file system using a mechanism called volumes. If you append :ro, the object will be mounted read-only.
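    For example, a read-only mount of the nginx configuration would look roughly like this in the compose file (an illustrative fragment):

```yaml
  proxy:
    image: nginx:latest
    volumes:
      - ./DockerServiceDemo/nginx.conf:/etc/nginx/nginx.conf:ro
```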


    The proxy listens on the external port 80 of the machine running the container and forwards requests to the internal port 80 of the container.


    By executing the docker-compose up command we launch everything: the nginx image is pulled from the registry, and our container starts together with the proxy container. Now the application is accessible at http://localhost:80/ through nginx, while on port 5000 it is "spinning" under Kestrel directly.


    We can verify that requests to the web application pass through the reverse proxy. Open the developer tools in the Chrome browser and go to the Network tab. Click on localhost and select the Headers tab.



    Running the container behind a proxy with HTTPS


    ASP.NET Core 2.1 brought improvements in HTTPS support.
    The following middleware redirects from an unsecured connection to a secure one:


    app.UseHttpsRedirection();

    And the following enables the HTTP Strict Transport Security protocol (HSTS):


    app.UseHsts();

    HSTS is a web security mechanism whose specification (RFC 6797) was published in 2012. It is supported by modern browsers and tells them that the website must only be accessed over https. This provides protection against downgrade attacks, in which an attacker exploits a switch to the unprotected http protocol, for example by lowering the TLS version or even substituting the certificate.


    As a rule, this type of attack is used together with man-in-the-middle attacks. It should be known and remembered that HSTS does not help in the situation where the user first enters the site over http and is then redirected to https. For that there is the so-called Chrome preload list, which contains links to websites that support https. Other browsers (Firefox, Opera, Safari, Edge) also maintain lists of https sites based on the Chrome list. But far from all sites are contained in these lists.


    When you first run any ASP.NET Core application on Windows, you will see a message that a developer certificate has been created; clicking the button installs the certificate and makes it trusted. On macOS you can trust the certificate from the command line with:
    dotnet dev-certs https --trust


    If the dev-certs tool is not installed, you can install it with the command:


    dotnet tool install --global dotnet-dev-certs 

    How to add a certificate to the trusted list on Linux depends on the distribution.
    For test purposes we use the developer certificate; the actions for a CA-signed certificate are similar. If you wish, you can use free Let's Encrypt certificates.


    You can export the developer certificate to a file using the command:


    dotnet dev-certs https -ep path_to_created_file.pfx

    The file must be copied to the %APPDATA%/ASP.NET/Https/ directory on Windows or to /root/.aspnet/https/ on macOS/Linux.


    For the container to pick up the path to the certificate and its password, you need to create User Secrets with the following content:


    {
        "Kestrel":{
            "Certificates":{
                "Default":{
                    "Path":     "/root/.aspnet/https/your_certificate_name.pfx",
                    "Password": "your_certificate_password"
                }
            }
        }
    }

    This file stores data unencrypted and is therefore used only during development. It is created in Visual Studio by calling the context menu on the project icon, or with the user-secrets tool on Linux.


    On Windows the file is saved to the %APPDATA%\Microsoft\UserSecrets\<user_secrets_id>\secrets.json directory, and on macOS and Linux to ~/.microsoft/usersecrets/<user_secrets_id>/secrets.json.
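    As a sketch, the same values could be written from the project directory with the user-secrets tool (assuming the project already has a UserSecretsId; the path and password here are illustrative):

```
dotnet user-secrets set "Kestrel:Certificates:Default:Path" "/root/.aspnet/https/your_certificate_name.pfx"
dotnet user-secrets set "Kestrel:Certificates:Default:Password" "your_certificate_password"
```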


    To store these settings in production, some Linux distributions can use systemd. The settings are saved under the Service section. For example:


    [Service]
    Environment="Kestrel__Certificates__Default__Path=/root/.aspnet/https/your_certificate_name.pfx"
    Environment="Kestrel__Certificates__Default__Password=your_certificate_password"

    Next, I will give a working version of the Docker configuration for the proxy and the container over https, and then go through it.


    Docker-compose file:


    version: '3.4'
    services:
      dockerservicedemo:
        image: ${DOCKER_REGISTRY}dockerservicedemo
        build:
          context: .
          dockerfile: DockerServiceDemo/Dockerfile
    The override file:
    version: '3.4'
    services:
      dockerservicedemo:
        environment:
          - ASPNETCORE_ENVIRONMENT=Development
          - ASPNETCORE_URLS=https://+:44392;http://+:80
          - ASPNETCORE_HTTPS_PORT=44392
        ports:
          - "59404:80"
          - "44392:44392"
        volumes:
          - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
          - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      proxy:
        image: nginx:latest
        volumes:
          - ./DockerServiceDemo/nginx.conf:/etc/nginx/nginx.conf
          - ./DockerServiceDemo/cert.crt:/etc/nginx/cert.crt
          - ./DockerServiceDemo/cert.rsa:/etc/nginx/cert.rsa
        ports:
          - "5001:44392"

    Now let me explain the unclear points. ASPNETCORE_URLS allows us not to specify the port the application listens on in code via app.UseUrls.


    ASPNETCORE_HTTPS_PORT performs a redirect similar to what the following code would do:
    services.AddHttpsRedirection(options => options.HttpsPort = 44392);


    That is, traffic from http requests will be redirected to the specified https port.
    The ports section specifies that requests to the external port 59404 are forwarded to port 80 of the container, and requests to the external port 44392 to the container's port 44392. Theoretically, since we have a reverse proxy server configured, we could remove these port mappings.
    Using volumes, we mount the directory with the pfx certificate and the application's UserSecrets containing the password and the path to the certificate.


    The proxy section specifies that requests to the external port 5001 are forwarded to nginx's port 44392. In addition, the file with the nginx configuration is mounted, as well as the certificate and the key to the certificate.


    To create the crt and rsa files from the single pfx certificate we already have, you can use OpenSSL. First extract the certificate:


    openssl pkcs12 -in ./your_certificate.pfx -clcerts -nokeys -out domain.crt

    And then the private key:


    openssl pkcs12 -in ./your_certificate.pfx -nocerts -nodes -out domain.rsa
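    The whole round trip can be sketched end to end with OpenSSL alone, using a throwaway self-signed certificate in place of the real developer certificate (all file names and the password here are illustrative):

```shell
# Create a throwaway key and self-signed certificate (a stand-in for the dev certificate)
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 1 -nodes -subj "/CN=localhost"

# Bundle them into a .pfx (PKCS#12), similar to what dotnet dev-certs exports
openssl pkcs12 -export -out demo.pfx -inkey key.pem -in cert.pem -passout pass:demo

# Extract the certificate only (for the nginx ssl_certificate directive)
openssl pkcs12 -in demo.pfx -clcerts -nokeys -out domain.crt -passin pass:demo

# Extract the unencrypted private key (for the nginx ssl_certificate_key directive)
openssl pkcs12 -in demo.pfx -nocerts -nodes -out domain.rsa -passin pass:demo
```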

    The nginx configuration is as follows:


    worker_processes 4;
    events { worker_connections 1024; }
    http {
        sendfile on;
        upstream app_servers {
            server dockerservicedemo:44392;
        }
        server {
        listen 44392 ssl;
        ssl_certificate /etc/nginx/cert.crt;
        ssl_certificate_key /etc/nginx/cert.rsa;
        location / {
            proxy_pass         https://app_servers;
            proxy_set_header   Upgrade $http_upgrade;
            proxy_set_header   Connection keep-alive;
            proxy_set_header   Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Proto $scheme;
          }
        }
    }

    The proxy server listens on port 44392, to which requests from port 5001 of the host arrive. Next, the proxy forwards the requests to port 44392 of the dockerservicedemo container.


    Having worked through these examples, you will have a good foundation for working with Docker, microservices, and nginx.


    We remind you that this is the full version of an article from Hacker magazine. Its author is Alexey Sommer.

