Implementing a secure NAS software platform


    The previous article described the design of a NAS software platform.
    It is time to implement it.


    Check


    Before starting, be sure to check the health of the pool:


    zpool status -v

    The pool and all disks in it must be ONLINE.


    From here on, I assume that everything in the previous stage was done according to the instructions and works, or that you understand what you are doing yourself.


    Tools


    First of all, take care of convenient management, if you have not done so from the very beginning.
    You will need:


    • An SSH server: apt-get install openssh-server. If you do not know how to configure SSH, it is too early to be building a NAS on Linux: read about the specifics of its use in this article, then follow one of the manuals.
    • tmux or screen: apt-get install tmux. To keep sessions alive across SSH logins and to use multiple windows.

    After installing SSH, add a user so that you do not log in via SSH as root (root login is disabled by default, and you do not need to enable it):


    zfs create rpool/home/user
    adduser user
    cp -a /etc/skel/.[!.]* /home/user
    chown -R user:user /home/user

    This is the minimum sufficient for remote administration.
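
    If you plan to log in with keys rather than passwords (recommended), here is a minimal sketch of adding a public key for the new user (the key file name is an example):


    mkdir -p /home/user/.ssh
    cat user_key.pub >> /home/user/.ssh/authorized_keys
    chown -R user:user /home/user/.ssh
    chmod 700 /home/user/.ssh
    chmod 600 /home/user/.ssh/authorized_keys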


    However, for now, keep the keyboard and monitor connected: you will still need to reboot when upgrading the kernel, and to make sure everything comes up correctly after boot.


    An alternative is to use the Virtual KVM provided by IPMI. It offers a console, though in my case it is implemented as a Java applet, which is not very convenient.


    Configuration


    Cache preparation


    As you may remember, the configuration I described includes a separate SSD for L2ARC, which is not used yet but was taken "for growth".


    It is optional, but desirable, to fill this SSD with random data first (in the case of the Samsung EVO it will read back as zeros after running blkdiscard anyway, but not all SSDs behave this way):


    dd if=/dev/urandom of=/dev/disk/by-id/ata-Samsung_SSD_850_EVO bs=4M && blkdiscard /dev/disk/by-id/ata-Samsung_SSD_850_EVO

    Disabling log compression


    ZFS already compresses data as it is, so compressing logs with gzip would clearly be redundant.
    I turn it off:


    for file in /etc/logrotate.d/* ; do
        if grep -Eq "(^|[^#y])compress" "$file" ; then
            sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
        fi
    done

    System update


    Everything is simple:


    apt-get dist-upgrade --yes
    reboot

    Creating a snapshot for the new state


    After the reboot, to capture the new working state, recreate the first snapshot:


    zfs destroy rpool/ROOT/debian@install
    zfs snapshot rpool/ROOT/debian@install

    File System Organization


    Preparing partitions for the SLOG


    The first thing to do to achieve decent ZFS performance is to move the SLOG onto SSDs.
    Let me remind you that in this configuration the SLOG is mirrored on two SSDs: LUKS (AES-XTS) devices will be created for it on top of the fourth partition of each SSD:


    dd if=/dev/urandom of=/etc/keys/slog.key bs=1 count=4096
    cryptsetup --verbose --cipher "aes-xts-plain64:sha512" --key-size 512 --key-file /etc/keys/slog.key luksFormat /dev/disk/by-id/ata-Samsung_SSD_850_PRO-part4
    cryptsetup --verbose --cipher "aes-xts-plain64:sha512" --key-size 512 --key-file /etc/keys/slog.key luksFormat /dev/disk/by-id/ata-Micron_1100-part4
    echo "slog0_crypt1 /dev/disk/by-id/ata-Samsung_SSD_850_PRO-part4 /etc/keys/slog.key luks,discard" >> /etc/crypttab
    echo "slog0_crypt2 /dev/disk/by-id/ata-Micron_1100-part4 /etc/keys/slog.key luks,discard" >> /etc/crypttab
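
    Before the first reboot, the SLOG devices can also be opened manually, so that they are available right away (consistent with the crypttab entries above):


    cryptsetup luksOpen --key-file /etc/keys/slog.key /dev/disk/by-id/ata-Samsung_SSD_850_PRO-part4 slog0_crypt1
    cryptsetup luksOpen --key-file /etc/keys/slog.key /dev/disk/by-id/ata-Micron_1100-part4 slog0_crypt2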

    Preparing partitions for L2ARC and swap


    First you need to create partitions for swap and l2arc:


    sgdisk -n1:0:48G -t1:8200 -c1:part_swap -n2::196G -t2:8200 -c2:part_l2arc /dev/disk/by-id/ata-Samsung_SSD_850_EVO

    The swap partition and L2ARC will be encrypted with a random key: their contents are not needed after a reboot and can always be recreated.
    Therefore, lines are written into crypttab to set up these partitions in plain mode:


    echo swap_crypt /dev/disk/by-id/ata-Samsung_SSD_850_EVO-part1 /dev/urandom swap,cipher=aes-xts-plain64:sha512,size=512 >> /etc/crypttab
    echo l2arc_crypt /dev/disk/by-id/ata-Samsung_SSD_850_EVO-part2 /dev/urandom cipher=aes-xts-plain64:sha512,size=512 >> /etc/crypttab

    Then reload the daemons and enable swap:


    echo 'vm.swappiness = 10' >> /etc/sysctl.conf
    sysctl vm.swappiness=10
    systemctl daemon-reload
    systemctl start systemd-cryptsetup@swap_crypt.service
    echo /dev/mapper/swap_crypt none swap sw,discard 0 0 >> /etc/fstab
    swapon -av

    Because heavy use of swap on the SSD is not planned, the swappiness parameter, which defaults to 60, should be set to 10.


    L2ARC is not used at this stage yet, but the partition for it is now ready:


    $ ls /dev/mapper/
    control  l2arc_crypt root_crypt1  root_crypt2  slog0_crypt1  slog0_crypt2  swap_crypt  tank0_crypt0  tank0_crypt1  tank0_crypt2  tank0_crypt3

    The tankN pools


    The creation of the tank0 pool is described below; tank1 is created by analogy.


    To avoid creating identical partitions by hand and making mistakes, I wrote a script that creates encrypted partitions for pools:


    create_crypt_pool.sh
    #!/bin/bash

    KEY_SIZE=512
    POOL_NAME="$1"
    KEY_FILE="/etc/keys/${POOL_NAME}.key"
    LUKS_PARAMS="--verbose --cipher aes-xts-plain64:sha${KEY_SIZE} --key-size $KEY_SIZE"

    [ -z "$1" ] && { echo "Error: pool name empty!" ; exit 1; }
    shift
    [ -z "$*" ] && { echo "Error: devices list empty!" ; exit 1; }

    echo "Devices: $*"
    read -p "Is it ok? " a
    [ "$a" != "y" ] && { echo "Bye"; exit 1; }

    # Generate a random key for the pool
    dd if=/dev/urandom of=$KEY_FILE bs=1 count=4096

    read -s -p "Password: " phrase
    echo
    read -s -p "Repeat password: " phrase1
    echo
    [ "$phrase" != "$phrase1" ] && { echo "Error: passwords are not equal!" ; exit 1; }

    echo "### $POOL_NAME" >> /etc/crypttab
    index=0
    for i in $*; do
      echo "$phrase" | cryptsetup $LUKS_PARAMS luksFormat "$i" || exit 1
      echo "$phrase" | cryptsetup luksAddKey "$i" $KEY_FILE || exit 1
      dev_name="${POOL_NAME}_crypt${index}"
      echo "${dev_name} $i $KEY_FILE luks" >> /etc/crypttab
      cryptsetup luksOpen --key-file $KEY_FILE "$i" "$dev_name" || exit 1
      index=$((index + 1))
    done
    echo "###" >> /etc/crypttab

    # Overwrite the passphrase variables before unsetting them
    phrase="====================================================="
    phrase1="====================================================="
    unset phrase
    unset phrase1

    Now, using this script, you need to create a pool for storing data:


    ./create_crypt_pool.sh
    zpool create -o ashift=12 -O atime=off -O compression=lz4 -O normalization=formD  tank0 raidz1 /dev/disk/by-id/dm-name-tank0_crypt*
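
    The script takes the pool name followed by the list of devices; the call above is abbreviated. A hypothetical full invocation for four disks (the device names are placeholders, not from the original setup) could look like this:


    ./create_crypt_pool.sh tank0 \
        /dev/disk/by-id/ata-HGST_DISK0 \
        /dev/disk/by-id/ata-HGST_DISK1 \
        /dev/disk/by-id/ata-HGST_DISK2 \
        /dev/disk/by-id/ata-HGST_DISK3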

    For notes on the ashift=12 parameter, see my previous articles and the comments on them.


    After creating the pool, I move its log onto the SSDs:


    zpool add tank0 log mirror /dev/disk/by-id/dm-name-slog0_crypt1 /dev/disk/by-id/dm-name-slog0_crypt2

    Later, when OMV is installed and configured, it will be possible to create pools through the GUI:


    Creating a ZFS pool in the OMV web GUI


    Enabling pool import and volume automounting at boot


    To make sure that pools are imported and mounted automatically, run the following commands:


    rm /etc/zfs/zpool.cache
    systemctl enable zfs-import-scan.service
    systemctl enable zfs-mount.service
    systemctl enable zfs-import-cache.service

    At this stage, the configuration of the disk subsystem is completed.


    Operating system


    The first step is to install and configure OMV, to finally get some kind of foundation for the NAS.


    OMV installation


    OMV is installed as a deb package; the official instructions can be used for this.


    The add_repo.sh script adds the OMV Arrakis repository to /etc/apt/sources.list.d, so that the package system can see it.


    add_repo.sh
    cat <<EOF >> /etc/apt/sources.list.d/openmediavault.list
    deb http://packages.openmediavault.org/public arrakis main
    # deb http://downloads.sourceforge.net/project/openmediavault/packages arrakis main
    ## Uncomment the following line to add software from the proposed repository.
    # deb http://packages.openmediavault.org/public arrakis-proposed main
    # deb http://downloads.sourceforge.net/project/openmediavault/packages arrakis-proposed main
    ## This software is not part of OpenMediaVault, but is offered by third-party
    ## developers as a service to OpenMediaVault users.
    deb http://packages.openmediavault.org/public arrakis partner
    # deb http://downloads.sourceforge.net/project/openmediavault/packages arrakis partner
    EOF

    Please note that compared to the original, the partner repository is enabled.


    To install and initialize OMV, run the commands below.


    Commands for installing OMV.
    ./add_repo.sh
    export LANG=C
    export DEBIAN_FRONTEND=noninteractive
    export APT_LISTCHANGES_FRONTEND=none
    apt-get update
    apt-get --allow-unauthenticated install openmediavault-keyring
    apt-get update
    apt-get --yes --auto-remove --show-upgraded \
        --allow-downgrades --allow-change-held-packages \
        --no-install-recommends \
        --option Dpkg::Options::="--force-confdef" \
        --option DPkg::Options::="--force-confold" \
        install postfix openmediavault
    # Initialize the system and database.
    omv-initsystem

    OMV is now installed. It uses its own kernel, and after installation it may require a reboot.


    After rebooting, the OpenMediaVault interface will be available on port 80 (point the browser at the NAS IP address):



    The default login/password is admin/openmediavault.


    OMV Setup


    From here on, most of the configuration is done through the web GUI.


    Establishing a secure connection


    Now we need to change the web administrator's password and generate a certificate for the NAS, in order to work over HTTPS from now on.


    The password is changed on the "System-> General Settings-> Web Administrator Password" tab.
    To generate a certificate, on the "System-> Certificates-> SSL" tab, select "Add-> Create".


    The created certificate will be visible on the same tab:


    Certificate


    After creating the certificate, enable the "Enable SSL/TLS" checkbox on the "System-> General Settings" tab.


    This certificate is only needed until the configuration is complete: in the final setup, a signed certificate will be used to access OMV.


    Now log in to OMV again on port 443, or simply prefix the IP with https:// in the browser.


    If you were able to log in, enable the "Force SSL/TLS" checkbox on the "System-> General Settings" tab.


    Change ports 80 and 443 to 10080 and 10443, then try to open the following address: https://IP_NAS:10443.
    Changing the ports is important, because 80 and 443 will be used by the Docker container with nginx-reverse-proxy.


    Primary settings


    The minimum settings that must be done first:


    • On the System-> Date and Time tab, check the time zone value and set the NTP server.
    • On the System-> Monitoring tab, enable the collection of performance statistics.
    • On the System-> Power Management tab, you should probably turn off Monitoring so that OMV does not try to control the fans.

    Network


    If the second NAS network interface has not yet been connected, connect it to the router.


    Then:


    • On the System-> Network tab, set the host name to "nas" (or whatever you like).
    • Configure the bonding for the interfaces as shown in the figure below: "System-> Network-> Interfaces-> Add-> Bond" .
    • Add the desired firewall rules on the "System-> Network-> Firewall" tab. To get started, access to ports 10443, 10080, 443, 80, and 22 (for SSH), plus permission to send and receive ICMP, is enough.

    Bonding Setup


    As a result, the interfaces should appear in a bond, which the router will see as a single interface and assign one IP address:


    Interfaces in Bonding


    If desired, it is possible to further configure SSH from the WEB GUI:


    SSH setup


    Repositories and modules


    On the System-> Update Management-> Settings tab, turn on Community Supported Updates .


    First you need to add OMV extras repositories .
    This can be done simply by installing the plugin or package, as indicated on the forum .


    On the page "System-> Plugins" you need to find the plugin "openmediavault-omvextrasorg" and install it.


    As a result, the "OMV-Extras" icon will appear in the system menu (it can be seen on screenshots).


    Go there and enable the following repositories:


    • OMV-Extras.org. Stable repository containing many plugins.
    • OMV-Extras.org Testing. Some plugins from this repository are missing in the stable repository.
    • Docker CE. Actually, Docker.

    On the "System-> OMV Extras-> Kernel" tab, you can choose the kernel you need, including the kernel from Proxmox (I did not install it myself, because I do not need it yet, so I cannot recommend it):



    Install the necessary plugins (bold means absolutely necessary; italics means optional ones that I did not install):


    List of plugins.
    • openmediavault-apttool. A minimal GUI for working with the package system. Adds "Services-> Apttool".
    • openmediavault-anacron. Adds the ability to work with the asynchronous scheduler from the GUI. Adds "System-> Anacron".
    • openmediavault-backup. Provides backup of the system to the storage. Adds the "System-> Backup" page.
    • openmediavault-diskstats. Needed for collecting disk performance statistics.
    • openmediavault-dnsmasq. Allows running DNS and DHCP servers on the NAS. Since I do this on the router, I do not need it.
    • openmediavault-docker-gui. A management interface for Docker containers. Adds "Services-> Docker".
    • openmediavault-ldap. Support for authentication via LDAP. Adds "Access Rights Management-> Directory Service".
    • openmediavault-letsencrypt. Let's Encrypt support from the GUI. Not needed, because this functionality is built into the nginx-reverse-proxy container.
    • openmediavault-luksencryption. LUKS encryption support. Needed so that the encrypted disks are visible in the OMV interface. Adds "Storage-> Encryption".
    • openmediavault-nut. UPS support. Adds "Services-> UPS".
    • openmediavault-omvextrasorg. OMV Extras should already be installed.
    • openmediavault-resetperms. Allows resetting permissions and access control lists on shared directories. Adds "Access Rights Management-> Shared Folders-> Reset Permissions".
    • openmediavault-route. A useful plugin for managing routing. Adds "System-> Network-> Static Route".
    • openmediavault-symlinks. Provides the ability to create symbolic links. Adds the "Services-> Symlinks" page.
    • openmediavault-unionfilesystems. UnionFS support. May be useful in the future, although Docker uses ZFS as its backend. Adds "Storage-> Union Filesystems".
    • openmediavault-virtualbox. Can be used to embed virtual machine management into the GUI.
    • openmediavault-zfs . The plugin adds support for ZFS in OpenMediaVault. After installation, the "Storage-> ZFS" page appears .

    Disks


    All disks in the system must be visible to OMV. Verify this by looking at the "Storage-> Disks" tab. If not all drives are visible, run a scan.


    Disks in the system


    In the same place, enable write caching on all HDDs (by clicking a disk in the list and pressing the "Edit" button).
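
    If you prefer the console, write caching can also be inspected and enabled with hdparm (the device name here is an example):


    hdparm -W /dev/sda     # show the current write-cache state
    hdparm -W1 /dev/sda    # enable write caching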


    Make sure that all encrypted partitions are visible on the "Storage-> Encryption" tab :


    Encrypted partitions


    Now it is time to configure SMART, which was specified as one of the means of improving reliability:


    • Go to the "Storage-> SMART-> Settings" tab. Turn on SMART.
    • There you can also set the disk temperature thresholds (the critical level is usually 60 C, and the optimal disk temperature is 15-45 C).
    • Go to the "Storage-> SMART-> Devices" tab. Enable monitoring for each drive.
    • Go to the "Storage-> SMART-> Scheduled Tests" tab. Add a short self-test once a day and a long self-test once a month for each disk, making sure the self-test periods do not overlap. The same can be done from the console, as sketched below.
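
    For reference, the console equivalents with smartctl (the device name is an example; test scheduling is normally handled by smartd or the OMV GUI):


    smartctl -s on /dev/sda       # enable SMART on the drive
    smartctl -t short /dev/sda    # start a short self-test
    smartctl -t long /dev/sda     # start a long self-test
    smartctl -a /dev/sda          # view attributes and self-test results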

    At this point, the disk configuration can be considered complete.


    File Systems and Shared Directories


    It is necessary to create file systems for the predefined directories.
    This can be done from the console, or from the OMV web interface (Storage-> ZFS-> select pool tank0-> "Add" button-> Filesystem).


    Commands to create a file system.
    zfs create -o utf8only=on -o normalization=formD -p tank0/user_data/books
    zfs create -o utf8only=on -o normalization=formD -p tank0/user_data/music
    zfs create -o utf8only=on -o normalization=formD -p tank0/user_data/pictures
    zfs create -o utf8only=on -o normalization=formD -p tank0/user_data/downloads
    zfs create -o compression=off -o utf8only=on -o normalization=formD -p tank0/user_data/videos

    The result should be the following directory structure:



    After that, add the created file systems as shared directories on the "Access Rights Management-> Shared Folders-> Add" page.
    Note that the "Device" parameter equals the path of the file system created in ZFS, and the "Path" parameter for all directories is "/".



    Backup


    Backup is done with two tools: the openmediavault-backup plugin and zfs-auto-snapshot.



    If you use the plugin, you will most likely get an error:


    lsblk: /dev/block/0:22: not a block device

    To fix it, as the OMV developers note for this "very non-standard configuration", you could either abandon the plugin and use the ZFS tools themselves in the form of zfs send/receive,
    or explicitly specify the "Root device" parameter as the physical device the system boots from.
    It is more convenient for me to use the plugin and back up the OS from the interface than to build my own solution around zfs send, so I prefer the second option.


    Backup Setup


    To make the backup work, first create the ZFS file system tank0/apps/backup; then, in the "System-> Backup" menu, click "+" in the "Shared folder" field, add the created device as the target, and set the "Path" field to "/".
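
    From the console, creating that file system is a single command, consistent with the datasets created earlier:


    zfs create -p tank0/apps/backup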


    There are problems with zfs-auto-snapshot too. Left unconfigured, it takes snapshots every hour, every day, every week, and every month, keeping them for a year.
    The result is what you see in the screenshot:


    Lots of spam from zfs-auto-snapshot


    If you have already run into this, run the following to delete the automatic snapshots:


    zfs list -t snapshot -o name -S creation | grep "@zfs-auto-snap" | tail -n +1500 | xargs -n 1 zfs destroy -vr

    Then configure how zfs-auto-snapshot is launched from cron.
    To start, simply delete /etc/cron.hourly/zfs-auto-snapshot if you do not need hourly snapshots. A sketch of further tuning follows.
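
    Assuming the standard zfs-auto-snapshot cron layout (the paths and the default --keep values are assumptions; check your own cron files):


    rm /etc/cron.hourly/zfs-auto-snapshot    # no more hourly snapshots
    # keep two weeks of daily snapshots instead of the default
    sed -i 's/--keep=31/--keep=14/' /etc/cron.daily/zfs-auto-snapshot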


    E-mail notifications


    E-mail notification was listed as one of the means of achieving reliability, so it needs to be configured now.
    To do this, register a mailbox on one of the public servers (or set up your own SMTP server, if you really have reasons to do so).


    After that, go to the "System-> Notification" page and enter:


    • SMTP server address.
    • SMTP server port.
    • Username.
    • The address of the sender (usually the first component of the address is the same as the name).
    • User Password.
    • In the "Recipient" field, your usual address to which the NAS will send notifications.

    It is highly desirable to enable SSL / TLS.


    An example of setting for Yandex is shown in the screenshot:


    E-mail notifications


    Network configuration outside NAS


    IP address


    I use a white (public) static IP address, which costs an extra 100 rubles per month. If you do not want to pay and your address is dynamic but not behind NAT, it is possible to update external DNS records through the API of a chosen service. A sketch of doing this with Lexicon is shown below.
    However, keep in mind that an address that is not behind NAT can suddenly end up behind one: as a rule, providers give no guarantees.
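
    For illustration, updating an A record from the command line with Lexicon might look like this (a sketch: the record name and IP are placeholders, and the auth options match the ClouDNS provider used later in this article):


    lexicon cloudns update NAS.cloudns.cc A --name nas --content 203.0.113.10 \
        --auth-id "$CLOUDNS_ID" --auth-password "$CLOUDNS_PASSWORD"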


    Router


    As a router I have a Mikrotik RouterBoard , similar to the one in the picture below.


    Mikrotik Routerboard


    On the router, you need to do three things:


    • Configure static addresses for the NAS. In my case, addresses are issued via DHCP, and you need to ensure that adapters with a specific MAC address always get the same IP address. In RouterOS, this is done on the "IP-> DHCP Server" tab with the "Make static" button .
    • Configure the DNS server so that for the name "nas", as well as for names ending in ".nas" and ".NAS.cloudns.cc" (where "NAS" is a zone on ClouDNS or a similar service), it returns the IP of the system. Where to do this in RouterOS is shown in the screenshot below. In my case, this is implemented by matching the name against a regular expression: "^.*\.nas$|^nas$|^.*\.NAS.cloudns.cc$"
    • Configure port forwarding. In RouterOS this is done on the tab "IP-> Firewall" , I will not dwell on this further.

    Configuring DNS in RouterOS


    ClouDNS


    With ClouDNS, it's simple. Create an account and confirm it. NS records will already be registered for you. Then only a minimal setup is required.


    First, you need to create the necessary zones (the zone named NAS, highlighted in red in the screenshot, is what you need to create, under a different name of course).


    Creating a zone in ClouDNS


    Second, in this zone you must register the following A-records:


    • nas, www, omv, control, and the empty name. For access to the OMV interface.
    • ldap. The phpLdapAdmin interface.
    • ssp. The interface for changing user passwords.
    • test. The test server.

    The remaining domain names will be added as services are added.
    Click on the zone, then "Add new record", select the A type, and enter the record name and the IP address of the router behind which the NAS sits.


    Added A-records


    Next, you need access to the API. In ClouDNS it is a paid feature, so it must be paid for first. In other services it is free. If you know a better option that Lexicon supports, please write in the comments.


    Once API access is available, a new API user must be added.


    Adding a ClouDNS API User


    In the "IP address" field, enter the router's IP: that is the address from which the API will be accessible. Once the user is added, you can use the API by authenticating with auth-id and auth-password. These will need to be passed to Lexicon as parameters.



    This completes the ClouDNS setup.


    Containerization setup


    Docker setup


    If you installed the openmediavault-docker-gui plugin, the docker-ce package should already have been pulled in as a dependency.


    Additionally, install the docker-compose package, since it will be used to manage the containers from now on:


    apt-get install docker-compose

    Also create a file system for service configuration:


    zfs create -p tank0/docker/services

    All settings, images, and Docker containers are stored in /var/lib/docker. Docker writes there intensively (remember that this is an SSD), but more importantly, it creates snapshots, clones, and file systems with hash-like names.


    Thus, after a while a lot of garbage accumulates there, and dealing with it becomes inconvenient. An example is in the screenshot.



    To avoid this, the data directory should be placed on a separate file system.
    Changing Docker's base path is not difficult, and can even be done through the plugin's GUI, but then a problem appears: the pools will no longer mount at boot, because Docker creates its directories at the mount point, so it is no longer empty.


    This problem is solved by replacing Docker's /var/lib/docker directory with a symbolic link:


    service docker stop
    zfs create -o com.sun:auto-snapshot=false -p tank0/docker/lib
    rm -rf /var/lib/docker
    ln -s  /tank0/docker/lib /var/lib/docker
    service docker start

    As a result:


    $ ls -l /var/lib/docker
    lrwxrwxrwx 1 root root 17 Apr  7 12:35 /var/lib/docker -> /tank0/docker/lib

    Now we need to create an inter-container network:


    docker network create docker0
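
    You can confirm that the network exists and inspect its subnet:


    docker network ls | grep docker0
    docker network inspect docker0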

    This completes the initial Docker configuration and it is possible to start creating containers.


    Configuring a container with nginx-reverse-proxy


    After Docker is configured, it is possible to start implementing the dispatcher (the reverse proxy).


    You can find all the configuration files here or under spoilers.


    It uses two images: nginx-proxy and letsencrypt-dns .


    I recall that the ports of the OMV interface need to be changed to 10080 and 10443, because the dispatcher will work on ports 80 and 443.


    /tank0/docker/services/nginx-proxy/docker-compose.yml
    version: '2'
    networks:
      docker0:
        external:
          name: docker0
    services:
      nginx-proxy:
        networks:
          - docker0
        restart: always
        image: jwilder/nginx-proxy
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./certs:/etc/nginx/certs:ro
          - ./vhost.d:/etc/nginx/vhost.d
          - ./html:/usr/share/nginx/html
          - /var/run/docker.sock:/tmp/docker.sock:ro
          - ./local-config:/etc/nginx/conf.d
          - ./nginx.tmpl:/app/nginx.tmpl
        labels:
          - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
      letsencrypt-dns:
        image: adferrand/letsencrypt-dns
        volumes:
          - ./certs/letsencrypt:/etc/letsencrypt
        environment:
          - "LETSENCRYPT_USER_MAIL=MAIL@MAIL.COM"
          - "LEXICON_PROVIDER=cloudns"
          - "LEXICON_OPTIONS=--delegated NAS.cloudns.cc"
          - "LEXICON_PROVIDER_OPTIONS=--auth-id=CLOUDNS_ID --auth-password=CLOUDNS_PASSWORD"

    In this config, two containers are configured:


    • nginx-reverse-proxy - the reverse proxy itself.
    • letsencrypt-dns - the Let's Encrypt ACME client.

    The jwilder/nginx-proxy image is used to create and run the nginx-reverse-proxy container.


    docker0 is the inter-container network created earlier; it is not managed by docker-compose.
    nginx-proxy is the reverse proxy service itself. It sits on the docker0 network, while ports 80 and 443 in the ports section are mapped to the same ports on the host (meaning those ports are opened on the host, and traffic arriving on them is redirected to the ports the proxy listens on in the docker0 network).
    The restart: always parameter means the service is started again after a reboot.


    Volumes:


    • The external certs directory is mapped to /etc/nginx/certs; it holds the certificates, including those obtained from Let's Encrypt. It is the directory shared between the proxy container and the ACME client container.
    • ./vhost.d:/etc/nginx/vhost.d - configuration of individual virtual hosts. Not used for now.
    • ./html:/usr/share/nginx/html - static content. Nothing needs to be configured there.
    • /var/run/docker.sock is mapped to /tmp/docker.sock - the socket for communicating with the Docker daemon on the host. It is required for docker-gen to work inside the original image.
    • ./local-config, mapped to /etc/nginx/conf.d - additional nginx configuration files, required for tuning the parameters described below.
    • ./nginx.tmpl, mapped to /app/nginx.tmpl - the template from which docker-gen will generate the nginx configuration file.

    The letsencrypt-dns container is created from the adferrand/letsencrypt-dns image. It includes the ACME client mentioned above and Lexicon, for communicating with the DNS zone provider.


    The shared directory certs/letsencrypt is mapped to /etc/letsencrypt inside the container.


    For this to work, you need to set up a few more environment variables inside the container:


    • LETSENCRYPT_USER_MAIL=MAIL@MAIL.COM - the Let's Encrypt user mail. It is better to specify a real address here, as it will receive various notifications.
    • LEXICON_PROVIDER=cloudns - the provider for Lexicon. In my case, cloudns.
    • LEXICON_PROVIDER_OPTIONS=--auth-id=CLOUDNS_ID --auth-password=CLOUDNS_PASSWORD --delegated=NAS.cloudns.cc - CLOUDNS_ID is underlined in red in the last screenshot of the ClouDNS configuration section, and CLOUDNS_PASSWORD is the password you set for the API. NAS.cloudns.cc, where NAS is the name of your DNS zone, must be passed explicitly, because by default only the cloudns.cc part of the domain would be transmitted, while the ClouDNS API requires the zone to be specified in the request.

    After this configuration, there will be two independently working containers: the proxy and the agent that obtains certificates.
    Meanwhile, the proxy looks for certificates in the directories specified in its config, not in the directory structure the Let's Encrypt agent creates:


    $ ls ./certs/letsencrypt/
    accounts  archive  csr  domains.conf  keys  live  renewal  renewal-hooks

    To make the proxy see the certificates obtained this way, the template needs a small modification.


    /tank0/docker/services/nginx-proxy/nginx.tmpl
    {{ $CurrentContainer := where $ "ID" .Docker.CurrentContainerID | first }}
    {{ define "upstream" }}
        {{ if .Address }}
            {{/* If we got the containers from swarm and this container's port is published to host, use host IP:PORT */}}
            {{ if and .Container.Node.ID .Address.HostPort }}
                # {{ .Container.Node.Name }}/{{ .Container.Name }}
                server {{ .Container.Node.Address.IP }}:{{ .Address.HostPort }};
            {{/* If there is no swarm node or the port is not published on host, use container's IP:PORT */}}
            {{ else if .Network }}
                # {{ .Container.Name }}
                server {{ .Network.IP }}:{{ .Address.Port }};
            {{ end }}
        {{ else if .Network }}
            # {{ .Container.Name }}
            {{ if .Network.IP }}
                server {{ .Network.IP }} down;
            {{ else }}
                server 127.0.0.1 down;
            {{ end }}
        {{ end }}
    {{ end }}
    # If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
    # scheme used to connect to this server
    map $http_x_forwarded_proto $proxy_x_forwarded_proto {
      default $http_x_forwarded_proto;
      ''      $scheme;
    }
    # If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
    # server port the client connected to
    map $http_x_forwarded_port $proxy_x_forwarded_port {
      default $http_x_forwarded_port;
      ''      $server_port;
    }
    # If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
    # Connection header that may have been passed to this server
    map $http_upgrade $proxy_connection {
      default upgrade;
      '' close;
    }
    # Apply fix for very long server names
    server_names_hash_bucket_size 128;
    # Default dhparam
    {{ if (exists "/etc/nginx/dhparam/dhparam.pem") }}
    ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
    {{ end }}
    # Set appropriate X-Forwarded-Ssl header
    map $scheme $proxy_x_forwarded_ssl {
      default off;
      https on;
    }
    gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    log_format vhost '$host $remote_addr - $remote_user [$time_local] ''"$request" $status $body_bytes_sent ''"$http_referer" "$http_user_agent"';
    access_log off;
    {{ if $.Env.RESOLVERS }}
    resolver {{ $.Env.RESOLVERS }};
    {{ end }}
    {{ if (exists "/etc/nginx/proxy.conf") }}
    include /etc/nginx/proxy.conf;
    {{ else }}
    # HTTP 1.1 support
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_set_header Host $http_host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $proxy_connection;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
    proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
    proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
    # Mitigate httpoxy attack (see README for details)
    proxy_set_header Proxy "";
    {{ end }}
    {{ $enable_ipv6 := eq (or ($.Env.ENABLE_IPV6) "") "true" }}
    server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 80;
        {{ if $enable_ipv6 }}
        listen [::]:80;
        {{ end }}
        access_log /var/log/nginx/access.log vhost;
        return 503;
    }
    {{ if (and (exists "/etc/nginx/certs/default.crt") (exists "/etc/nginx/certs/default.key")) }}
    server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 443 ssl http2;
        {{ if $enable_ipv6 }}
        listen [::]:443 ssl http2;
        {{ end }}
        access_log /var/log/nginx/access.log vhost;
        return 503;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/default.crt;
        ssl_certificate_key /etc/nginx/certs/default.key;
    }
    {{ end }}
    {{ range $host, $containers := groupByMulti $ "Env.VIRTUAL_HOST" "," }}
    {{ $host := trim $host }}
    {{ $is_regexp := hasPrefix "~" $host }}
    {{ $upstream_name := when $is_regexp (sha1 $host) $host }}
    # {{ $host }}
    upstream {{ $upstream_name }} {
    {{ range $container := $containers }}
        {{ $addrLen := len $container.Addresses }}
        {{ range $knownNetwork := $CurrentContainer.Networks }}
            {{ range $containerNetwork := $container.Networks }}
                {{ if (and (ne $containerNetwork.Name "ingress") (or (eq $knownNetwork.Name $containerNetwork.Name) (eq $knownNetwork.Name "host"))) }}
                    ## Can be connected with "{{ $containerNetwork.Name }}" network
                    {{/* If only 1 port exposed, use that */}}
                    {{ if eq $addrLen 1 }}
                        {{ $address := index $container.Addresses 0 }}
                        {{ template "upstream" (dict "Container" $container "Address" $address "Network" $containerNetwork) }}
                    {{/* If more than one port exposed, use the one matching VIRTUAL_PORT env var, falling back to standard web port 80 */}}
                    {{ else }}
                        {{ $port := coalesce $container.Env.VIRTUAL_PORT "80" }}
                        {{ $address := where $container.Addresses "Port" $port | first }}
                        {{ template "upstream" (dict "Container" $container "Address" $address "Network" $containerNetwork) }}
                    {{ end }}
                {{ else }}
                    # Cannot connect to network of this container
                    server 127.0.0.1 down;
                {{ end }}
            {{ end }}
        {{ end }}
    {{ end }}
    }
    {{ $default_host := or ($.Env.DEFAULT_HOST) "" }}
    {{ $default_server := index (dict $host "" $default_host "default_server") $host }}
    {{/* Get the VIRTUAL_PROTO defined by containers w/ the same vhost, falling back to "http" */}}
    {{ $proto := trim (or (first (groupByKeys $containers "Env.VIRTUAL_PROTO")) "http") }}
    {{/* Get the NETWORK_ACCESS defined by containers w/ the same vhost, falling back to "external" */}}
    {{ $network_tag := or (first (groupByKeys $containers "Env.NETWORK_ACCESS")) "external" }}
    {{/* Get the HTTPS_METHOD defined by containers w/ the same vhost, falling back to "redirect" */}}
    {{ $https_method := or (first (groupByKeys $containers "Env.HTTPS_METHOD")) "redirect" }}
    {{/* Get the SSL_POLICY defined by containers w/ the same vhost, falling back to "Mozilla-Intermediate" */}}
    {{ $ssl_policy := or (first (groupByKeys $containers "Env.SSL_POLICY")) "Mozilla-Intermediate" }}
    {{/* Get the HSTS defined by containers w/ the same vhost, falling back to "max-age=31536000" */}}
    {{ $hsts := or (first (groupByKeys $containers "Env.HSTS")) "max-age=31536000" }}
    {{/* Get the VIRTUAL_ROOT By containers w/ use fastcgi root */}}
    {{ $vhost_root := or (first (groupByKeys $containers "Env.VIRTUAL_ROOT")) "/var/www/public" }}
    {{/* Get the first cert name defined by containers w/ the same vhost */}}
    {{ $certName := (first (groupByKeys $containers "Env.CERT_NAME")) }}
    {{/* Get the best matching cert  by name for the vhost. */}}
    {{ $vhostCert := (closest (dir "/etc/nginx/certs") (printf "%s.crt" $host))}}
    {{/* vhostCert is actually a filename so remove any suffixes since they are added later */}}
    {{ $vhostCert := trimSuffix ".crt" $vhostCert }}
    {{ $vhostCert := trimSuffix ".key" $vhostCert }}
    {{/* Use the cert specified on the container or fallback to the best vhost match */}}
    {{ $cert := (coalesce $certName $vhostCert) }}
    {{ $is_https := (and (ne $https_method "nohttps") (ne $cert "") (or (and (exists (printf "/etc/nginx/certs/letsencrypt/live/%s/fullchain.pem" $cert)) (exists (printf "/etc/nginx/certs/letsencrypt/live/%s/privkey.pem" $cert))) (and (exists (printf "/etc/nginx/certs/%s.crt" $cert)) (exists (printf "/etc/nginx/certs/%s.key" $cert)))) ) }}
    {{ if $is_https }}
    {{ if eq $https_method "redirect" }}
    server {
        server_name {{ $host }};
        listen 80 {{ $default_server }};
        {{ if $enable_ipv6 }}
        listen [::]:80 {{ $default_server }};
        {{ end }}
        access_log /var/log/nginx/access.log vhost;
        return 301 https://$host$request_uri;
    }
    {{ end }}
    server {
        server_name {{ $host }};
        listen 443 ssl http2 {{ $default_server }};
        {{ if $enable_ipv6 }}
        listen [::]:443 ssl http2 {{ $default_server }};
        {{ end }}
        access_log /var/log/nginx/access.log vhost;
        {{ if eq $network_tag "internal" }}
        # Only allow traffic from internal clients
        include /etc/nginx/network_internal.conf;
        {{ end }}
        {{ if eq $ssl_policy "Mozilla-Modern" }}
        ssl_protocols TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
        {{ else if eq $ssl_policy "Mozilla-Intermediate" }}
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
        {{ else if eq $ssl_policy "Mozilla-Old" }}
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:DES-CBC3-SHA:HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP';
        {{ else if eq $ssl_policy "AWS-TLS-1-2-2017-01" }}
        ssl_protocols TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES128-SHA256:AES256-GCM-SHA384:AES256-SHA256';
        {{ else if eq $ssl_policy "AWS-TLS-1-1-2017-01" }}
        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA';
        {{ else if eq $ssl_policy "AWS-2016-08" }}
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA';
        {{ else if eq $ssl_policy "AWS-2015-05" }}
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DES-CBC3-SHA';
        {{ else if eq $ssl_policy "AWS-2015-03" }}
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA';
        {{ else if eq $ssl_policy "AWS-2015-02" }}
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA';
        {{ end }}
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
            {{ if (and (exists (printf "/etc/nginx/certs/letsencrypt/live/%s/fullchain.pem" $cert)) (exists (printf "/etc/nginx/certs/letsencrypt/live/%s/privkey.pem" $cert))) }}
        ssl_certificate /etc/nginx/certs/letsencrypt/live/{{ (printf "%s/fullchain.pem" $cert) }};
        ssl_certificate_key /etc/nginx/certs/letsencrypt/live/{{ (printf "%s/privkey.pem" $cert) }};
        {{ else if (and (exists (printf "/etc/nginx/certs/%s.crt" $cert)) (exists (printf "/etc/nginx/certs/%s.key" $cert))) }}
        ssl_certificate /etc/nginx/certs/{{ (printf "%s.crt" $cert) }};
        ssl_certificate_key /etc/nginx/certs/{{ (printf "%s.key" $cert) }};
        {{ end }}
        {{ if (exists (printf "/etc/nginx/certs/%s.dhparam.pem" $cert)) }}
        ssl_dhparam {{ printf "/etc/nginx/certs/%s.dhparam.pem" $cert }};
        {{ end }}
        {{ if (exists (printf "/etc/nginx/certs/%s.chain.pem" $cert)) }}
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate {{ printf "/etc/nginx/certs/%s.chain.pem" $cert }};
        {{ end }}
        {{ if (and (ne $https_method "noredirect") (ne $hsts "off")) }}
        add_header Strict-Transport-Security "{{ trim $hsts }}" always;
        {{ end }}
        {{ if (exists (printf "/etc/nginx/vhost.d/%s" $host)) }}
        include {{ printf "/etc/nginx/vhost.d/%s" $host }};
        {{ else if (exists "/etc/nginx/vhost.d/default") }}
        include /etc/nginx/vhost.d/default;
        {{ end }}
        location / {
            {{ if eq $proto "uwsgi" }}
            include uwsgi_params;
            uwsgi_pass {{ trim $proto }}://{{ trim $upstream_name }};
            {{ else if eq $proto "fastcgi" }}
            root   {{ trim $vhost_root }};
            include fastcgi.conf;
            fastcgi_pass {{ trim $upstream_name }};
            {{ else }}
            proxy_pass {{ trim $proto }}://{{ trim $upstream_name }};
            {{ end }}
            {{ if (exists (printf "/etc/nginx/htpasswd/%s" $host)) }}
            auth_basic  "Restricted {{ $host }}";
            auth_basic_user_file    {{ (printf "/etc/nginx/htpasswd/%s" $host) }};
            {{ end }}
            {{ if (exists (printf "/etc/nginx/vhost.d/%s_location" $host)) }}
            include {{ printf "/etc/nginx/vhost.d/%s_location" $host}};
            {{ else if (exists "/etc/nginx/vhost.d/default_location") }}
            include /etc/nginx/vhost.d/default_location;
            {{ end }}
        }
    }
    {{ end }}
    {{ if or (not $is_https) (eq $https_method "noredirect") }}
    server {
        server_name {{ $host }};
        listen 80 {{ $default_server }};
        {{ if $enable_ipv6 }}
        listen [::]:80 {{ $default_server }};
        {{ end }}
        access_log /var/log/nginx/access.log vhost;
        {{ if eq $network_tag "internal" }}
        # Only allow traffic from internal clients
        include /etc/nginx/network_internal.conf;
        {{ end }}
        {{ if (exists (printf "/etc/nginx/vhost.d/%s" $host)) }}
        include {{ printf "/etc/nginx/vhost.d/%s" $host }};
        {{ else if (exists "/etc/nginx/vhost.d/default") }}
        include /etc/nginx/vhost.d/default;
        {{ end }}
        location / {
            {{ if eq $proto "uwsgi" }}
            include uwsgi_params;
            uwsgi_pass {{ trim $proto }}://{{ trim $upstream_name }};
            {{ else if eq $proto "fastcgi" }}
            root   {{ trim $vhost_root }};
            include fastcgi.conf;
            fastcgi_pass {{ trim $upstream_name }};
            {{ else }}
            proxy_pass {{ trim $proto }}://{{ trim $upstream_name }};
            {{ end }}
            {{ if (exists (printf "/etc/nginx/htpasswd/%s" $host)) }}
            auth_basic  "Restricted {{ $host }}";
            auth_basic_user_file    {{ (printf "/etc/nginx/htpasswd/%s" $host) }};
            {{ end }}
            {{ if (exists (printf "/etc/nginx/vhost.d/%s_location" $host)) }}
            include {{ printf "/etc/nginx/vhost.d/%s_location" $host}};
            {{ else if (exists "/etc/nginx/vhost.d/default_location") }}
            include /etc/nginx/vhost.d/default_location;
            {{ end }}
        }
    }
    {{ if (and (not $is_https) (exists "/etc/nginx/certs/default.crt") (exists "/etc/nginx/certs/default.key")) }}
    server {
        server_name {{ $host }};
        listen 443 ssl http2 {{ $default_server }};
        {{ if $enable_ipv6 }}
        listen [::]:443 ssl http2 {{ $default_server }};
        {{ end }}
        access_log /var/log/nginx/access.log vhost;
        return 500;
        ssl_certificate /etc/nginx/certs/default.crt;
        ssl_certificate_key /etc/nginx/certs/default.key;
    }
    {{ end }}
    {{ end }}
    {{ end }}

    As you can see, by default nginx looks for certificates of the form /etc/nginx/certs/%s.crt and /etc/nginx/certs/%s.key, where %s is the certificate name (by default the host name, though this can be changed through variables).


    The agent stores the certificates in the directory structure /etc/nginx/certs/letsencrypt/live/%s/{fullchain.pem, privkey.pem}, and therefore in several places of the template the conditions for such certificate names are added:


    {{
    $is_https :=
    (and
      (ne $https_method "nohttps")
      (ne $cert "")
      (or
        (and
          (exists (printf "/etc/nginx/certs/letsencrypt/live/%s/fullchain.pem" $cert))
          (exists (printf "/etc/nginx/certs/letsencrypt/live/%s/privkey.pem" $cert))
        )
        (and
          (exists (printf "/etc/nginx/certs/%s.crt" $cert))
          (exists (printf "/etc/nginx/certs/%s.key" $cert))
        )
      )
    )
    }}

    Now it remains to tell the agent, in the domains.conf file, which domains to issue a certificate for.


    /tank0/docker/services/nginx-proxy/certs/letsencrypt/domains.conf
    *.NAS.cloudns.cc NAS.cloudns.cc

    One more small nuance: so that you can later upload files of a decent size to the cloud and the proxy does not cut them off, set its client_max_body_size parameter to at least 20 gigabytes, as shown below.


    /tank0/docker/services/nginx-proxy/local-config/max_upload_size.conf
    client_max_body_size 20G;

    Setup is complete, it's time to start the container:


    docker-compose up

    Check that everything has downloaded and started, then press Ctrl+C and run the containers detached from the console:


    docker-compose up -d
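
    To confirm that both services stay up, and to watch the certificate issuance, a quick check (the service name comes from the compose file above):


    docker-compose ps
    docker-compose logs -f letsencrypt-dns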

    Setting up a container with a test server


    The test server is a minimal nginx that should display the welcome page. It should be easy to start and stop, and its container quick to recreate.
    It will be the first and, for now, the only service running as part of the NAS.


    Configuration files are here .


    Here is its docker-compose file:


    /tank0/docker/services/test_nginx/docker-compose.yml
    version: '2'
    networks:
      docker0:
        external:
          name: docker0
    services:
      nginx-local:
        restart: always
        image: nginx:alpine
        expose:
          - 80
          - 443
        environment:
          - "VIRTUAL_HOST=test.NAS.cloudns.cc"
          - "VIRTUAL_PROTO=http"
          - "VIRTUAL_PORT=80"
          - CERT_NAME=NAS.cloudns.cc
        networks:
          - docker0

    Each container with a service must specify the following parameters:


    • docker0 - the external network, declared in the networks section at the top of the file.
    • expose - the ports on which the container listens inside its network. Typically port 80 for HTTP and 443 for HTTPS.
    • VIRTUAL_HOST=test.NAS.cloudns.cc - the virtual host name by which nginx-reverse-proxy will route requests to this container.
    • VIRTUAL_PROTO=http - the protocol nginx-reverse-proxy uses to talk to this service. If the service has no certificate, this is HTTP.
    • VIRTUAL_PORT=80 - the port nginx-reverse-proxy will connect to.
    • CERT_NAME=NAS.cloudns.cc - the name of the external certificate. Here all services share one certificate, so the name is the same everywhere. NAS is the DNS zone name.
    • networks - every frontend that communicates with nginx-reverse-proxy must list the docker0 network in this section.

    The container is configured; now it needs to be brought up. After running docker-compose up, go to the address test.NAS.cloudns.cc.


    The following should be displayed on the console:


    $ docker-compose up
    Creating testnginx_nginx-local_1
    Attaching to testnginx_nginx-local_1
    nginx-local_1  | 172.22.0.5 - - [29/Jul/2018:15:32:02 +0000] "GET / HTTP/1.1" 200 612 "-""Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537 (KHTML, like Gecko) Chrome/67.0 Safari/537""192.168.2.3"
    nginx-local_1  | 2018/07/29 15:32:02 [error] 8#8: *2 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 172.22.0.5, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "test.NAS.cloudns.cc", referrer: "https://test.NAS.cloudns.cc/"
    nginx-local_1  | 172.22.0.5 - - [29/Jul/2018:15:32:02 +0000] "GET /favicon.ico HTTP/1.1" 404 572 "https://test.NAS.cloudns.cc/""Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537 (KHTML, like Gecko) Chrome/67.0 Safari/537""192.168.2.3"

    And the browser will show the following page:


    Running Nginx


    If, as a result, you have a page, as in the screenshot above, I can congratulate you: everything is set up and works correctly.


    This container is no longer needed: stop it with Ctrl+C and then run docker-compose down.


    Configuring the container with the local proxy


    After setting up the proxy, it is worth bringing up a container with a default nginx server that proxies requests for the hosts nas, omv, and the like through the external network to ports 10080 and 10443 of the host OS.


    Configuration files are here .


    /tank0/docker/services/nginx-local/docker-compose.yml
    version: '2'
    networks:
      docker0:
        external:
          name: docker0
    services:
      nginx-local:
        restart: always
        image: nginx:alpine
        expose:
          - 80
          - 443
        environment:
          - "VIRTUAL_HOST=NAS.cloudns.cc,nas,nas.*,www.*,omv.*,nas-controller.nas"
          - "VIRTUAL_PROTO=http"
          - "VIRTUAL_PORT=80"
          - CERT_NAME=NAS.cloudns.cc
        volumes:
          - ./local-config:/etc/nginx/conf.d
        networks:
          - docker0

    With the docker-compose configuration, everything should be clear by now, so I will not dwell on it.
    The only thing worth noting is that one of the domains is NAS.cloudns.cc. This is done so that when the NAS is accessed by the bare DNS zone name, the request is forwarded to the host.


    /tank0/docker/services/nginx-local/local-config/default.conf
    # If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
    # scheme used to connect to this server
    map $http_x_forwarded_proto $proxy_x_forwarded_proto {
      default $http_x_forwarded_proto;
      ''      $scheme;
    }
    # If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
    # server port the client connected to
    map $http_x_forwarded_port $proxy_x_forwarded_port {
      default $http_x_forwarded_port;
      ''      $server_port;
    }
    # Set appropriate X-Forwarded-Ssl header
    map $scheme $proxy_x_forwarded_ssl {
      default off;
      https on;
    }
    access_log on;
    error_log on;
    # HTTP 1.1 support
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_set_header Host $http_host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
    proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
    proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
    # Mitigate httpoxy attack (see README for details)
    proxy_set_header Proxy "";
    server {
            server_name _; # This is just an invalid value which will never trigger on a real hostname.
            listen 80;
            return 503;
    }
    server {
            server_name www.* nas.* omv.* "";
            listen 80;
            location / {
                    proxy_pass https://172.21.0.1:10443/;
            }
    }
    # nas-controller
    server {
            server_name nas-controller.nas;
            listen 80;
            location / {
                    proxy_pass https://nas-controller/;
            }
    }

    • 172.21.0.1 - the host's address on the Docker network. Requests are always redirected to the HTTPS port, because a certificate was generated earlier and OMV works over HTTPS; let it stay that way even for internal communication.
    • https://nas-controller/ - in theory, the interface on which IPMI works: when nas-controller.nas is requested, the request is redirected to the external address of nas-controller. Not particularly useful. A quick check of the local proxy follows below.
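
    A quick way to check the local proxy from a LAN machine (assuming the DNS setup described earlier; -k is needed while the certificate chain is not trusted by curl):


    curl -kIL http://nas/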

    Install and configure LDAP


    Configure LDAP server


    An LDAP server is a central component of user management.
    It also runs inside a Docker container, in which, besides the server itself, web interfaces for administration and password changes are started.


    Configuration files and LDIF files are located here.


    /tank0/docker/services/ldap/docker-compose.yml
    version: "2"
    networks:
      ldap:
      docker0:
        external:
          name: docker0
    services:
      open-ldap:
        image: "osixia/openldap:1.2.0"
        hostname: "open-ldap"
        restart: always
        environment:
          - "LDAP_ORGANISATION=NAS"
          - "LDAP_DOMAIN=nas.nas"
          - "LDAP_ADMIN_PASSWORD=ADMIN_PASSWORD"
          - "LDAP_CONFIG_PASSWORD=CONFIG_PASSWORD"
          - "LDAP_TLS=true"
          - "LDAP_TLS_ENFORCE=false"
          - "LDAP_TLS_CRT_FILENAME=ldap_server.crt"
          - "LDAP_TLS_KEY_FILENAME=ldap_server.key"
          - "LDAP_TLS_CA_CRT_FILENAME=ldap_server.crt"
        volumes:
          - ./certs:/container/service/slapd/assets/certs
          - ./ldap_data/var/lib:/var/lib/ldap
          - ./ldap_data/etc/ldap/slapd.d:/etc/ldap/slapd.d
        networks:
          - ldap
        ports:
          - 172.21.0.1:389:389
          - 172.21.0.1:636:636
      phpldapadmin:
        image: "osixia/phpldapadmin:0.7.1"
        hostname: "nas.nas"
        restart: always
        networks:
          - ldap
          - docker0
        expose:
          - 443
        links:
          - open-ldap:open-ldap-server
        volumes:
          - ./certs:/container/service/phpldapadmin/assets/apache2/certs
        environment:
          - VIRTUAL_HOST=ldap.*
          - VIRTUAL_PORT=443
          - VIRTUAL_PROTO=https
          - CERT_NAME=NAS.cloudns.cc
          - "PHPLDAPADMIN_LDAP_HOSTS=open-ldap-server"
          #- "PHPLDAPADMIN_HTTPS=false"
          - "PHPLDAPADMIN_HTTPS_CRT_FILENAME=certs/ldap_server.crt"
          - "PHPLDAPADMIN_HTTPS_KEY_FILENAME=private/ldap_server.key"
          - "PHPLDAPADMIN_HTTPS_CA_CRT_FILENAME=certs/ldap_server.crt"
          - "PHPLDAPADMIN_LDAP_CLIENT_TLS_REQCERT=allow"
      ldap-ssp:
        image: openfrontier/ldap-ssp:https
        volumes:
          #- ./ssp/mods-enabled/ssl.conf:/etc/apache2/mods-enabled/ssl.conf
          - /etc/ssl/certs/ssl-cert-snakeoil.pem:/etc/ssl/certs/ssl-cert-snakeoil.pem
          - /etc/ssl/private/ssl-cert-snakeoil.key:/etc/ssl/private/ssl-cert-snakeoil.key
        restart: always
        networks:
          - ldap
          - docker0
        expose:
          - 80
        links:
          - open-ldap:open-ldap-server
        environment:
          - VIRTUAL_HOST=ssp.*
          - VIRTUAL_PORT=80
          - VIRTUAL_PROTO=http
          - CERT_NAME=NAS.cloudns.cc
          - "LDAP_URL=ldap://open-ldap-server:389"
          - "LDAP_BINDDN=cn=admin,dc=nas,dc=nas"
          - "LDAP_BINDPW=ADMIN_PASSWORD"
          - "LDAP_BASE=ou=users,dc=nas,dc=nas"
          - "MAIL_FROM=admin@nas.nas"
          - "PWD_MIN_LENGTH=8"
          - "PWD_MIN_LOWER=3"
          - "PWD_MIN_DIGIT=2"
          - "SMTP_HOST="
          - "SMTP_USER="
          - "SMTP_PASS="

    The config describes three services:


    • open-ldap - the LDAP server.
    • phpldapadmin - WEB-interface for its administration. Through it you can add and delete users, groups and so on.
    • ldap-ssp - WEB-interface for changing passwords by users.

    The LDAP server requires a few parameters, which are set via environment variables:


    • LDAP_ORGANISATION=NAS - the name of the organization. May be arbitrary.
    • LDAP_DOMAIN=nas.nas - the domain. Also arbitrary; it is best to make it match the NAS's domain name.
    • LDAP_ADMIN_PASSWORD=ADMIN_PASSWORD - the admin password.
    • LDAP_CONFIG_PASSWORD=CONFIG_PASSWORD - the password for the configuration.

    Ideally, a read-only user should also be added, but that can be done later.


    Volumes:


    • /container/service/slapd/assets/certs is mapped to the local directory certs - certificates. Not used for now.
    • ./ldap_data/ - a local directory whose subdirectories are mapped into two directories inside the container. This is where LDAP stores its database.

    The server runs on the internal network ldap, but its ports 389 (plain LDAP) and 636 (LDAP over SSL, not used yet) are forwarded to the host network.
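

    A quick way to make sure these ports are bound only to the internal host address (a simple check, assuming iproute2's ss is available):


    ss -tlnp | grep -E ':(389|636)'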


    PhpLdapAdmin works in two networks: it accesses the LDAP server on the ldap network and opens port 443 on the docker0 network so that nginx-reverse-proxy can reach it.


    Settings:


    • VIRTUAL_HOST=ldap.* - the host to which the nginx-reverse-proxy will map the container.
    • VIRTUAL_PORT=443 - port for nginx-reverse-proxy.
    • VIRTUAL_PROTO=https - protocol for nginx-reverse-proxy.
    • CERT_NAME=NAS.cloudns.cc - certificate name, the same for all.

    The next block of variables is intended for setting up SSL and is not required at this time.


    SSP is available over HTTP and also works in two networks.
    Volumes are not used in this container; the forwarded certificate is left over from old experiments.


    The variables to configure are the password length restrictions and the credentials for accessing the LDAP server.


    • LDAP_URL=ldap://open-ldap-server:389 - the address and port of the LDAP server (see the links section).
    • LDAP_BINDDN=cn=admin,dc=nas,dc=nas - the administrator DN for authentication.
    • LDAP_BINDPW=ADMIN_PASSWORD - the administrator password, which must match the password specified for the open-ldap container.
    • LDAP_BASE=ou=users,dc=nas,dc=nas - the base path under which user records are kept.

    Install LDAP utilities on the host machine and initialize the LDAP directory:


    apt-get install ldap-utils
    ldapadd -x -H ldap://172.21.0.1  -D "cn=admin,dc=nas,dc=nas" -W -f ldifs/inititialize_ldap.ldif
    ldapadd -x -H ldap://172.21.0.1  -D "cn=admin,dc=nas,dc=nas" -W -f ldifs/base.ldif
    ldapadd -x -H ldap://172.21.0.1  -D "cn=admin,cn=config" -W -f ldifs/gitlab_attr.ldif
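
    The exact contents of these LDIF files are in the repository; roughly, the first two create the OUs for users and groups and a shared group, something like this (a sketch, not the verbatim files):


    dn: ou=users,dc=nas,dc=nas
    objectClass: organizationalUnit
    ou: users

    dn: ou=groups,dc=nas,dc=nas
    objectClass: organizationalUnit
    ou: groups

    dn: cn=ldap_users,ou=groups,dc=nas,dc=nas
    objectClass: posixGroup
    cn: ldap_users
    gidNumber: 500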

    gitlab_attr.ldif adds an attribute by which Gitlab (more on it later) will find users.
    After that, you can run the following command as a check.


    LDAP server health check
    $ ldapsearch -x -H ldap://172.21.0.1 -b dc=nas,dc=nas -D "cn=admin,dc=nas,dc=nas" -W
    Enter LDAP Password: 
    # extended LDIF
    #
    # LDAPv3
    # base <dc=nas,dc=nas> with scope subtree
    # filter: (objectclass=*)
    # requesting: ALL
    #

    # nas.nas
    dn: dc=nas,dc=nas
    objectClass: top
    objectClass: dcObject
    objectClass: organization
    o: NAS
    dc: nas
    # admin, nas.nas
    dn: cn=admin,dc=nas,dc=nas
    objectClass: simpleSecurityObject
    objectClass: organizationalRole
    cn: admin
    description: LDAP administrator
    ...
    # ldap_users, groups, nas.nas
    dn: cn=ldap_users,ou=groups,dc=nas,dc=nas
    cn: ldap_users
    gidNumber: 500
    objectClass: posixGroup
    objectClass: top
    # search result
    search: 2
    result: 0 Success
    # numResponses: 12
    # numEntries: 11

    This completes the LDAP server setup. You can manage the server through the WEB-interface.
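

    Besides phpldapadmin, users can also be created from the console with the same ldap-utils. A hypothetical example consistent with the base and group above (the uid, names and uidNumber are made up):


    cat > user1.ldif <<EOF
    dn: uid=user1,ou=users,dc=nas,dc=nas
    objectClass: inetOrgPerson
    objectClass: posixAccount
    uid: user1
    cn: User One
    sn: One
    uidNumber: 10000
    gidNumber: 500
    homeDirectory: /home/user1
    loginShell: /bin/bash
    EOF
    ldapadd -x -H ldap://172.21.0.1 -D "cn=admin,dc=nas,dc=nas" -W -f user1.ldif
    # set the user's password interactively
    ldappasswd -x -H ldap://172.21.0.1 -D "cn=admin,dc=nas,dc=nas" -W -S "uid=user1,ou=users,dc=nas,dc=nas"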


    Set up OMV for LDAP login


    If the LDAP server is configured and working, setting up OMV to work with it is very simple: specify the host, the port, the credentials for authorization, the root directory for the user search, and the attribute used to determine that a found entry is a user account.
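

    Since the screenshot may be hard to read, these values follow from the configuration above (the exact field names depend on the plugin version, and the filter attribute is my assumption, a typical choice for posix accounts):


    • Host: 172.21.0.1
    • Port: 389
    • Bind DN: cn=admin,dc=nas,dc=nas
    • Bind password: ADMIN_PASSWORD
    • Base DN for the user search: ou=users,dc=nas,dc=nas
    • User account attribute/filter: e.g. objectClass=posixAccount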


    The LDAP plugin should already be installed.


    Everything is shown in the screenshot:


    Configure OMV to work with LDAP


    Interaction with the power source


    First, configure the UPS according to the instructions that come with it, and connect it to the NAS via USB.
    The plugin for working with a UPS should already be installed.
    Now it only remains to configure NUT via the OMV GUI.
    Go to the "Services -> UPS" page, enable the UPS, and in the identifier field enter any string describing the UPS, for example "eaton".


    In the "Driver Configuration Directives" field , enter the following:


    driver = usbhid-ups
    port = auto
    desc = "Eaton 9130 700 VA"
    vendorid = 0463
    pollinterval = 10

    • driver = usbhid-ups - the UPS is connected via USB, so the USB HID driver is used.
    • vendorid is the identifier of the UPS manufacturer, which can be obtained with the lsusb command.
    • pollinterval - the UPS polling interval in seconds.

    The remaining parameters can be found in the documentation.


    In the lsusb output, the line with the UPS is marked with an arrow:


    # lsusb 
    Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    --> Bus 001 Device 003: ID 0463:ffff MGE UPS Systems UPS
    Bus 001 Device 004: ID 046b:ff10 American Megatrends, Inc. Virtual Keyboard and Mouse
    Bus 001 Device 002: ID 046b:ff01 American Megatrends, Inc. 
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    "Shutdown mode" must be set to "low battery".
    It should turn out approximately as shown in the screenshot:


    UPS setup
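

    Whether NUT actually sees the UPS can also be checked from the console (assuming the identifier "eaton" entered above):


    # list configured UPSes, then dump all variables reported by the driver
    upsc -l
    upsc eaton@localhost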


    Turn off the UPS and turn it on again. If notifications have been set up, you will receive an email about the loss of power.
    This is the end of the UPS setup.


    Conclusion


    At this point, the system is installed and configured. Although much was done here from the console, it is not at all necessary to do it that way; I simply find it more convenient.
    But one of the advantages of the system is its flexibility.


    If you prefer to do things differently, OMV allows that.


    Network management is available from the WEB-interface, and in some respects it is more convenient than the console:



    For Docker there is also a very clear WEB-interface:



    In addition, OMV can draw nice graphs.


    Network usage graph:


    Network usage graph


    Memory usage graph:


    Memory usage graph


    CPU usage graph:


    CPU usage graph


    Not yet implemented


    • Setup problems are a separate big topic. It is quite possible that something will not work the first time. In that case, docker-compose exec will help you, along with a careful study of the documentation and source code.
    • The LDAP server could be configured better, especially in terms of security (use SSL everywhere, add a read-only user, and so on).
    • The issues of trusted boot and security have not been touched at all so far; I know about this, but that is a topic for another time.
    • User ValdikSS gave very useful advice: use Dropbear SSH embedded in the initramfs to solve the problem of unattended reboots. That will be another article.

    That's all.
    Godspeed!


