Nomad: an alternative orchestrator on the desktop
- Tutorial
Container orchestration is currently associated primarily with Kubernetes, but it is not the only possible choice. There are alternative orchestration tools, such as Nomad from HashiCorp (well known as the developer of Vagrant).
Mastering orchestration is usually hard because not everyone has access to an infrastructure of several servers with root access. That is why the previous post, Deploy Kubernetes on the desktop in a few minutes with MicroK8s, described setting up a desktop Kubernetes environment using a Django web application as the example. Initially I planned to continue with running databases in the MicroK8s environment, but then I decided it would be more interesting to move on to an equally convenient orchestration tool, Nomad. I will not even attempt a comparison of the different orchestration systems here. The only thing I will note, for those who have doubts, is that Nomad is even easier to install than MicroK8s: all you have to do is copy two executable files (nomad and consul) from the developer's site.
So, as mentioned, first you need to download nomad and consul, which are shipped as ready-made binaries for all major operating systems. Root access is not needed to run them, so everything can be placed in the home directory and run as an unprivileged user.
And, of course, Docker should already be installed if you are going to orchestrate Docker containers. Incidentally, Nomad can run not only containers but also ordinary executables, which we will make use of shortly.
First, create a Nomad configuration file. Nomad can run in server mode, in client mode, or in both modes at once (not recommended in production). For that, the configuration file must contain a server section, a client section, or both:
bind_addr = "127.0.0.1"
data_dir  = "/tmp/nomad"

advertise {
  http = "127.0.0.1"
  rpc  = "127.0.0.1"
  serf = "127.0.0.1"
}

server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled = true
  options = {
    "driver.raw_exec.enable" = "1"
  }
}

consul {
  address = "127.0.0.1:8500"
}
Nomad is started with a command that points to the configuration file just created:
nomad agent --config nomad/nomad.conf
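The agent stays in the foreground, so the checks below are best run in a separate terminal. If everything started correctly, the same machine should be visible both as a server and as a ready client node; a quick way to verify (the exact output will differ on your machine):
nomad server members
nomad node status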
The last section of the configuration specifies the address at which Consul will be available. Consul itself can likewise run in server mode, in client mode, or in both:
consul agent -server -client 127.0.0.1 -advertise 127.0.0.1 -data-dir /tmp/consul -ui -bootstrap
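Again in a separate terminal, you can confirm that the Consul agent is alive before moving on:
consul members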
After executing these commands, you can open http://localhost:4646 in the browser (the Nomad UI) and http://localhost:8500 (the Consul UI).
Next, create a Dockerfile for the Django image. It differs from the Dockerfile in the previous post only in the line that allows access to Django from any host:
FROM python:3-slim
LABEL maintainer="apapacy@gmail.com"
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN django-admin startproject mysite /app \
&& echo "\nALLOWED_HOSTS = ['*']\n" >> /app/mysite/settings.py
EXPOSE 8000
STOPSIGNAL SIGINT
ENTRYPOINT ["python", "manage.py"]
CMD ["runserver", "0.0.0.0:8000"]
Now build the image:
docker build django/ -t apapacy/tut-django:1.0.1
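Before handing the image over to Nomad, it does not hurt to run a quick local smoke test (entirely optional):
docker run --rm -d -p 8000:8000 --name tut-django apapacy/tut-django:1.0.1
curl -I http://localhost:8000/
docker stop tut-django
The curl request should return the Django welcome page with an HTTP 200 status.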
Then create a job that launches the required number of Django container replicas (nomad/django.conf):
job "django-job" {
datacenters = ["dc1"]
type = "service"
group "django-group" {
count = 3
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
ephemeral_disk {
size = 300
}
task "django-job" {
driver = "docker"
config {
image = "apapacy/tut-django:1.0.1"
port_map {
lb = 8000
}
}
resources {
network {
mbits = 10
port "lb" {}
}
}
service {
name = "django"
tags = ["urlprefix-/"]
port = "lb"
check {
name = "alive"
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
}
}
}
}
Most of the parameters in this configuration are self-explanatory. The only line worth decoding is:
port "lb" {}
which means that the port will be assigned dynamically by the environment (it could also be set statically; Nomad exposes the assigned value to the task through environment variables such as NOMAD_PORT_lb). The job is started with the command:
nomad job run nomad/django.conf
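You can also watch the three replicas being allocated from the command line (the allocation IDs will differ on your machine):
nomad job status django-job
nomad alloc status <alloc-id>
The second command shows the details of a single replica, including the dynamic port it was given.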
Now the status of the django-job job can also be seen in the Nomad UI (http://localhost:4646), and the status of the django service, together with the IP address and port of each of its replicas, in the Consul UI (http://localhost:8500). The services are reachable at those addresses, but only inside the Nomad network; they are not accessible from outside.
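The same information that the Consul UI shows can be pulled from its HTTP API, which is convenient for scripting; the addresses and ports in the response will of course be specific to your machine:
curl http://localhost:8500/v1/catalog/service/django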
To publish the services for external access there are several options, for example haproxy, but the simplest is to use yet another tool from HashiCorp (the third one after nomad and consul): fabio. You will not even need to download it yourself: that can be delegated to Nomad, which, as mentioned at the beginning of the post, can run not only Docker containers but also ordinary executables. To do this, create another job (nomad/fabio.conf):
job "fabio-job" {
datacenters = ["dc1"]
type = "system"
update {
stagger = "60s"
max_parallel = 1
}
group "fabio-group" {
count = 1
task "fabio-task" {
driver = "raw_exec"
artifact {
source = "https://github.com/fabiolb/fabio/releases/download/v1.5.4/fabio-1.5.4-go1.9.2-linux_amd64"
}
config {
command = "fabio-1.5.4-go1.9.2-linux_amd64"
}
resources {
cpu = 100 # 500 MHz
memory = 128 # 256MB
network {
mbits = 10
port "lb" {
static = 9999
}
port "admin" {
static = 9998
}
}
}
}
}
}
This job uses the raw_exec driver. Not all drivers are enabled by default, which is why we enabled it in the Nomad configuration earlier:
client {
  enabled = true
  options = {
    "driver.raw_exec.enable" = "1"
  }
}
Note that newer versions of Nomad change the syntax for enabling plugins and drivers, so this part of the configuration will need to be updated in the future.
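For reference, in Nomad 0.9 and later the raw_exec driver is enabled with a plugin stanza instead of the client options used above; roughly like this (check the documentation of your exact version before relying on it):
plugin "raw_exec" {
  config {
    enabled = true
  }
}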
The job is started with the command:
nomad job run nomad/fabio.conf
After that, the fabio UI is available in the browser at http://localhost:9998, and the django service is published at http://localhost:9999.
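A final end-to-end check can be done from the command line; repeating the request a few times lets fabio spread it across the three replicas:
curl -I http://localhost:9999/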
The code for the configurations shown in this post can be found in the repository.
Useful links
1. dvps.blog/minimalnoie-sravnieniie-swarm-kubernetes-mesos-nomad-rancher
apapacy@gmail.com
February 20, 2019