Configuring a Nomad Cluster with Consul and Integrating It with GitLab
- Tutorial
Introduction
Kubernetes has been growing in popularity rapidly: more and more projects are adopting it. I wanted to look at another orchestrator, Nomad: it is a great fit for projects that already use other HashiCorp solutions such as Vault and Consul, and whose infrastructure is not particularly complex. This article provides instructions for installing Nomad, joining two nodes into a cluster, and integrating Nomad with GitLab.

Test environment
A few words about the test environment: it consists of three virtual servers, each with 2 CPUs, 4 GB RAM, and a 50 GB SSD, connected to a shared local network. Their names and IP addresses:
- nomad-livelinux-01 : 172.30.0.5
- nomad-livelinux-02 : 172.30.0.10
- consul-livelinux-01 : 172.30.0.15
Installing Nomad and Consul. Creating a Nomad Cluster
Let's proceed with the basic installation. Although it is straightforward, I will describe it for the sake of completeness: this article grew out of drafts and notes I keep for quick reference when needed.
Before getting hands-on, let's cover a little theory, because at this stage it is important to understand the structure we are building.
We have two Nomad nodes that we want to join into a cluster, and in the future we will also need automatic cluster scaling - for this we need Consul. With this tool, clustering and adding new nodes becomes a very simple task: a newly created Nomad node connects to the local Consul agent and, through it, joins the existing Nomad cluster. So first we will install the Consul server and configure basic HTTP authorization for its web UI (by default it has no authorization and is reachable at an external address), then install the Consul agents on the Nomad servers, and after that simply start Nomad.
Installing HashiCorp tools is very simple: essentially, we move the binary to a bin directory, write the tool's configuration file, and create a systemd service file for it.
Download the Consul archive and unpack the binary in the user's home directory:
root@consul-livelinux-01:~# wget https://releases.hashicorp.com/consul/1.5.0/consul_1.5.0_linux_amd64.zip
root@consul-livelinux-01:~# unzip consul_1.5.0_linux_amd64.zip
root@consul-livelinux-01:~# mv consul /usr/local/bin/
Now we have a ready-to-use consul binary and can proceed with the configuration.
To work with Consul, we need to create a unique key using the keygen command:
root@consul-livelinux-01:~# consul keygen
Let's move on to configuring Consul. Create the /etc/consul.d/ directory with the following structure:
/etc/consul.d/
├── bootstrap
│ └── config.json
The bootstrap directory will contain the config.json configuration file, which holds the Consul settings. Its contents:
{
  "bootstrap": true,
  "server": true,
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "your-key",
  "log_level": "INFO",
  "enable_syslog": true,
  "start_join": ["172.30.0.15"]
}
Let's go over the main directives and what they mean:
- bootstrap : true. Puts this single server into bootstrap mode, allowing it to elect itself as leader. Note that, unlike with bootstrap_expect, we do not specify the exact number of expected server nodes here.
- server : true. Enables server mode. Consul on this virtual machine will be the only server and leader for now; the Nomad VMs will run Consul clients.
- datacenter : dc1. The name of the datacenter used to form the cluster. It must be identical on all clients and servers.
- encrypt : your-key. The gossip encryption key, which must also be unique and match on all clients and servers. It is generated with the consul keygen command.
- start_join . The list of IP addresses to connect to on startup. For now, we leave only our own address here.
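Before starting the agent, the configuration can be checked with Consul's built-in validator (a quick sanity check; the path is the bootstrap directory created above):
root@consul-livelinux-01:~# consul validate /etc/consul.d/bootstrap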
At this point, we can start consul using the command line:
root@consul-livelinux-01:~# /usr/local/bin/consul agent -config-dir /etc/consul.d/bootstrap -ui
This is fine for debugging, but for permanent operation this approach obviously will not do. Let's create a service file to manage Consul through systemd:
root@consul-livelinux-01:~# nano /etc/systemd/system/consul.service
The contents of the consul.service file:
[Unit]
Description=Consul Startup process
After=network.target
[Service]
Type=simple
ExecStart=/bin/bash -c '/usr/local/bin/consul agent -config-dir /etc/consul.d/bootstrap -ui'
TimeoutStartSec=0
[Install]
WantedBy=default.target
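After saving the unit file, reload systemd so it picks up the new service, and optionally enable it to start at boot (an extra convenience step not in the original notes):
root@consul-livelinux-01:~# systemctl daemon-reload
root@consul-livelinux-01:~# systemctl enable consul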
Run Consul through systemctl:
root@consul-livelinux-01:~# systemctl start consul
We check: our service should be started, and by running the consul members command we should see our server:
root@consul-livelinux:/etc/consul.d# consul members
consul-livelinux 172.30.0.15:8301 alive server 1.5.0 2 dc1 <all>
The next step is installing Nginx and setting up proxying and HTTP basic authorization. Install nginx through the package manager and create the configuration file consul.conf in the /etc/nginx/sites-enabled directory with the following contents:
upstream consul-auth {
    server localhost:8500;
}

server {
    server_name consul.domain.name;

    location / {
        proxy_pass http://consul-auth;
        proxy_set_header Host $host;
        auth_basic_user_file /etc/nginx/.htpasswd;
        auth_basic "Password-protected Area";
    }
}
Do not forget to create the .htpasswd file and generate a username and password for it. This is needed so that the web UI is not accessible to everyone who knows our domain name. However, when configuring GitLab we will have to drop this protection - otherwise we will not be able to deploy our application to Nomad. In my project both GitLab and Nomad live only on the private network, so this is not an issue there.
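As an example, the password file can be generated with the htpasswd utility (on Debian/Ubuntu it comes from the apache2-utils package; the username here is just a placeholder):
root@consul-livelinux-01:~# apt-get install -y apache2-utils
root@consul-livelinux-01:~# htpasswd -c /etc/nginx/.htpasswd consul-admin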
On the other two servers, install Consul agents according to the following instructions. Repeat the steps with the binary file:
root@nomad-livelinux-01:~# wget https://releases.hashicorp.com/consul/1.5.0/consul_1.5.0_linux_amd64.zip
root@nomad-livelinux-01:~# unzip consul_1.5.0_linux_amd64.zip
root@nomad-livelinux-01:~# mv consul /usr/local/bin/
By analogy with the previous server, we create a directory for the configuration files /etc/consul.d with the following structure:
/etc/consul.d/
├── client
│ └── config.json
The contents of the config.json file:
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul",
  "log_level": "DEBUG",
  "node_name": "nomad-livelinux-01",
  "server": false,
  "encrypt": "your-private-key",
  "domain": "livelinux",
  "addresses": {
    "dns": "127.0.0.1",
    "https": "0.0.0.0",
    "grpc": "127.0.0.1",
    "http": "127.0.0.1"
  },
  "bind_addr": "172.30.0.5", # the VM's local address
  "start_join": ["172.30.0.15"], # remote address of the Consul server
  "ports": {
    "dns": 53
  }
}
Remove the two inline comments before use (JSON does not allow them), save the changes, and move on to the service file. Its contents:
/etc/systemd/system/consul.service:
[Unit]
Description="HashiCorp Consul - A service mesh solution"
Documentation=https://www.consul.io/
Requires=network-online.target
After=network-online.target
[Service]
User=root
Group=root
ExecStart=/usr/local/bin/consul agent -config-dir=/etc/consul.d/client
ExecReload=/usr/local/bin/consul reload
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
We start Consul on the server. After it starts, the new node should appear in the output of consul members, which means it has successfully joined the cluster as a client. Repeat the same steps on the second server, and after that we can move on to installing and configuring Nomad.
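For reference, the sequence on each client node might look like this; the dig query at the end simply confirms that Consul's DNS interface answers on port 53 for the livelinux domain (dig is assumed to be installed):
root@nomad-livelinux-01:~# systemctl daemon-reload
root@nomad-livelinux-01:~# systemctl start consul
root@nomad-livelinux-01:~# consul members
root@nomad-livelinux-01:~# dig @127.0.0.1 -p 53 consul.service.livelinux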
A more detailed installation of Nomad is described in its official documentation. There are two traditional installation methods: downloading a binary file and compiling from source. I will choose the first method.
Note: the project is developing quickly and new releases come out often, so a newer version may well be available by the time you read this. I therefore recommend checking the current Nomad version before following along and downloading that one.
root@nomad-livelinux-01:~# wget https://releases.hashicorp.com/nomad/0.9.1/nomad_0.9.1_linux_amd64.zip
root@nomad-livelinux-01:~# unzip nomad_0.9.1_linux_amd64.zip
root@nomad-livelinux-01:~# mv nomad /usr/local/bin/
root@nomad-livelinux-01:~# nomad -autocomplete-install
root@nomad-livelinux-01:~# complete -C /usr/local/bin/nomad nomad
root@nomad-livelinux-01:~# mkdir /etc/nomad.d
After unpacking, we get a Nomad binary weighing about 65 MB, which needs to be moved to /usr/local/bin.
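A quick way to confirm that the binary is on the PATH and see which version was installed:
root@nomad-livelinux-01:~# nomad version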
Create a data directory for Nomad and edit its service file (it most likely does not exist yet):
root@nomad-livelinux-01:~# mkdir --parents /opt/nomad
root@nomad-livelinux-01:~# nano /etc/systemd/system/nomad.service
Insert the following lines there:
[Unit]
Description=Nomad
Documentation=https://nomadproject.io/docs/
Wants=network-online.target
After=network-online.target
[Service]
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/nomad agent -config /etc/nomad.d
KillMode=process
KillSignal=SIGINT
LimitNOFILE=infinity
LimitNPROC=infinity
Restart=on-failure
RestartSec=2
StartLimitBurst=3
StartLimitIntervalSec=10
TasksMax=infinity
[Install]
WantedBy=multi-user.target
However, we are in no hurry to run nomad - we have not yet created its configuration file:
root@nomad-livelinux-01:~# mkdir --parents /etc/nomad.d
root@nomad-livelinux-01:~# chmod 700 /etc/nomad.d
root@nomad-livelinux-01:~# nano /etc/nomad.d/nomad.hcl
root@nomad-livelinux-01:~# nano /etc/nomad.d/server.hcl
The final directory structure will be as follows:
/etc/nomad.d/
├── nomad.hcl
└── server.hcl
The nomad.hcl file should contain the following configuration:
datacenter = "dc1"
data_dir = "/opt/nomad"
The contents of the server.hcl file:
server {
  enabled = true
  bootstrap_expect = 1
}

consul {
  address = "127.0.0.1:8500"
  server_service_name = "nomad"
  client_service_name = "nomad-client"
  auto_advertise = true
  server_auto_join = true
  client_auto_join = true
}

bind_addr = "127.0.0.1"

advertise {
  http = "172.30.0.5"
}

client {
  enabled = true
}
Do not forget to change the configuration file on the second server - there you will need to change the value of the http directive in the advertise block.
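At this point Nomad can be started on both servers. The original notes do not spell this step out, so the commands below are just a sketch of it:
root@nomad-livelinux-01:~# systemctl daemon-reload
root@nomad-livelinux-01:~# systemctl enable nomad
root@nomad-livelinux-01:~# systemctl start nomad
root@nomad-livelinux-01:~# nomad server members
root@nomad-livelinux-01:~# nomad node status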
The last step at this stage is configuring Nginx for proxying and HTTP authorization. The contents of the nomad.conf file:
upstream nomad-auth {
    server 172.30.0.5:4646;
}

server {
    server_name nomad.domain.name;

    location / {
        proxy_pass http://nomad-auth;
        proxy_set_header Host $host;
        auth_basic_user_file /etc/nginx/.htpasswd;
        auth_basic "Password-protected Area";
    }
}
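After both virtual hosts (consul.conf and nomad.conf) are in place, check the configuration and reload nginx on the corresponding servers (standard commands, nothing specific to this setup):
root@nomad-livelinux-01:~# nginx -t
root@nomad-livelinux-01:~# systemctl reload nginx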
Now we can access the web panel via an external network. We connect and go to the servers page:

Image 1. List of servers in the Nomad cluster
Both servers are displayed in the panel; we see the same in the output of the nomad node status command:

Image 2. Output of the nomad node status command
And what about Consul? Let's take a look. Go to the Consul web UI, to the Nodes page:

Image 3. The list of nodes in the Consul cluster
Now we have a working Nomad setup integrated with Consul. In the final stage we get to the most interesting part: configuring delivery of Docker containers from GitLab to Nomad, along with a look at some of its other notable features.
Creating a GitLab Runner
To deploy Docker images to Nomad we will use a separate runner image with the Nomad binary inside (here, by the way, one more nice property of HashiCorp applications is worth noting: each of them is a single binary). Download the binary into the runner's project directory and create the simplest possible Dockerfile for it with the following contents:
FROM alpine:3.9
RUN apk add --update --no-cache libc6-compat gettext
COPY nomad /usr/local/bin/nomad
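The nomad binary that the Dockerfile copies must already be present in the runner's project directory; it can be fetched, for example, like this (the version is assumed to be the same 0.9.1 used on the servers):
$ wget https://releases.hashicorp.com/nomad/0.9.1/nomad_0.9.1_linux_amd64.zip
$ unzip nomad_0.9.1_linux_amd64.zip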
In the same project, create .gitlab-ci.yml:
variables:
  DOCKER_IMAGE: nomad/nomad-deploy
  DOCKER_REGISTRY: registry.domain.name

stages:
  - build

build:
  stage: build
  image: ${DOCKER_REGISTRY}/nomad/alpine:3
  script:
    - tag=${DOCKER_REGISTRY}/${DOCKER_IMAGE}:latest
    - docker build --pull -t ${tag} -f Dockerfile .
    - docker push ${tag}
As a result, we have a Nomad runner image available in the GitLab Registry. Now we can move on to the project repository itself, create a pipeline, and configure the Nomad job.
Project setup
Let's start with the pipeline configuration. The project in this article will be quite primitive: it consists of a single task. The contents of .gitlab-ci.yml are as follows:
variables:
  NOMAD_ADDR: http://nomad.address.service:4646
  DOCKER_REGISTRY: registry.domain.name
  DOCKER_IMAGE: example/project

stages:
  - build
  - deploy

build:
  stage: build
  image: ${DOCKER_REGISTRY}/nomad-runner/alpine:3
  script:
    - tag=${DOCKER_REGISTRY}/${DOCKER_IMAGE}:${CI_COMMIT_SHORT_SHA}
    - docker build --pull -t ${tag} -f Dockerfile .
    - docker push ${tag}

deploy:
  stage: deploy
  image: registry.example.com/nomad/nomad-runner:latest
  script:
    - envsubst '${CI_COMMIT_SHORT_SHA}' < project.nomad > job.nomad
    - cat job.nomad
    - nomad validate job.nomad
    - nomad plan job.nomad || if [ $? -eq 255 ]; then exit 255; else echo "success"; fi
    - nomad run job.nomad
  environment:
    name: production
  allow_failure: false
  when: manual
Here the deployment is triggered manually, but it can be configured to run automatically on changes to the project directory. The pipeline consists of two stages: building the image and deploying it to Nomad. In the first stage we build the Docker image and push it to our Registry; in the second, we launch our job in Nomad. The project.nomad template used in the deploy stage looks like this:
job "monitoring-status" {
datacenters = ["dc1"]
migrate {
max_parallel = 3
health_check = "checks"
min_healthy_time = "15s"
healthy_deadline = "5m"
}
group "zhadan.ltd" {
count = 1
update {
max_parallel = 1
min_healthy_time = "30s"
healthy_deadline = "5m"
progress_deadline = "10m"
auto_revert = true
}
task "service-monitoring" {
driver = "docker"
config {
image = "registry.domain.name/example/project:${CI_COMMIT_SHORT_SHA}"
force_pull = true
auth {
username = "gitlab_user"
password = "gitlab_password"
}
port_map {
http = 8000
}
}
resources {
network {
port "http" {}
}
}
}
}
}
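Since Consul agents are already running on the Nomad clients, the task could additionally register itself in Consul with a health check. Below is a minimal sketch of a service stanza that could be added inside the task block (the service name and the check path are assumptions, not part of the original job):
service {
  name = "service-monitoring"
  port = "http"

  check {
    type = "http"
    path = "/"
    interval = "10s"
    timeout = "2s"
  }
}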
Please note that I use a private Registry, and to successfully pull the Docker image I need to log in to it. The best solution in this case is to store the login and password in Vault and then integrate Vault with Nomad, which Nomad supports natively. But first, let's install the necessary policies for Nomad in Vault itself; they can be downloaded:
# Download the policy and token role
$ curl https://nomadproject.io/data/vault/nomad-server-policy.hcl -O -s -L
$ curl https://nomadproject.io/data/vault/nomad-cluster-role.json -O -s -L
# Write the policy to Vault
$ vault policy write nomad-server nomad-server-policy.hcl
# Create the token role with Vault
$ vault write /auth/token/roles/nomad-cluster @nomad-cluster-role.json
Now, having created the necessary policies, let's enable the Vault integration in the Nomad agent configuration (for example, next to server.hcl):
vault {
  enabled = true
  address = "https://vault.domain.name:8200"
  token = "token"
}
I use token authorization and write the token directly here; alternatively, the token can be passed as an environment variable when starting the nomad agent:
$ VAULT_TOKEN=<token> nomad agent -config /path/to/config
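On the job side, a task that needs secrets from Vault also declares its own vault stanza listing the policies it requires - a minimal sketch, assuming a policy named pipeline-read exists that grants read access to secrets/pipeline-keys:
task "service-monitoring" {
  # ...

  vault {
    policies = ["pipeline-read"]
  }
}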
Now we can use secrets from Vault. The principle is simple: in the Nomad job we render a file that will hold the values of the variables, for example:
template {
  data = <<EOH
{{ with secret "secrets/pipeline-keys" }}
REGISTRY_LOGIN="{{ .Data.REGISTRY_LOGIN }}"
REGISTRY_PASSWORD="{{ .Data.REGISTRY_PASSWORD }}"
{{ end }}
EOH

  destination = "secrets/service-name.env"
  env = true
}
With this simple approach you can set up delivery of containers to the Nomad cluster and keep working with it going forward. I will admit that I sympathize with Nomad to some extent: it is better suited to small projects, where Kubernetes would add extra complexity without realizing its full potential. Nomad is also great for beginners - it is easy to install and configure. However, while testing it on some projects I run into the limitations of its earlier versions: many basic features are simply missing or do not work correctly. Nevertheless, I believe Nomad will continue to develop and will eventually gain all the features it needs.
Posted by Ilya Andreev, edited by Alexei Zhadan and Live Linux Team