
Install HA Master Kubernetes Cluster Using Kubespray
- Tutorial

Kubespray (formerly Kargo) is a set of Ansible roles for installing and configuring the Kubernetes container orchestration system. AWS, GCE, Azure, OpenStack or ordinary virtual machines can serve as the underlying IaaS. It is an open source project with an open development model, so anyone who wishes can influence its life cycle.
Installing Kubernetes with Kubeadm has already been covered on Habré, but that method has significant drawbacks: it still does not support multi-master configurations and is at times not very flexible. Kubespray, although it uses Kubeadm under the hood, already provides high availability for both the masters and etcd at installation time. You can read a comparison with other relevant Kubernetes installation methods at https://github.com/kubernetes-incubator/kubespray/blob/master/docs/comparisons.md
In this article we will create 5 servers running Ubuntu 16.04. In my case, their list is as follows:
192.168.20.10 k8s-m1.me
192.168.20.11 k8s-m2.me
192.168.20.12 k8s-m3.me
192.168.20.13 k8s-s1.me
192.168.20.14 k8s-s2.me
Add them to /etc/hosts on all of these servers, including the local system, or to your DNS server. The firewall and other network restrictions on these hosts must be disabled. In addition, IPv4 forwarding must be enabled, and each host needs unrestricted Internet access to download Docker images.
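As an illustration, here is one way to prepare each host (assuming ufw is the only firewall involved, which is typical for a stock Ubuntu 16.04 install):
$ cat <<EOF | sudo tee -a /etc/hosts
192.168.20.10 k8s-m1.me
192.168.20.11 k8s-m2.me
192.168.20.12 k8s-m3.me
192.168.20.13 k8s-s1.me
192.168.20.14 k8s-s2.me
EOF
$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
$ sudo sysctl --system
$ sudo ufw disable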
Copy the public rsa-key to each server from the list:
$ ssh-copy-id ubuntu@server.me
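For example, to repeat this for every host in the list above in one go:
$ for host in k8s-m1.me k8s-m2.me k8s-m3.me k8s-s1.me k8s-s2.me; do ssh-copy-id ubuntu@$host; done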
Specify the required user and key to connect from the local machine:
$ vim ~/.ssh/config
...
Host *.me
    User ubuntu
    ServerAliveInterval 60
    IdentityFile ~/.ssh/id_rsa
Here ubuntu is the user that will be used to connect to the servers, and id_rsa is the private key. This user must also be able to run sudo commands without a password.
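If the servers do not grant this out of the box, one common way to set it up is a drop-in file in /etc/sudoers.d (many Ubuntu cloud images already ship with an equivalent rule):
$ echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/ubuntu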
Clone the Kubespray repository:
$ git clone https://github.com/kubernetes-incubator/kubespray.git
Next, copy the inventory directory in order to edit its contents:
$ cp -r inventory my_inventory
$ cd my_inventory
As an example, we will use inventory.example:
$ mv inventory.example inventory
$ vim inventory
k8s-m1.me ip=192.168.20.10
k8s-m2.me ip=192.168.20.11
k8s-m3.me ip=192.168.20.12
k8s-s1.me ip=192.168.20.13
k8s-s2.me ip=192.168.20.14
[kube-master]
k8s-m1.me
k8s-m2.me
k8s-m3.me
[etcd]
k8s-m1.me
k8s-m2.me
k8s-m3.me
[kube-node]
k8s-s1.me
k8s-s2.me
[k8s-cluster:children]
kube-node
kube-master
With this inventory we will get an HA installation of Kubernetes: etcd, the store of cluster configuration parameters, will consist of 3 nodes to maintain quorum, and the Kubernetes master services (kube-apiserver, controller-manager, scheduler, etc.) will be replicated three times. Of course, nothing prevents running the etcd service on completely separate nodes.
At this point I would like to say a little more about how HA mode for the masters is implemented. On each Kubernetes worker (in our case, k8s-s*.me), Nginx is installed as a load balancer, with all Kubernetes masters listed in its upstream:
stream {
    upstream kube_apiserver {
        least_conn;
        server kube-master_ip1:6443;
        server kube-master_ip2:6443;
        server kube-master_ip3:6443;
    }

    server {
        listen 127.0.0.1:6443;
        proxy_pass kube_apiserver;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
}
Accordingly, if one of the masters goes down, Nginx will exclude it from the upstream and stop forwarding requests to that server.

This scheme eliminates a single point of failure: if a master crashes, another master takes over its work, and the Nginx responsible for forwarding requests runs on every worker.
At cluster installation time it is possible to disable this internal balancer and take care of everything yourself. This can be, for example, a third-party Nginx or HAProxy. However, do not forget that to ensure high availability they must work in pairs, with a virtual IP migrating between the pair members when necessary. The VIP can be implemented with various technologies such as Keepalived, Heartbeat, Pacemaker, etc.
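Purely as a sketch of the idea, a minimal Keepalived configuration for such a VIP could look like this (the interface name, priority and the 192.168.20.100 address are assumptions for illustration only and are not used elsewhere in this article):
vrrp_instance kube_apiserver_vip {
    state MASTER
    interface ens160
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        192.168.20.100
    }
}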
On a master, kube-apiserver listens on 2 ports at once: local port 8080 without encryption (for services running on the same server) and HTTPS port 6443 for the outside world. The latter, as I have already mentioned, is used for communication with the workers and can be useful if the master's own services (kubelet, kube-proxy, etc.) have to be moved to other hosts.
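Once the cluster is installed, a quick way to see both endpoints in action is to query the standard /healthz path (note that, depending on the cluster's authorization settings, the HTTPS endpoint may answer with an authorization error instead of a health status):
root@k8s-m1:~# curl http://127.0.0.1:8080/healthz
root@k8s-s1:~# curl -k https://127.0.0.1:6443/healthz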
Let's continue building the test cluster. Edit group_vars/all.yml:
$ vim group_vars/all.yml
...
bootstrap_os: ubuntu
...
kubelet_load_modules: true
In addition to Ubuntu 16.04, Kubespray also supports installation on nodes running CoreOS, Debian Jessie and CentOS/RHEL 7, that is, on all major current distributions.
If necessary, you should also look at group_vars/k8s-cluster.yml, where you can set the version of Kubernetes to be installed, select the overlay network plug-in (Calico by default, but other options are available), enable EFK (Elasticsearch/Fluentd/Kibana), Helm, Istio, netchecker, and so on.
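For example, a few of the variables that can be adjusted there (names as used in the Kubespray defaults of this period; double-check them against the revision you checked out):
$ vim group_vars/k8s-cluster.yml
...
kube_version: v1.8.4
kube_network_plugin: calico
helm_enabled: true
efk_enabled: false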
I also recommend looking at roles/kubernetes/preinstall/tasks/verify-settings.yml. It contains the basic checks performed before the Kubernetes installation starts: for example, that there is enough RAM (currently at least 1500 MB for masters and 1000 MB for nodes), that the number of etcd servers is odd to ensure quorum, and so on. In recent Kubespray releases an additional requirement has appeared: swap must be turned off on all cluster nodes.
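Turning swap off on every node is straightforward; for example (the sed line simply comments out swap entries in /etc/fstab so the change survives a reboot):
$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab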
If Ansible is not yet installed on the local system, install it together with the netaddr module:
# pip install ansible
# pip install netaddr
It is important to note that the netaddr module and Ansible must run under the same version of Python.
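A simple way to guarantee that is to install and check both through the same interpreter, for example:
# python -m pip install ansible netaddr
# python -c "import ansible, netaddr; print(ansible.__version__)"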
After that, we can proceed with the installation of the Kubernetes cluster:
$ ansible-playbook -i my_inventory/inventory cluster.yml -b -v
Alternatively, the rsa key and the user for the connection can be passed as arguments, for example:
$ ansible-playbook -u ubuntu -i my_inventory/inventory cluster.yml -b -v --private-key=~/.ssh/id_rsa
Installing a cluster usually takes about 15-20 minutes, but it also depends on your hardware. After that we can check whether everything works correctly; to do so, connect to one of the masters and run the following:
root@k8s-m1:~# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s-m1    Ready     master    28m       v1.8.4+coreos.0
k8s-m2    Ready     master    28m       v1.8.4+coreos.0
k8s-m3    Ready     master    28m       v1.8.4+coreos.0
k8s-s1    Ready     node      28m       v1.8.4+coreos.0
k8s-s2    Ready     node      28m       v1.8.4+coreos.0
root@k8s-m1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   calico-node-2z6jz                       1/1       Running   0          27m
kube-system   calico-node-6d6q6                       1/1       Running   0          27m
kube-system   calico-node-96rgg                       1/1       Running   0          27m
kube-system   calico-node-nld9z                       1/1       Running   0          27m
kube-system   calico-node-pjcjs                       1/1       Running   0          27m
kube-system   kube-apiserver-k8s-m1                   1/1       Running   0          27m
...
kube-system   kube-proxy-k8s-s1                       1/1       Running   0          26m
kube-system   kube-proxy-k8s-s2                       1/1       Running   0          27m
kube-system   kube-scheduler-k8s-m1                   1/1       Running   0          28m
kube-system   kube-scheduler-k8s-m2                   1/1       Running   0          28m
kube-system   kube-scheduler-k8s-m3                   1/1       Running   0          28m
kube-system   kubedns-autoscaler-86c47697df-4p7b8     1/1       Running   0          26m
kube-system   kubernetes-dashboard-85d88b455f-f5dm4   1/1       Running   0          26m
kube-system   nginx-proxy-k8s-s1                      1/1       Running   0          28m
kube-system   nginx-proxy-k8s-s2                      1/1       Running   0          28m
As you can see, the kubernetes-dashboard web panel was installed out of the box. Details on how to work with it can be found at https://github.com/kubernetes/dashboard
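One common way to reach the dashboard from the local machine, assuming a working kubeconfig there, is kubectl proxy; the exact dashboard URL behind the proxy depends on the dashboard version, so consult the project page linked above:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001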
Purely for a basic check, let's create a pod with two containers:
$ vim first-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: first-pod
spec:
  containers:
  - name: sise
    image: mhausenblas/simpleservice:0.5.0
    ports:
    - containerPort: 9876
    resources:
      limits:
        memory: "64Mi"
        cpu: "500m"
  - name: shell
    image: centos:7
    command:
      - "bin/bash"
      - "-c"
      - "sleep 10000"
$ kubectl apply -f first-pod.yaml
pod "first-pod" created
$ kubectl get pods
NAME        READY     STATUS    RESTARTS   AGE
first-pod   2/2       Running   0          16s
$ kubectl exec first-pod -c sise -i -t -- bash
[root@first-pod /]# curl localhost:9876/info
{"host": "localhost:9876", "version": "0.5.0", "from": "127.0.0.1"}
This is a test application written in Python from http://kubernetesbyexample.com/.
Oddly enough, Docker 17.03.1-ce was installed as the container runtime, although the official documentation says it is best to use version 1.13. The Docker version to be installed is defined in roles/docker/defaults/main.yml and, in theory, you can override it in the configuration files mentioned above or pass the value as an argument.
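For example, assuming docker_version is the variable defined in roles/docker/defaults/main.yml (verify the name against your revision), it could be overridden right on the command line:
$ ansible-playbook -i my_inventory/inventory cluster.yml -b -v -e docker_version=17.03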
The Kubespray Ansible playbooks also support scaling the cluster. To do this, update the inventory by adding a new node (worker):
$ vim my_inventory/inventory
k8s-m1.me ip=192.168.20.10
k8s-m2.me ip=192.168.20.11
k8s-m3.me ip=192.168.20.12
k8s-s1.me ip=192.168.20.13
k8s-s2.me ip=192.168.20.14
k8s-s3.me ip=192.168.20.15
[kube-master]
k8s-m1.me
k8s-m2.me
k8s-m3.me
[etcd]
k8s-m1.me
k8s-m2.me
k8s-m3.me
[kube-node]
k8s-s1.me
k8s-s2.me
k8s-s3.me
[k8s-cluster:children]
kube-node
kube-master
Of course, the k8s-s3.me node must be prepared in the same way as the previous nodes. Now we can run the cluster scaling:
$ ansible-playbook -i my_inventory/inventory scale.yml -b -v
According to the Kubespray documentation, you could also re-run the full cluster.yml procedure for this, but scale.yml takes much less time. As a result, we can now see the new node via kubectl:
$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s-m1    Ready     master    6h        v1.8.4+coreos.0
k8s-m2    Ready     master    6h        v1.8.4+coreos.0
k8s-m3    Ready     master    6h        v1.8.4+coreos.0
k8s-s1    Ready     node      6h        v1.8.4+coreos.0
k8s-s2    Ready     node      6h        v1.8.4+coreos.0
k8s-s3    Ready     node      19m       v1.8.4+coreos.0
That's all. You can also read this article in Ukrainian at http://blog.ipeacocks.info/2017/12/kubernetes-part-iv-setup-ha-cluster.html
PS. Please report any errors directly in a private message, and they will be fixed quickly.
References
kubespray.io
github.com/kubernetes-incubator/kubespray
github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md
github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md
github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md
dickingwithdocker.com/2017/08/deploying-kubernetes-vms-kubespray
medium.com/@olegsmetanin/how-to-setupup-baremetal-kubernetes-cluster-with-kubespray-and-deploy-ingress-controller-with-170cdb5ac50d