Deploying a Kubernetes HA cluster on bare metal with kubeadm and Keepalived (a simple guide)

Original author: kvaps

This article is a loose retelling of the official guide on Creating Highly Available Clusters with kubeadm for stacked control plane nodes. I do not like the complicated language and the examples used there, so I wrote my own guide.


If you have any questions or something is unclear, consult the official documentation or ask Google. All steps are described here in the simplest and most concise way possible.


Input data


We have 3 nodes:


  • node1 (10.9.8.11)
  • node2 (10.9.8.12)
  • node3 (10.9.8.13)

We will create a single fail-safe (floating) IP address for them:


  • 10.9.8.10

Then we will deploy an etcd cluster and Kubernetes on top of them.
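
The scripts below refer to the nodes by name (for ssh and scp), so the names must resolve on every node. If you do not have DNS records for them, an /etc/hosts entry on each node is enough (a minimal sketch using the addresses above):


10.9.8.11 node1
10.9.8.12 node2
10.9.8.13 node3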


Setting up the balancer


First of all, we need to install Keepalived on all three nodes:


apt-get -y install keepalived

Now we will write the config /etc/keepalived/keepalived.conf:


vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type AH
        auth_pass iech6peeBu6Thoo8xaih
    }
    virtual_ipaddress {
        10.9.8.10
    }
}
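
The same config can be used verbatim on all three nodes; Keepalived will then elect one of them to hold the virtual IP. If you prefer a deterministic failover order, you can give each node its own priority value (an optional variation, not required by this guide): the node with the highest priority wins the address.


priority 150   # node1: highest value, preferred holder of the VIP
priority 100   # node2
priority 50    # node3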

Start and enable Keepalived on all three nodes:


systemctl start keepalived
systemctl enable keepalived

Now we can verify that one of the nodes has received the address 10.9.8.10 on the eth0 interface.
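
For example (a quick check, assuming the interface really is named eth0):


ip addr show dev eth0 | grep 10.9.8.10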


Deploying the Kubernetes cluster


UPD: This article was written for v1.12 and, even though it was adapted for v1.13, the cluster deployment procedure has since become much simpler and more logical.
Have a look at this newer, simpler guide.


Make sure that the latest Kubernetes packages are installed on all nodes:


apt-get -y install kubeadm kubelet kubectl
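
If these packages are not available yet, you first need to add the Kubernetes apt repository. Below is a sketch of the repository setup that was current for this generation of Kubernetes (Debian/Ubuntu); newer releases use a different repository:


apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update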

Also stop the keepalived daemon on all nodes except the last one:


systemctl stop keepalived

First node


Now we will generate the kubeadm configs (each master node needs its own config):


CLUSTER_IP=10.9.8.10
NODES=(node1 node2 node3)
IPS=(10.9.8.11 10.9.8.12 10.9.8.13)
POD_SUBNET="192.168.0.0/16"

for i in "${!NODES[@]}"; do
  HOST=${IPS[$i]}
  NAME=${NODES[$i]}
  INITIAL_CLUSTER=$(
    for j in "${!NODES[@]}"; do
      echo "${NODES[$j]}=https://${IPS[$j]}:2380"
    done | xargs | tr ' ' ,
  )
cat > kubeadm-config-${NODES[$i]}.yaml <<EOT
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "${CLUSTER_IP}"
controlPlaneEndpoint: "${CLUSTER_IP}:6443"
etcd:
  local:
    extraArgs:
      initial-cluster: "${INITIAL_CLUSTER}"
      initial-cluster-state: new
      name: ${NODES[$i]}
      listen-peer-urls: "https://${IPS[$i]}:2380"
      listen-client-urls: "https://127.0.0.1:2379,https://${IPS[$i]}:2379"
      advertise-client-urls: "https://${IPS[$i]}:2379"
      initial-advertise-peer-urls: "https://${IPS[$i]}:2380"
    serverCertSANs:
      - "${NODES[$i]}"
      - "${IPS[$i]}"
    peerCertSANs:
      - "${NODES[$i]}"
      - "${IPS[$i]}"
networking:
    podSubnet: "${POD_SUBNET}"
EOT
done
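
After the loop finishes you should have three files, one per node. As a quick sanity check, you can verify that the etcd initial-cluster string lists all three peers (file names as generated above):


ls kubeadm-config-*.yaml
grep 'initial-cluster:' kubeadm-config-node1.yaml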

We initialize etcd on the first node and generate the certificates and the admin kubeconfig:


kubeadm="kubeadm --config=kubeadm-config-${HOSTNAME}.yaml"

$kubeadm init phase preflight
$kubeadm init phase certs all
$kubeadm init phase kubelet-start
$kubeadm init phase kubeconfig kubelet
$kubeadm init phase etcd local
$kubeadm init phase kubeconfig admin
systemctl start kubelet
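
At this point etcd should be running as a static pod on the node. A quick way to check (assuming Docker is the container runtime, as was typical for this Kubernetes generation):


docker ps | grep etcd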

Copy the generated certificates and kubeadm configs to the rest of the control plane nodes.


NODES="node2 node3"
CERTS=$(find /etc/kubernetes/pki/ -maxdepth 1 -name '*ca.*' -o -name '*sa.*')
ETCD_CERTS=$(find /etc/kubernetes/pki/etcd/ -maxdepth 1 -name '*ca.*')

for NODE in $NODES; do
  ssh $NODE mkdir -p /etc/kubernetes/pki/etcd
  scp $CERTS $NODE:/etc/kubernetes/pki/
  scp $ETCD_CERTS $NODE:/etc/kubernetes/pki/etcd/
  scp /etc/kubernetes/admin.conf $NODE:/etc/kubernetes
  scp kubeadm-config-$NODE.yaml $NODE:
done

Second node


We initialize etcd on the second node:


kubeadm="kubeadm --config=kubeadm-config-${HOSTNAME}.yaml"

$kubeadm init phase preflight
$kubeadm init phase certs all
$kubeadm init phase kubelet-start
$kubeadm init phase kubeconfig kubelet
$kubeadm init phase etcd local
systemctl start kubelet
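
With two of the three declared members up, the etcd cluster has quorum. You can inspect its state with etcdctl (a sketch, assuming etcdctl v3 is installed on the node and that kubeadm placed the etcd certificates in their default locations):


ETCDCTL_API=3 etcdctl \
  --endpoints https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key /etc/kubernetes/pki/etcd/healthcheck-client.key \
  member list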

Third node


We initialize the Kubernetes master together with etcd on the last node.


(make sure the balancer IP is up and currently points to this node)


kubeadm init --config kubeadm-config-${HOSTNAME}.yaml
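
Once kubeadm init finishes, you can start using kubectl from this node (the standard step suggested by kubeadm's own output):


mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes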

The first and second nodes


Now we can initialize the Kubernetes master on the first two nodes:


kubeadm="kubeadm --config=kubeadm-config-${HOSTNAME}.yaml"

$kubeadm init phase kubeconfig all
$kubeadm init phase control-plane all
$kubeadm init phase mark-control-plane
$kubeadm init phase upload-config kubeadm
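
After this, all three nodes should show up as control plane members. You can verify it from any of them using the admin.conf copied earlier:


kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n kube-system -o wide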

And also run the Keepalived daemon:


systemctl start keepalived
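
Finally, it is worth checking that the Kubernetes API answers on the fail-safe address. A quick sanity check: depending on the cluster's RBAC settings the endpoint may return "ok" or an authorization error, but either response proves that the VIP reaches the API server.


curl -k https://10.9.8.10:6443/healthz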
