Installing Kubernetes 1.8 on Bare Metal

Quite a lot has been written on the Internet about installing Kubernetes, but most articles are based on kubeadm or minikube. It is certainly great to be able to deploy a cluster in a couple of clicks, but I wanted a better understanding of what Kubernetes actually consists of. This guide is my attempt to fill that gap.

Goal: a Kubernetes cluster with authentication via SSL certificates and tokens.
Given: two virtual machines running CentOS 7 (if desired, the guide easily adapts to other distributions)

c00test01 - master / minion - 10.10.12.101
c00test02 - minion - 10.10.12.102
Note: the installation was done with firewalld and SELinux disabled.
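If they are still enabled on your machines, one possible way to turn them off (run on both nodes; these commands are a suggestion, not part of the original setup) is:

$ systemctl stop firewalld && systemctl disable firewalld
$ setenforce 0
# make the SELinux change permanent across reboots
$ sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config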

1. Install Etcd


$ yum install etcd-3.1.9

Edit the config:

/etc/etcd/etcd.conf
ETCD_NAME=c00test01
ETCD_DATA_DIR=/var/lib/etcd
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://10.10.12.101:2380
ETCD_INITIAL_CLUSTER=c00test01=https://10.10.12.101:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=etcd-k8-cluster
ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
ETCD_ADVERTISE_CLIENT_URLS=https://10.10.12.101:2379
ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
#[proxy]
ETCD_PROXY="off"
#[security]
ETCD_CA_FILE=/etc/etcd/certs/ca.crt
ETCD_TRUSTED_CA_FILE=/etc/etcd/certs/ca.crt
ETCD_CERT_FILE=/etc/etcd/certs/server.crt
ETCD_KEY_FILE=/etc/etcd/certs/server.key
ETCD_CLIENT_CERT_AUTH=true
ETCD_PEER_CA_FILE=/etc/etcd/certs/ca.crt
ETCD_PEER_TRUSTED_CA_FILE=/etc/etcd/certs/ca.crt
ETCD_PEER_CERT_FILE=/etc/etcd/certs/peer.crt
ETCD_PEER_KEY_FILE=/etc/etcd/certs/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true

Override the ExecStart directive of the systemd unit with a drop-in:
$ mkdir /usr/lib/systemd/system/etcd.service.d

/usr/lib/systemd/system/etcd.service.d/etcd.conf
[Service]
ExecStart=
ExecStart=/usr/bin/etcd

$ chmod -R 644 /usr/lib/systemd/system/etcd.service.d
$ chown -R root:root /usr/lib/systemd/system/etcd.service.d
$ systemctl daemon-reload
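You can check that the drop-in has been picked up, for example:

$ systemctl cat etcd
# should print the original unit followed by the drop-in with the overridden ExecStart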

Download easy-rsa:

$ mkdir /tmp/easyrsa
$ cd /tmp/easyrsa
$ curl -sSL -O https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
$ tar xzf easy-rsa.tar.gz
$ cd easy-rsa-master/easyrsa3

We generate self-signed certificates:

$ ./easyrsa --batch init-pki
# root CA
$ ./easyrsa --batch --req-cn="10.10.12.101" build-ca nopass
# etcd server certificate
$ ./easyrsa --batch --subject-alt-name="IP:10.10.12.101,DNS:c00test01" build-server-full server nopass
# certificate for etcd peers
$ ./easyrsa --batch --subject-alt-name="IP:10.10.12.101,DNS:c00test01" build-server-full peer nopass
# client certificate for kube-apiserver and flannel
$ ./easyrsa --batch build-client-full client nopass
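If you want to make sure the alt-names actually ended up in the server certificate, you can inspect it with openssl, for example:

$ openssl x509 -in pki/issued/server.crt -noout -text | grep -A1 'Subject Alternative Name'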

Copy the certificates to the etcd certs directory:

$ mkdir /etc/etcd/certs
$ cp -p pki/ca.crt /etc/etcd/certs/ca.crt
$ cp -p pki/issued/* /etc/etcd/certs/
$ cp -p pki/private/* /etc/etcd/certs/
$ chown -R etcd:etcd /etc/etcd/certs/
$ chmod 550 /etc/etcd/certs/
$ chmod 440 /etc/etcd/certs/*
# clean up
$ rm -rf /tmp/easyrsa/pki/

We start etcd:

$ systemctl enable etcd && systemctl start etcd
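To verify that etcd is up and serving TLS, you can query it with etcdctl using the client certificate generated above, for example:

$ etcdctl --ca-file=/etc/etcd/certs/ca.crt \
          --cert-file=/etc/etcd/certs/client.crt \
          --key-file=/etc/etcd/certs/client.key \
          --endpoints=https://10.10.12.101:2379 cluster-health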

We load the flannel network config into etcd:

/tmp/flannel-conf.json
{ 
  "Network": "172.96.0.0/12",
  "SubnetLen": 24,
  "Backend":
    {
      "Type": "vxlan"
    }
}

# 'cluster.lan' can be replaced with your own cluster name
$ /usr/bin/etcdctl --cert-file=/etc/etcd/certs/client.crt --key-file=/etc/etcd/certs/client.key --ca-file=/etc/etcd/certs/ca.crt --no-sync --peers=https://10.10.12.101:2379 set /cluster.lan/network/config < /tmp/flannel-conf.json
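You can read the key back to make sure the config was stored, for example:

$ /usr/bin/etcdctl --cert-file=/etc/etcd/certs/client.crt --key-file=/etc/etcd/certs/client.key --ca-file=/etc/etcd/certs/ca.crt --no-sync --peers=https://10.10.12.101:2379 get /cluster.lan/network/config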

2. Install flannel


$ yum install flannel-0.7.1

Edit the config:

/etc/sysconfig/flanneld
# 'cluster.lan' can be replaced with your own cluster name
FLANNEL_ETCD="https://c00test01:2379"
FLANNEL_ETCD_ENDPOINTS="https://c00test01:2379"
FLANNEL_ETCD_KEY="/cluster.lan/network"
FLANNEL_ETCD_PREFIX="/cluster.lan/network"
FLANNEL_ETCD_CAFILE="/etc/flanneld/certs/ca.crt"
FLANNEL_ETCD_CERTFILE="/etc/flanneld/certs/client.crt"
FLANNEL_ETCD_KEYFILE="/etc/flanneld/certs/client.key"
FLANNEL_OPTIONS="-etcd-cafile /etc/flanneld/certs/ca.crt -etcd-certfile /etc/flanneld/certs/client.crt -etcd-keyfile /etc/flanneld/certs/client.key"

Copy the certificates:

$ mkdir -p /etc/flanneld/certs
$ cp /etc/etcd/certs/ca.crt /etc/flanneld/certs/ca.crt
$ cp /etc/etcd/certs/client.crt /etc/flanneld/certs/client.crt
$ cp /etc/etcd/certs/client.key /etc/flanneld/certs/client.key
$ chmod 550 /etc/flanneld/certs/
$ chmod 440 /etc/flanneld/certs/*

Starting flannel:

$ systemctl enable flanneld && systemctl start flanneld
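Once flanneld is running it should lease a /24 from the 172.96.0.0/12 network and bring up a vxlan interface. A quick sanity check (the subnet file path may differ between packages):

$ cat /run/flannel/subnet.env
$ ip -4 addr show flannel.1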

3. Install and run Docker


$ yum install docker-1.12.6
$ systemctl enable docker && systemctl start docker
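If the flannel package has wired Docker to the flannel subnet (on CentOS this is usually done via a systemd drop-in for docker.service), docker0 should get an address from the flannel range; you can check, for example, with:

$ ip -4 addr show docker0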

Now we proceed to install and configure Kubernetes itself.

4. Install apiserver


Install the ready-made RPMs from the Fedora Koji repository:

$ mkdir /tmp/k8s
$ cd /tmp/k8s
$ rpms=(kubernetes-master kubernetes-client kubernetes-node)
$ for i in ${rpms[*]}; do wget https://kojipkgs.fedoraproject.org/packages/kubernetes/1.8.1/1.fc28/x86_64/${i}-1.8.1-1.fc28.x86_64.rpm; done
$ yum install kubernetes-master-1.8.1-1.fc28.x86_64.rpm kubernetes-client-1.8.1-1.fc28.x86_64.rpm kubernetes-node-1.8.1-1.fc28.x86_64.rpm

We generate certificates:

$ cd /tmp/easyrsa
$ ./easyrsa --batch init-pki
# root CA
$ ./easyrsa --batch --req-cn="10.10.12.101" build-ca nopass
# kube-apiserver server certificate; alt-names must list every IP and DNS name of the master server
$ ./easyrsa --batch --subject-alt-name="IP:172.30.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.lan,IP:10.10.12.101,DNS:c00test01" build-server-full server nopass
# kubelet server certificate; since we generate a single certificate for all minions, list the hostnames of every server that will be a minion in alt-names
# this certificate is needed so that 'kubectl logs' and 'kubectl exec' do not fail with 'certificate signed by unknown authority'
$ ./easyrsa --batch --subject-alt-name="DNS:c00test01,DNS:c00test02" build-server-full apiserver-kubelet-client nopass
# client certificates for kubelet and kubectl
$ ./easyrsa --batch build-client-full kubelet nopass
$ ./easyrsa --batch build-client-full kubectl nopass

Copy them to the kubernetes directory:

$ mkdir -p /etc/kubernetes/certs
$ cp -p pki/ca.crt /etc/kubernetes/certs/ca.crt
$ cp -p pki/issued/* /etc/kubernetes/certs/
$ cp -p pki/private/* /etc/kubernetes/certs/
$ chown -R kube:kube /etc/kubernetes/certs/
$ chmod 550 /etc/kubernetes/certs/
$ chmod 440 /etc/kubernetes/certs/*
# clean up
$ rm -rf /tmp/easyrsa/pki/

Copy the etcd certificates to the kubernetes directory:

$ mkdir /etc/kubernetes/certs/etcd
$ cd /etc/etcd/certs 
$ cp ca.crt /etc/kubernetes/certs/etcd/ca.crt
$ cp client.crt /etc/kubernetes/certs/etcd/client.crt
$ cp client.key /etc/kubernetes/certs/etcd/client.key
$ chown -R kube:kube /etc/kubernetes/certs/etcd/

Delete /etc/kubernetes/config, since all connection settings will be specified in the *.kubeconfig files:

$ rm /etc/kubernetes/config

Edit the config:

/etc/kubernetes/apiserver
KUBE_API_ADDRESS="--bind-address=0.0.0.0"
KUBE_API_PORT="--secure-port=6443"
# KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=https://c00test01:2379 --etcd-cafile=/etc/kubernetes/certs/etcd/ca.crt --etcd-certfile=/etc/kubernetes/certs/etcd/client.crt --etcd-keyfile=/etc/kubernetes/certs/etcd/client.key"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.30.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota"
KUBE_API_ARGS="--tls-cert-file=/etc/kubernetes/certs/server.crt \
--tls-private-key-file=/etc/kubernetes/certs/server.key \
--tls-ca-file=/etc/kubernetes/certs/ca.crt \
--client-ca-file=/etc/kubernetes/certs/ca.crt \
--kubelet-certificate-authority=/etc/kubernetes/certs/ca.crt \
--kubelet-client-certificate=/etc/kubernetes/certs/apiserver-kubelet-client.crt \
--kubelet-client-key=/etc/kubernetes/certs/apiserver-kubelet-client.key \
--token-auth-file=/etc/kubernetes/tokens/known_tokens.csv \
--service-account-key-file=/etc/kubernetes/certs/server.crt \
--bind-address=0.0.0.0 \
--insecure-port=0 \
--apiserver-count=1 \
--basic-auth-file=/etc/kubernetes/certs/basic.cnf \
--anonymous-auth=false \
--allow-privileged=true"

Create a directory for tokens and an empty file in it:

$ mkdir /etc/kubernetes/tokens
$ touch /etc/kubernetes/tokens/known_tokens.csv

We allow kube-apiserver to bind privileged ports:

$ setcap cap_net_bind_service=ep /usr/bin/kube-apiserver
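You can confirm the capability was applied; the output should list cap_net_bind_service:

$ getcap /usr/bin/kube-apiserver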

Create a file for basic auth:

$ touch /etc/kubernetes/certs/basic.cnf

The file is filled in the format: password,username,id
id is an arbitrary unique number.

For example:

/etc/kubernetes/certs/basic.cnf
password,admin,001
deploy,deploy,002

You can do without basic auth, but sometimes you have to use it, for example when deploying to the cluster with Ansible.

We generate tokens. Note that the tokens will be the same for every kubelet and kube-proxy. To generate per-node tokens, simply append -<hostname> to the account name (for example: system:kubelet-c00test02).

generate_tokens.sh
#!/bin/bash
accounts=(system:controller_manager system:scheduler system:kubectl system:dns system:kubelet system:proxy)
for account in ${accounts[*]}; do
  token=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
  echo "${token},${account},${account}" >> "/etc/kubernetes/tokens/known_tokens.csv"
  echo "${token}" > "/etc/kubernetes/tokens/${account}.token"
done
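Run the script and inspect the result; each line of known_tokens.csv has the form token,user,uid (here the user and uid fields are the same string):

$ bash generate_tokens.sh
$ cat /etc/kubernetes/tokens/known_tokens.csv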

We start api-server:

$ systemctl enable kube-apiserver && systemctl start kube-apiserver
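A quick way to check that the apiserver answers over TLS and that both auth methods work, using the CA, tokens and basic-auth entries created above:

# token auth
$ curl --cacert /etc/kubernetes/certs/ca.crt \
       -H "Authorization: Bearer $(cat /etc/kubernetes/tokens/system:kubectl.token)" \
       https://c00test01:6443/healthz
# basic auth
$ curl --cacert /etc/kubernetes/certs/ca.crt -u deploy:deploy https://c00test01:6443/version

The first request should return "ok", the second the version JSON.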

Configure controller-manager.

Edit configs:

/etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--service-account-private-key-file=/etc/kubernetes/certs/server.key \
--root-ca-file=/etc/kubernetes/certs/ca.crt "

/etc/kubernetes/controller-manager.kubeconfig
# 'cluster.lan' can be replaced with your own cluster name
apiVersion: v1
kind: Config
current-context: controller-manager-to-cluster.lan
preferences: {}
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/certs/ca.crt
    server: https://c00test01:6443
  name: cluster.lan
contexts:
- context:
    cluster: cluster.lan
    user: controller-manager
  name: controller-manager-to-cluster.lan
users:
- name: controller-manager
  user:
# put the contents of /etc/kubernetes/tokens/system:controller_manager.token here
    token: cW6ha9WHzTK9Y4psT9pMKcUqfr673ydF

We start controller-manager:

$ systemctl enable kube-controller-manager && systemctl start kube-controller-manager

Configure kube-scheduler.

Edit configs:

/etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/scheduler.kubeconfig"


/etc/kubernetes/scheduler.kubeconfig
# 'cluster.lan' can be replaced with your own cluster name
apiVersion: v1
kind: Config
current-context: scheduler-to-cluster.lan
preferences: {}
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/certs/ca.crt
    server: https://c00test01:6443
  name: cluster.lan
contexts:
- context:
    cluster: cluster.lan
    user: scheduler
  name: scheduler-to-cluster.lan
users:
- name: scheduler
  user:
# put the contents of /etc/kubernetes/tokens/system:scheduler.token here
    token: A2cU20Q9MkzdK8ON6UnVaP1nusWNKrWT

We start kube-scheduler:

$ systemctl enable kube-scheduler && systemctl start kube-scheduler

Configure kubectl.

Edit the config:

/etc/kubernetes/kubectl.kubeconfig
# 'cluster.lan' can be replaced with your own cluster name
apiVersion: v1
kind: Config
current-context: kubectl-to-cluster.lan
preferences: {}
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/certs/ca.crt
    server: https://c00test01:6443
  name: cluster.lan
contexts:
- context:
    cluster: cluster.lan
    user: kubectl
  name: kubectl-to-cluster.lan
users:
- name: kubectl
  user:
    client-certificate: /etc/kubernetes/certs/kubectl.crt
    client-key: /etc/kubernetes/certs/kubectl.key

For convenience, so that you do not have to pass the config location to every kubectl call with the --kubeconfig parameter, you can create a .kube directory in your user's home directory and copy the kubectl config there, renaming it to config.

For example:
/home/user1/.kube/config
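A minimal way to do this for the current user, followed by a check that the control plane components are healthy (the user must be able to read the kubectl certificate and key referenced in the config):

$ mkdir -p ~/.kube && cp /etc/kubernetes/kubectl.kubeconfig ~/.kube/config
$ kubectl get componentstatuses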

Installing kube-dns.

Deploy the kube-dns replication controller (the beginning of the manifest is not reproduced here; the tail of the healthz container spec and the tolerations are shown):
$ cat <<EOF | kubectl create -f -
        # ... beginning of the kube-dns ReplicationController manifest ...
        - --cmd=nslookup kubernetes.default.svc.cluster.lan 127.0.0.1 >/dev/null
        - --url=/healthz-dnsmasq
        - --cmd=nslookup kubernetes.default.svc.cluster.lan 127.0.0.1:10053 >/dev/null
        - --url=/healthz-kubedns
        - --port=8080
        - --quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - key: CriticalAddonsOnly
        operator: Exists
EOF

Deploy the kube-dns service in the same way, with cat <<EOF | kubectl create -f - (the service manifest is not reproduced here; its clusterIP must match the --cluster-dns address used by the kubelets, 172.30.0.10).

The master installation is finished. However, if we look at the pod status, we will see that kube-dns is stuck in Pending. That is because we have not yet set up the minions onto which Kubernetes will schedule pods.

For the minions, you need to repeat steps 2 and 3 (install flannel and Docker).

5. Install and configure kubelet


Everything is already installed on node c00test01, but on node c00test02 we need to install the packages:

$ mkdir /tmp/k8s
$ cd /tmp/k8s
$ rpms=(kubernetes-client kubernetes-node);for i in ${rpms[*]}; do wget https://kojipkgs.fedoraproject.org/packages/kubernetes/1.8.1/1.fc28/x86_64/${i}-1.8.1-1.fc28.x86_64.rpm; done
$ yum install kubernetes-client-1.8.1-1.fc28.x86_64.rpm kubernetes-node-1.8.1-1.fc28.x86_64.rpm

You also need to copy the following certificates and keys from the master node into /etc/kubernetes/certs (for example with scp, as shown below):
kubelet.crt
kubelet.key
apiserver-kubelet-client.crt
apiserver-kubelet-client.key
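
One way to get them there from the master (assuming root ssh access to c00test02):

$ ssh root@c00test02 mkdir -p /etc/kubernetes/certs
$ scp /etc/kubernetes/certs/{kubelet.crt,kubelet.key,apiserver-kubelet-client.crt,apiserver-kubelet-client.key} root@c00test02:/etc/kubernetes/certs/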

Edit the configs:

/etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
# KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=c00test02"
KUBELET_ARGS="--register-node=true \
--tls-cert-file=/etc/kubernetes/certs/apiserver-kubelet-client.crt \
--tls-private-key-file=/etc/kubernetes/certs/apiserver-kubelet-client.key \
--require-kubeconfig=true \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--pod-manifest-path=/etc/kubernetes/manifests \
--cgroup-driver=systemd \
--allow-privileged=true \
--cluster-domain=cluster.lan \
--authorization-mode=Webhook \
--fail-swap-on=false \
--cluster-dns=172.30.0.10"

Take the base64-encoded contents of ca.crt from the master:

$ base64 /etc/kubernetes/certs/ca.crt
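
Note that certificate-authority-data must be a single line; with GNU coreutils you can disable line wrapping, for example:

$ base64 -w0 /etc/kubernetes/certs/ca.crt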

/etc/kubernetes/kubelet.kubeconfig
apiVersion: v1
kind: Config
current-context: kubelet-to-cluster.lan # change 'cluster.lan' to your cluster name
preferences: {}
clusters:
- cluster:
    certificate-authority-data: <base64-encoded_contents_of_ca.crt>
    server: https://c00test01:6443
  name: cluster.lan # change 'cluster.lan' to your cluster name
contexts:
- context:
    cluster: cluster.lan # change 'cluster.lan' to your cluster name
    user: kubelet
  name: kubelet-to-cluster.lan # change 'cluster.lan' to your cluster name
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/certs/kubelet.crt
    client-key: /etc/kubernetes/certs/kubelet.key

/etc/kubernetes/proxy

KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/proxy.kubeconfig --cluster-cidr=172.30.0.0/16"

/etc/kubernetes/proxy.kubeconfig
apiVersion: v1
kind: Config
current-context: proxy-to-cluster.lan # change 'cluster.lan' to your cluster name
preferences: {}
contexts:
- context:
    cluster: cluster.lan # change 'cluster.lan' to your cluster name
    user: proxy
  name: proxy-to-cluster.lan # change 'cluster.lan' to your cluster name
clusters:
- cluster:
    certificate-authority-data: <base64-encoded_contents_of_ca.crt>
    server: https://c00test01:6443
  name: cluster.lan # change 'cluster.lan' to your cluster name
users:
- name: proxy
  user:
    client-certificate: /etc/kubernetes/certs/kubelet.crt
    client-key: /etc/kubernetes/certs/kubelet.key

Start kubelet and kube-proxy:

$ systemctl enable kubelet && systemctl enable kube-proxy
$ systemctl start kubelet && systemctl start kube-proxy

The installation is complete; we can check the state of the nodes and the kube-dns pod.

$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
c00test01     Ready     master    5m        v1.8.1
c00test02     Ready     <none>    4m        v1.8.1

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   kube-dns-v20-2jqsj                     3/3       Running   0          3m
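As a final smoke test, you can try resolving the kubernetes service through kube-dns from a throwaway pod (assuming the kube-dns service got the cluster IP 172.30.0.10 configured above):

$ kubectl run -i -t dns-test --image=busybox --restart=Never -- nslookup kubernetes.default.svc.cluster.lan
$ kubectl delete pod dns-test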

The following materials were used when writing this guide:

Official Kubernetes documentation
Kubernetes Ansible
