Kubernetes Handbook Part 2: Creating and Working With a Cluster

https://medium.freecodecamp.org/learn-kubernetes-in-under-3-hours-a-detailed-guide-to-orchestrating-containers-114ff420e882
  • Translation
Last time we looked at working with microservices and, in particular, at packaging them into Docker containers that run the microservice code along with its supporting programs. Today, using the container images we already have, we will work with Kubernetes.



Meet Kubernetes


I promise, without any exaggeration, that by the time you finish reading this article you will be asking yourself: "Why don't we call Kubernetes Supernetes?"


Supernetes

If you read the previous part of this material, you know that we have already covered a lot of ground preparing applications for containerization and working with Docker containers. It may seem that the hardest part is still ahead, but in fact what we are going to talk about here is much simpler than what we have already figured out. The only reason studying Kubernetes can seem difficult is the amount of background knowledge you need in order to understand it and use it effectively, and we have already covered all of that background.

▍What is Kubernetes?


In the first part of this material, after running the microservices in containers, you were asked to think about how to scale a containerized application.
I propose we reflect on it together in a question-and-answer format:

Question: How do containerized applications scale?
Answer: By launching additional containers.

Question: How do we distribute the load between them? What if some server is already used to the maximum and a container needs to be deployed on another server? How do we find the most efficient way to use the hardware?
Answer: Hmm... let me search the Internet...

Question: How do we update the program without affecting the availability of the system? And if the update contains an error, how do we return to the working version of the application?

In fact, it is Kubernetes that provides decent answers to these and many other questions. I will try to condense the definition of Kubernetes into one sentence: "Kubernetes is a container management system that abstracts away the underlying infrastructure (the environment in which the containers run)."

The concept of "container management" is probably not entirely clear to you yet, even though we have already mentioned it; we will look at it in practice below. The concept of "abstraction of the underlying infrastructure", however, comes up here for the first time, so let's look at it now.

▍Abstraction of the underlying infrastructure


Kubernetes lets applications abstract away from the infrastructure by giving us a simple API to which we can send requests. Kubernetes tries to satisfy those requests using all the capabilities it has. In plain language, such a request could be described as: "Kubernetes, deploy 4 containers of image X." Having received the command, Kubernetes will find nodes that are not too heavily loaded and deploy the new containers on them.


API Server Request

What does this mean for a developer? It means that developers do not need to worry about the number of nodes, about exactly where containers are started, or about how they interact. They do not have to deal with hardware optimization or worry about nodes failing (and, according to Murphy's law, something like that is bound to happen), since new nodes can be added to the Kubernetes cluster when necessary. If something goes wrong with existing nodes, Kubernetes will deploy containers on the nodes that are still healthy.

Much of what is shown in the previous figure is already familiar to you. But there is something new there:

  • API Server. Making calls to this server is the only way to interact with the cluster we have, whether we are talking about starting or stopping containers, checking the state of the system, working with logs, or performing other actions.
  • Kubelet. This is the agent that monitors the containers inside a node and interacts with the master node.

Note that in the previous couple of sentences we used the term "container", but here it would be more accurate to use the term "pod". The documentation, explaining the term, refers to a pod of whales or a pea pod. For now you can think of pods simply as containers; we will discuss them in more detail below.

We will stop here, since we could go on about all this for a long time and there are plenty of good materials on the theory of Kubernetes, for example the official documentation (though it is not easy reading) or books on the subject.

▍Standardization of work with cloud service providers


Another strength of Kubernetes is that it standardizes work with cloud service providers (CSPs). This is a bold statement; consider the following example. A specialist who knows Azure or the Google Cloud Platform well has to work on a project targeting a completely new cloud environment they are unfamiliar with. In such a situation a lot can go wrong: project deadlines may slip, the company may need to rent more cloud resources than planned, and so on.

When using Kubernetes, this problem simply cannot arise, because regardless of which cloud service provider we are talking about, working with Kubernetes always looks the same. The developer tells the API server, in a declarative style, what they need, and Kubernetes works with the system's resources, allowing the developer to abstract away the implementation details of that system.

Dwell on this idea for a moment, because it is a very powerful capability of Kubernetes. For companies it means that their solutions are not tied to a particular CSP. If a company finds a better offer on the cloud services market, it can freely take that offer and move to the new provider, and the experience its specialists have accumulated is not lost.

Now let's talk about the practical use of Kubernetes

Practice working with Kubernetes: pods


We set up our microservices to run in containers; the setup process was rather tedious, but we did end up with a working system. As already mentioned, though, that solution does not scale well and is not resistant to failures. We will solve these problems with Kubernetes, bringing the system to the form shown in the following diagram, where the containers are managed by Kubernetes.


Microservices are working in a cluster managed by Kubernetes.

Here we will use Minikube to deploy a cluster locally and try out the capabilities of Kubernetes, although everything we do here can also be done using cloud platforms such as Azure or the Google Cloud Platform.

▍Minikube installation and launch


Follow the directions in the documentation to install Minikube. Installing Minikube also installs kubectl, a client that allows you to make requests to the Kubernetes API server.

To run Minikube, execute the command minikube start, and after it completes, execute kubectl get nodes. As a result, you should see something like the following:

kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     <none>    11m       v1.9.0

Minikube gives us a cluster consisting of just one node, but that suits us perfectly. Those who work with Kubernetes do not need to worry about how many nodes are in the cluster, since Kubernetes abstracts away such details.

Now let's talk about pods.

▍Pods


I really like containers, and by now you probably like them too. So why does Kubernetes offer us pods, entities that are the minimal deployable compute units in this system? What does a pod do? The point is that a pod can consist of one or more containers that share the same execution environment.

But do you actually need to run, say, two containers in one pod? Generally, no: usually there is one container per pod, and that is what we are going to do. But in cases where, for example, two containers need shared access to the same data storage, or where they communicate using interprocess communication, or where they are tightly coupled for some other reason, they can be run in a single pod. Another thing pods allow is that they are not tied to Docker containers: if needed, other containerization technologies can be used, for example rkt.

The following diagram shows the numbered properties of pods.


Properties of the pods

Consider these properties.

  1. Each pod in the Kubernetes cluster has a unique IP address.
  2. A pod can contain many containers. They share the available port numbers, which means, for example, that they can communicate with each other through localhost (and, naturally, they cannot use the same ports). Communication with containers located in other pods is organized using the IP addresses of those pods.
  3. The containers in a pod share data storage volumes, the IP address, the port space, and the IPC namespace.

Note that containers have their own isolated file systems, but they can share data using the Kubernetes resource called Volume.
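
To make point 3 above concrete, here is a minimal sketch of a pod whose two containers share data through a volume. The pod name, images, commands and mount paths are illustrative assumptions and are not part of our application:

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo            # hypothetical pod, for illustration only
spec:
  volumes:
    - name: shared-data               # an emptyDir volume lives as long as the pod
      emptyDir: {}
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/greeting && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data            # both containers see the same directory
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 5 && cat /data/greeting && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data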

What we have said about pods is enough to continue mastering Kubernetes. You can read more about them in the documentation.

▍Pod description


Below is the manifest file for the sa-frontend application pod.

apiVersion: v1
kind: Pod                                            # 1
metadata:
  name: sa-frontend                                  # 2
spec:                                                # 3
  containers:
    - image: rinormaloku/sentiment-analysis-frontend # 4
      name: sa-frontend                              # 5
      ports:
        - containerPort: 80

Let us explain some of the parameters specified in it.

  1. Kind: sets the type of Kubernetes resource we want to create. In our case it is Pod.
  2. Name: resource name. We called it sa-frontend.
  3. Spec: an object that describes the desired state of the resource. The most important property here is an array of containers.
  4. Image: image of the container that we want to run in this pod.
  5. Name: unique name for the container located in the pod.
  6. ContainerPort: the port the container listens on. This parameter is mostly informational for whoever reads the file (omitting it does not restrict access to the port).
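
If you want to look up what any of these fields mean without leaving the terminal, kubectl can print the built-in schema documentation. For example (output trimmed; the exact wording depends on your kubectl version):

kubectl explain pod.spec.containers.ports.containerPort
FIELD: containerPort <integer>

DESCRIPTION:
     Number of port to expose on the pod's IP address. ...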

▍Creating the sa-frontend pod


The pod description file we just discussed can be found at resource-manifests/sa-frontend-pod.yaml. You either need to change to that folder in the terminal, or specify the full path to the file when calling the command. Here is the command and an example of the system's response to it:

kubectl create -f sa-frontend-pod.yaml
pod "sa-frontend" created

To find out whether the pod is running, execute the following command:

kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
sa-frontend                   1/1       Running   0          7s

If the pod's status is ContainerCreating when you run this command, you can run the same command with the --watch flag; then, as soon as the pod transitions to the Running state, the information will be displayed automatically.
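
In other words, it is the same command with an extra flag:

kubectl get pods --watch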

▍Access to the application from outside


To give access to the application from the outside, it would be correct to create a Kubernetes resource of kind Service, which we will discuss below; here, for brevity, we will use simple port forwarding:

kubectl port-forward sa-frontend 88:80
Forwarding from 127.0.0.1:88 -> 80

If you now point your browser at 127.0.0.1:88, you will see the React application page.

▍ Wrong approach to scaling


We have already said that one of Kubernetes' capabilities is scaling applications. To try out this capability, let's launch another pod. Create a description of another Pod resource by placing the following code in the file sa-frontend-pod2.yaml:

apiVersion: v1
kind: Pod                                            
metadata:
  name: sa-frontend2      # The only change
spec:                                                
  containers:
    - image: rinormaloku/sentiment-analysis-frontend 
      name: sa-frontend                              
      ports:
        - containerPort: 80

As you can see, compared with the description we considered above, the only change is the value of the name property.

Create the new pod:

kubectl create -f sa-frontend-pod2.yaml
pod "sa-frontend2" created

Make sure that it is running:

kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
sa-frontend                   1/1       Running   0          7s
sa-frontend2                  1/1       Running   0          7s

Now we have two pods! True, there is nothing much to celebrate here. Note that this approach to scaling an application has many drawbacks. How to do it right, we will discuss in the section dedicated to another Kubernetes resource called Deployment.

Now let's consider what we have ended up with after launching two identical pods: the Nginx web server is now running in two different pods. In this regard, two questions arise:

  1. How to give access to these servers from the outside, by URL?
  2. How to organize load balancing between them?


Wrong approach to scaling

Among the Kubernetes tools for this are resources of kind Service. Let's talk about them.

Practice working with Kubernetes: services


Kubernetes services act as access points to sets of pods that provide the same functionality. Services take on the difficult tasks of discovering the right pods and balancing the load between them.


The Kubernetes service routes requests to pod IP addresses.

Our Kubernetes cluster will contain services for pods implementing various functions: the frontend application, the Spring web application, and the Flask application written in Python. This raises the question of how a service knows which pods it needs to work with, that is, on the basis of what information the system should generate the list of pod endpoints.

This is done using another Kubernetes abstraction called a Label. Working with labels consists of two steps:

  1. Assign a label to the pods the service should work with.
  2. Apply a "selector" to the service, which determines exactly which labeled pods the service will work with.

Perhaps this is easier to show as an illustration than to describe.


Pods with labels and their manifest files.

Here we see two pods that have been assigned the same label, app: sa-frontend, and a service that is interested in pods with that label.
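
In manifest form, the labeling shown in the figure boils down to a metadata.labels section like the following sketch (only the relevant fragment of sa-frontend-pod.yaml is shown):

metadata:
  name: sa-frontend
  labels:
    app: sa-frontend    # the label that the service's selector will match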

▍Labels


Labels give developers a simple way to organize Kubernetes resources. They are key-value pairs that can be assigned to any resource. Modify the description files of both frontend pods, bringing them to the form shown in the previous figure. Then save the files and run the following commands:

kubectl apply -f sa-frontend-pod.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod "sa-frontend" configured
kubectl apply -f sa-frontend-pod2.yaml 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod "sa-frontend2" configured

When executing these commands, the system issues a warning (it is not happy that we are using apply instead of create, but we understand what we are doing) and then reports that the corresponding pods are configured. We can check whether the labels have been assigned by filtering the pods whose information we want to display:

kubectl get pod -l app=sa-frontend
NAME           READY     STATUS    RESTARTS   AGE
sa-frontend    1/1       Running   0          2h
sa-frontend2   1/1       Running   0          2h

Another way to verify that the labels were actually assigned is to add the --show-labels flag to the previous command; then the label information will be included in the listing of the pods.
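
For example (the output is a sketch; the exact column layout depends on your kubectl version):

kubectl get pods --show-labels
NAME           READY     STATUS    RESTARTS   AGE       LABELS
sa-frontend    1/1       Running   0          2h        app=sa-frontend
sa-frontend2   1/1       Running   0          2h        app=sa-frontend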

Now that the labels are assigned, we are ready to set up a service to work with them. Let's look at the description of a service of type LoadBalancer.


Load balancing using a LoadBalancer type service

▍Service description


Here is the YAML description of the service type LoadBalancer:

apiVersion: v1
kind: Service              # 1
metadata:
  name: sa-frontend-lb
spec:
  type: LoadBalancer       # 2
  ports:
  - port: 80               # 3
    protocol: TCP          # 4
    targetPort: 80         # 5
  selector:                # 6
    app: sa-frontend       # 7

We explain this text:

  1. Kind: we are creating a Service resource.
  2. Type: the type of the resource indicated in its specification. We chose LoadBalancer because we want this service to balance the load between the pods.
  3. Port: the port on which the service accepts requests.
  4. Protocol: the protocol used by the service.
  5. TargetPort: the port to which incoming requests are forwarded.
  6. Selector: an object containing information about which pods the service should work with.
  7. app: sa-frontend: this property indicates which pods the service will work with, namely the pods labeled app: sa-frontend.

In order to create a service, run the following command:

kubectl create -f service-sa-frontend-lb.yaml
service "sa-frontend-lb" created

You can check the status of the service as follows:

kubectl get svc
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
sa-frontend-lb   LoadBalancer   10.101.244.40   <pending>     80:30708/TCP   7m

Here you can see that the EXTERNAL-IP property is in the <pending> state, and there is no point in waiting for it to change. This happens because we are using Minikube. If we created such a service with a cloud service provider like Azure or the Google Cloud Platform, the service would get a public IP address making it reachable from the Internet.

Despite this, Minikube does not leave us empty-handed: it gives us a useful command for debugging the system locally:

minikube service sa-frontend-lb
Opening kubernetes service default/sa-frontend-lb in default browser...

This command launches a browser pointed at the service. After the service receives the request, it forwards it to one of the pods (it does not matter which one). This abstraction allows us to treat the group of pods as a single entity and work with them through the service as a single access point.
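
As a quick sanity check that the selector really matched both pods, you can also look at the endpoints behind the service. The IP addresses below are only an example; yours will differ:

kubectl get endpoints sa-frontend-lb
NAME             ENDPOINTS                     AGE
sa-frontend-lb   172.17.0.4:80,172.17.0.5:80   7m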

In this section we talked about how to assign labels to resources and how to use them as selectors when setting up services. We also described and created a service of type LoadBalancer. With this we solved the task of scaling the application (scaling comes down to adding new pods with the corresponding labels to the cluster) and of balancing the load between the pods, using the service as the entry point.

Practice working with Kubernetes: deployment


Deployment is a Kubernetes abstraction that lets us manage something that is always present in an application's life cycle: change. Applications that do not change are, so to speak, "dead" applications. In a "living" application, requirements periodically change, the code grows, and that code gets packaged and deployed, and at each step of this process errors can occur.

The Deployment resource type allows you to automate the transition from one version of the application to another. This happens without interrupting the system, and if an error occurs during the process, we can quickly roll back to the previous, working version of the application.

▍Using deployments


Right now the cluster has two pods and a service that exposes them to the outside and balances the load between them.


The current state of the cluster

We said that launching two separate pods with identical functionality is not the best idea. With such a scheme we have to work with each pod individually: creating, updating, and deleting each specific pod and watching its state. With this approach there can be no talk of quick updates or quick rollbacks of a failed update. We are not satisfied with this state of affairs, so we are going to turn to the Deployment resource, which is aimed at solving exactly these problems.

Before we continue, let's formulate our goals; they will serve as guidelines when parsing the deployment manifest file. So, this is what we need:

  1. We want to be able to create two pods based on the rinormaloku/sentiment-analysis-frontend container image.
  2. We need an application deployment mechanism that lets the application keep working without interruption while it is being updated.
  3. We want the pods to be assigned the label app: sa-frontend, so that the sa-frontend-lb service can find them.

Now we will express these requirements as a description of the Deployment resource.

▍Description of deployment


Here is the YAML description of the Deployment type resource, which was created taking into account the above described system requirements:

apiVersion: extensions/v1beta1
kind: Deployment                                          # 1
metadata:
  name: sa-frontend
spec:
  replicas: 2                                             # 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate                                   # 3
    rollingUpdate: 
      maxUnavailable: 1                                   # 4
      maxSurge: 1                                         # 5
  template:                                               # 6
    metadata:
      labels:
        app: sa-frontend                                  # 7
    spec:
      containers:
        - image: rinormaloku/sentiment-analysis-frontend
          imagePullPolicy: Always                         # 8
          name: sa-frontend
          ports:
            - containerPort: 80

Let's go through this description:

  1. Kind: states that we are describing a Deployment resource.
  2. Replicas: a property of the deployment specification object that specifies how many instances (replicas) of the pod should be run.
  3. Type: describes the strategy used by this deployment when upgrading from the current version to the new one. The RollingUpdate strategy provides zero downtime during the upgrade.
  4. MaxUnavailable: a property of the RollingUpdate object that specifies the maximum number of pods (compared to the desired number) that may be unavailable while performing a rolling update. In our deployment, which has 2 replicas, this value means that after one old pod is terminated, another pod keeps running, so the application stays available during the update.
  5. MaxSurge: a property of the RollingUpdate object that describes the maximum number of pods that can be added to the deployment on top of the desired number. In our case its value of 1 means that during an upgrade to a new version we may add one extra pod, so up to three pods may be running at the same time.
  6. Template: this object defines the pod template that the described Deployment will use to create new pods. This setting will probably look familiar to you.
  7. app: sa-frontend: the label for pods created from this template.
  8. ImagePullPolicy: determines how images are pulled. In our case it is set to Always, meaning that on every deployment the corresponding image will be pulled from the registry.

Having examined all this, let's move on to practice. Run the deployment:

kubectl apply -f sa-frontend-deployment.yaml
deployment "sa-frontend" created

Check the system status:

kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
sa-frontend                    1/1       Running   0          2d
sa-frontend-5d5987746c-ml6m4   1/1       Running   0          1m
sa-frontend-5d5987746c-mzsgg   1/1       Running   0          1m
sa-frontend2                   1/1       Running   0          2d

As you can see, now we have 4 pods: two of them were created by the Deployment resource, and two more are the ones we created ourselves. Now we can delete the pods we created manually, using commands of the following form:

kubectl delete pod <pod-name>

By the way, here is a task for independent work: delete one of the pods created by the Deployment resource and watch the system. Think about the reasons for what happens before reading on.

When you delete such a pod, the Deployment resource learns that the current state of the system (1 pod) differs from the desired one (2 pods), so another pod is launched.
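
Here is a sketch of what you might see. The hash suffixes in the pod names are generated by Kubernetes, so yours will differ:

kubectl delete pod sa-frontend-5d5987746c-ml6m4
pod "sa-frontend-5d5987746c-ml6m4" deleted
kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
sa-frontend-5d5987746c-mzsgg   1/1       Running   0          1h
sa-frontend-5d5987746c-qwe2x   1/1       Running   0          5s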

What is the benefit of Deployment resources besides keeping the system in the desired state? Let's consider their strengths.

▍Performing a deployment with zero downtime


Suppose a product manager comes to us and says that the client for whom we created this product wants a green button in the client application. The developers implement this requirement and pass us the only thing we need from them: the container image, called rinormaloku/sentiment-analysis-frontend:green. Now it is our turn. We, the DevOps team, need to deploy the updated system and ensure zero downtime. Let's see whether the effort spent mastering and configuring the Deployment resource was justified.

Edit the file sa-frontend-deployment.yaml, replacing the container image name with the new one, rinormaloku/sentiment-analysis-frontend:green, then save the file under the name sa-frontend-deployment-green.yaml and execute the following command:

kubectl apply -f sa-frontend-deployment-green.yaml --record
deployment "sa-frontend" configured

Check the rollout status with the following command:

kubectl rollout status deployment sa-frontend
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 of 2 updated replicas are available...
deployment "sa-frontend" successfully rolled out

Based on the output of this command, we can conclude that the deployment of the update was successful: the old replicas were replaced with new ones one by one, which means our application was available throughout the update. Before we continue, let's make sure the application really was updated.

▍Checking the deployment


In order to take a look at how the application looks in the browser, use the command you already know:

minikube service sa-frontend-lb

In response, the browser will be launched, and the application page will open in it.


Green button

As you can see, the button really turned green, which means that the system was actually updated.

▍How the RollingUpdate mechanism works


After we executed the command kubectl apply -f sa-frontend-deployment-green.yaml --record, Kubernetes compared the desired state of the system with its current state. In our case, the new desired state requires two pods in the cluster based on the image rinormaloku/sentiment-analysis-frontend:green. Since this differs from the state the system is currently in, the update process is started.


Replacing pods during the system update

The RollingUpdate mechanism operates in accordance with the rules we set, namely the parameters maxUnavailable: 1 and maxSurge: 1. This means that, with two pods running, the Deployment resource may terminate one of them and may start one additional pod. This process, shown in the previous figure, repeats until all the old pods have been replaced with new ones.
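
If you are curious to watch this happen, you can keep kubectl get pods --watch running in a second terminal while re-applying an update. The output below is only a rough sketch of the kind of transitions you would see; the pod name suffixes are made up:

kubectl get pods --watch
NAME                           READY     STATUS              RESTARTS   AGE
sa-frontend-5d5987746c-mzsgg   1/1       Running             0          1h
sa-frontend-5d5987746c-qwe2x   1/1       Running             0          1h
sa-frontend-6f87c59747-abc12   0/1       ContainerCreating   0          2s
sa-frontend-5d5987746c-mzsgg   1/1       Terminating         0          1h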

Now let's talk about another strength of Deployment resources. To make it more interesting, let's add some drama to the story: here is a tale of a production error.

▍Rolling back to a previous system state


A product manager, burning with excitement, bursts into the office. "A bug! In production! Roll everything back!" he shouts. But his alarm does not infect you with unhealthy enthusiasm. Without losing composure, you open the terminal and enter the following command:

kubectl rollout history deployment sa-frontend
deployments "sa-frontend"
REVISION  CHANGE-CAUSE
1         <none>         
2         kubectl.exe apply --filename=sa-frontend-deployment-green.yaml --record=true

You look at the previously performed deployments and ask the manager: "So the latest version fails, but the previous one worked fine?"

"Yes. Have you not heard me? ”, The manager continues to cry.

Paying no attention to his next attempt to rattle you, you simply type the following into the terminal:

kubectl rollout undo deployment sa-frontend --to-revision=1
deployment "sa-frontend" rolled back

After that you open the application page. The green button disappeared, and with it the errors.

The manager's jaw drops in surprise.

You just saved the company from disaster.

A curtain!

In fact, that was all rather undramatic. Before Kubernetes existed, such stories had far more unexpected plot twists, more action, and they did not end nearly as quickly. Ah, the good old days!

Most of the commands above and their output speak for themselves. Perhaps only one detail is unclear: why is the CHANGE-CAUSE of the first revision <none>, while for the second it is kubectl.exe apply --filename=sa-frontend-deployment-green.yaml --record=true?

If you guessed that this information appears because we used the --record flag when deploying the new version of the application, you are absolutely right.

In the next section we will use everything we have learned so far to build a fully working application.

Practice working with Kubernetes: combining everything we have studied


We have already covered the Kubernetes resources we need in order to build a full-fledged clustered application. The following image highlights everything we still have to do.


The current state of the application

Let's start working from the bottom of this diagram.

▍Deploying sa-logic


Navigate to the resource-manifests project folder in the terminal and run the following command:

kubectl apply -f sa-logic-deployment.yaml --record
deployment "sa-logic" created

The sa-logic deployment creates three pods running the Python application containers. They are labeled app: sa-logic, which lets us target them from the sa-logic service using the corresponding selector. Open the file sa-logic-deployment.yaml and read its contents.
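
For readers following along without the repository at hand, here is a condensed sketch of what such a manifest looks like. The image name and container port are assumptions following the same pattern as sa-frontend, so check them against the actual file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sa-logic
spec:
  replicas: 3                                            # three pods, as described above
  template:
    metadata:
      labels:
        app: sa-logic                                    # the label the sa-logic service will select
    spec:
      containers:
        - image: rinormaloku/sentiment-analysis-logic    # assumed image name
          name: sa-logic
          ports:
            - containerPort: 5000                        # assumed port of the Flask app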

In general, you will not find anything new in it, so let's move on to the next resource: the sa-logic service.

▍The sa-logic service


Let's think about why we need this Service resource. Our Java application, which will run in the pods labeled sa-webapp, depends on the text analysis provided by the Python application. But now, unlike the situation where everything ran on one local machine, we do not have a single Python application listening on a certain port: we have several pods, and their number can be increased if needed.

That is why we need a service which, as we said, acts as an access point for a set of entities implementing the same functionality. This means we can use the sa-logic service as an abstraction over all the sa-logic pods.
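
A service like this is typically described roughly as follows. Treat it as a sketch rather than the exact contents of service-sa-logic.yaml: in particular, the targetPort is an assumption and must match whatever port the Flask containers actually listen on:

apiVersion: v1
kind: Service
metadata:
  name: sa-logic
spec:
  ports:
  - port: 80             # the port other pods call, e.g. http://sa-logic
    protocol: TCP
    targetPort: 5000     # assumed port of the Flask containers
  selector:
    app: sa-logic        # match the pods created by the sa-logic deployment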

Run the following command:

kubectl apply -f service-sa-logic.yaml
service "sa-logic" created

Now let's look at how the state of the application has changed after the execution of this command.


Changed Application State

Now the sa-logic service allows the sa-webapp pods to work with the set of pods that implement the text analysis functionality.

Let's move on to sa-webapp.

▍The sa-webapp deployment


We have done deployments more than once already, but this time the description file of the corresponding Deployment resource contains something new. If you look into sa-web-app-deployment.yaml, you will notice the following:

- image: rinormaloku/sentiment-analysis-web-app
  imagePullPolicy: Always
  name: sa-web-app
  env:
    - name: SA_LOGIC_API_URL
      value: "http://sa-logic"
  ports:
    - containerPort: 8080

What role does the env property play? We can assume that it declares, inside the pod, an environment variable SA_LOGIC_API_URL with the value http://sa-logic. If that is the case, it would be good to understand why the variable's value contains such an unusual address. What does it point to?

In order to answer this question we need to get acquainted with the concept of kube-dns.

▍The Kubernetes cluster DNS server


Kubernetes has a special pod called kube-dns. By default, all pods use it as their DNS server. One important feature of kube-dns is that it creates a DNS record for every cluster service.

This means that when we create the sa-logic service, it gets an IP address, and kube-dns records its name and IP address. This allows every pod to resolve an address like http://sa-logic to the service's IP address.
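
If you want to see this resolution in action, you can run a lookup from a throwaway pod. This is just a sketch: it assumes the busybox image's nslookup works against your cluster DNS, and the address shown is only an example:

kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup sa-logic
Name:      sa-logic
Address 1: 10.104.44.10 sa-logic.default.svc.cluster.local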

Now let's continue working with the Deployment resource sa-webapp.

▍Deploying sa-webapp


Run the following command:

kubectl apply -f sa-web-app-deployment.yaml --record
deployment "sa-web-app" created

Now we only have to expose the sa-webapp pods using a load-balancing service. This will allow the React application to make requests to the service that serves as the access point to the sa-webapp pods.

▍The sa-webapp service


If you open the file service-sa-web-app-lb.yaml, you will see that everything in it has already been covered. So, without unnecessary explanation, execute the following command:

kubectl apply -f service-sa-web-app-lb.yaml
service "sa-web-app-lb" created

Now the cluster is almost fully ready, but one problem remains. When we deployed the sa-frontend pods, the containerized application was built to reach the Java application sa-webapp at http://localhost:8080/sentiment. Now we need to make it address the load balancer, the sa-web-app-lb service, which provides the interaction between the React application and the pods running the Java application instances.

Correcting this shortcoming gives us a chance to quickly run through everything we have studied here. By the way, if you want to get the most out of this material, try to fix this shortcoming yourself before reading further.

As a matter of fact, here’s what a step by step solution of this problem looks like:

  1. Find out the address of the sa-web-app-lb load balancer by running the following command:

    minikube service list
    |-------------|----------------------|-----------------------------|
    |  NAMESPACE  |         NAME         |             URL             |
    |-------------|----------------------|-----------------------------|
    | default     | kubernetes           | No node port                |
    | default     | sa-frontend-lb       | http://192.168.99.100:30708 |
    | default     | sa-logic             | No node port                |
    | default     | sa-web-app-lb        | http://192.168.99.100:31691 |
    | kube-system | kube-dns             | No node port                |
    | kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
    |-------------|----------------------|-----------------------------|
  2. Use the address you found in the file sa-frontend/src/App.js. Here is the fragment of the file in which we make the change:

    analyzeSentence() {
            fetch('http://192.168.99.100:31691/sentiment', { /* omitted for brevity */})
                .then(response => response.json())
                .then(data => this.setState(data));
        }
  3. Build the React application by going to the sa-frontend folder in the terminal and executing the command npm run build.
  4. Build the container image:

    docker build -f Dockerfile -t $DOCKER_USER_ID/sentiment-analysis-frontend:minikube .
  5. Push the image to the Docker Hub registry:

    docker push $DOCKER_USER_ID/sentiment-analysis-frontend:minikube
  6. Edit the file sa-frontend-deployment.yaml, pointing it at the new image.
  7. Run the following command:

    kubectl apply -f sa-frontend-deployment.yaml

Now you can refresh the application page opened in the browser, or, if you have already closed the browser window, you can execute the command minikube service sa-frontend-lb. Test the system by trying to analyze a sentence.


Ready Cluster Application

Results


Using Kubernetes benefits development teams: it improves work on all kinds of projects, simplifies application deployment, helps applications scale, and makes them resilient to failures. Thanks to Kubernetes, you can use the resources of a variety of cloud providers without being locked into any particular cloud service provider's decisions. That is why I propose renaming Kubernetes to Supernetes.

Here is what you learned by mastering this material:

  • Building, packaging and running applications based on React, Java and Python.
  • Working with Docker containers, namely describing and building them using a Dockerfile.
  • Working with container registries, in particular Docker Hub.

In addition, you have mastered the most important concepts of Kubernetes:

  • Pods
  • Services
  • Deployment
  • Important concepts like updating applications without interrupting the system
  • Application scaling

In the course of this work, we turned an application consisting of microservices into an application running in a Kubernetes cluster.

Dear readers! Do you use Kubernetes?

