Kubernetes tutorial, part 1: applications, microservices and containers
- Translation
- Tutorial
At our request, Habr has created a Kubernetes hub, and we are pleased to publish the first post in it. Subscribe!
Kubernetes is easy. So why do banks pay me good money to work in this area, when anyone can master this technology in just a few hours?

If you doubt that Kubernetes can be learned that quickly, I suggest you try it yourself. Namely, having worked through this material, you will be able to run a microservice-based application in a Kubernetes cluster. I can guarantee this, because it is exactly the method described here that I use to teach our clients to work with Kubernetes. What makes this guide different from others? Quite a lot, actually. Most such materials begin with an explanation of simple things: Kubernetes concepts and the features of the kubectl command. Their authors assume that the reader is already familiar with application development, microservices and Docker containers. We will go the other way. First, we will talk about how to run a microservice-based application on a local computer. Then we will look at building container images for each microservice. And after that we will get acquainted with Kubernetes and walk through deploying a microservice-based application in a cluster managed by Kubernetes.
This gradual approach to Kubernetes gives the depth of understanding that an ordinary person needs in order to see how simple everything in Kubernetes really is. Kubernetes is, of course, a simple technology, provided that those who want to master it know where and how it is used.
Now, without further ado, let's get to work and talk about the application with which we will work.
Experimental application
Our application performs only one function. It takes a single sentence as input and then, using text analysis tools, performs sentiment analysis on it, producing an estimate of the author's emotional attitude towards some object.
Here is the main window of this application.

A web application for sentiment analysis of text.
From a technical point of view, the application consists of three microservices, each of which solves a specific set of tasks:
- SA-Frontend is an Nginx web server that serves static React files.
- SA-WebApp is a web application written in Java that processes requests from the frontend.
- SA-Logic is a Python application that performs sentiment analysis of the text.
It is important to note that microservices do not exist in isolation. They implement the idea of "separation of concerns", but at the same time they need to interact with each other.

Data Flows in the Application
In the diagram above, you can see the numbered steps of the system, illustrating the data flows in the application. Let's go through them:
- The browser requests the file index.html from the server (which, in turn, loads the React application bundle).
- The user interacts with the application; this triggers a call to the Spring-based web application.
- The web application forwards the sentiment analysis request to the Python application.
- The Python application analyzes the text and returns the result as a response to the request.
- The Spring application sends the response to the React application (which, in turn, shows the result of the text analysis to the user).
The code for all these applications can be found here. I recommend that you clone this repository right away, since many interesting experiments with it lie ahead of us.
Running a microservice based application on a local computer
In order for the application to work, we need to run all three microservices. Let's start with the prettiest of them: the front-end application.
▍Configuring React for Local Development
In order to run the React application, you need to install the Node.js platform and NPM on your computer. After you have installed them, go to the sa-frontend project folder in the terminal and execute the following command:
npm install
This command downloads the dependencies of the React application, which are listed in the package.json file, into the node_modules folder. After the dependencies have been loaded, execute the following command in the same folder:
npm start
That's all. The React application is now running, and you can access it by pointing the browser at localhost:3000. You can change something in its code and immediately see the effect of the changes in the browser. This is possible thanks to hot module replacement, which turns front-end development into a simple and enjoyable experience.
▍Preparing the React application for production
In order to actually use the React application, we need to turn it into a set of static files and serve them to clients using a web server.
To build the React application, go to the sa-frontend folder in the terminal again and execute the following command:
npm run build
This will create a build directory in the project folder. It contains all the static files needed for the React application to work.
▍Serving static files with Nginx
First you need to install and run the Nginx web server. Here you can download it and find installation and startup instructions. Then copy the contents of the sa-frontend/build folder to the folder [your_nginx_installation_dir]/html. With this approach, the index.html file generated while building the React application will be available at [your_nginx_installation_dir]/html/index.html. This is the file that Nginx serves by default when the server is accessed. The server is configured to listen on port 80, but you can configure it any way you want by editing the file [your_nginx_installation_dir]/conf/nginx.conf.
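For reference, the relevant part of the configuration looks roughly like the fragment below. This is a simplified sketch; the nginx.conf that ships with your Nginx distribution may differ in details.
events {}

http {
    server {
        listen       80;          # the port Nginx listens on; change it here if needed
        server_name  localhost;

        location / {
            root   html;          # static files are served from [your_nginx_installation_dir]/html
            index  index.html;    # index.html is returned for requests to /
        }
    }
}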
Now open your browser and go to localhost:80. You will see the React application page.
The React application served by Nginx
If you now enter something into the Type your sentence field and press the Send button, nothing will happen. But if you look at the console, you will see error messages there. In order to understand exactly where these errors come from, let's analyze the application code.
▍Analysis of the front-end application code
Looking at the code of the App.js file, we can see that pressing the Send button calls the analyzeSentence() method. The code of this method is shown below. Note that for each line marked with a comment of the form // #number, there is an explanation given below the code. We will parse other code fragments in the same way.
analyzeSentence() {
    fetch('http://localhost:8080/sentiment', {  // #1
        method: 'POST',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({
            sentence: this.textField.getValue()  // #2
        })
    })
    .then(response => response.json())
    .then(data => this.setState(data));  // #3
}
1. The URL for the POST request. It is implied that at this address there is an application waiting for such requests.
2. The request body sent to the application. Here is an example of the request body:
{
sentence: "I like yogobella!"
}
3. Upon receiving a response to the request, the state of the component is updated. This causes the component to be re-rendered. If we received data (that is, a JSON object containing the entered text and the calculated sentiment score), we render the Polarity component, since the corresponding conditions are met. Here is how we describe the component:
const polarityComponent = this.state.polarity !== undefined ?
    <Polarity sentence={this.state.sentence}
              polarity={this.state.polarity}/> :
    null;
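For reference, a successful response from the back end is a JSON object of roughly the following shape (the polarity value here is purely illustrative):
{
    "sentence": "I like yogobella!",
    "polarity": 0.625
}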
The code looks quite workable. So what is wrong here? If you guessed that, at the address to which the application tries to send a POST request, there is not yet anything able to accept and process this request, you would be absolutely right. Namely, in order to process requests arriving at http://localhost:8080/sentiment, we need to run the Spring-based web application.
We need a Spring application that can accept a POST request.
▍Set up a Spring-based Web Application
To run the Spring application, you need JDK 8 and Maven, with properly configured environment variables. Once you have installed all of this, we can continue working on our project.
▍ Packing an application in a jar file
Using the terminal, navigate to the sa-webapp folder and enter the following command:
mvn install
After executing this command, a target directory will be created in the sa-webapp folder. It contains the Java application packaged as a jar file, sentiment-analysis-web-0.0.1-SNAPSHOT.jar.
▍Starting the Java application
Navigate to the target folder and launch the application with the following command:
java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar
An error will occur during the execution of this command. To start fixing it, let's analyze the exception information in the stack trace:
Error creating bean with name 'sentimentController': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'sa.logic.api.url' in value "${sa.logic.api.url}"
For us, the most important thing here is the mention that the value of sa.logic.api.url could not be resolved. Let's analyze the code in which the error occurs.
▍Java application code analysis
Here is the code snippet where the error occurs.
@CrossOrigin(origins = "*")
@RestController
public class SentimentController {

    @Value("${sa.logic.api.url}")    // #1
    private String saLogicApiUrl;

    @PostMapping("/sentiment")
    public SentimentDto sentimentAnalysis(@RequestBody SentenceDto sentenceDto) {
        RestTemplate restTemplate = new RestTemplate();
        return restTemplate.postForEntity(
                saLogicApiUrl + "/analyse/sentiment",    // #2
                sentenceDto, SentimentDto.class)
                .getBody();
    }
}
- The SentimentController class has a saLogicApiUrl field. Its value is set by the sa.logic.api.url property.
- The string saLogicApiUrl is concatenated with the value /analyse/sentiment. Together they form the address for making requests to the microservice that performs the text analysis.
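For context, SentenceDto and SentimentDto are simple data holders. The sketch below is only an assumption for illustration; the actual classes in the repository may be defined differently.
// Hypothetical sketch of the two DTOs used by SentimentController
// (in the real project each class lives in its own file).
public class SentenceDto {
    private String sentence;    // the text sent by the front end

    public String getSentence() { return sentence; }
    public void setSentence(String sentence) { this.sentence = sentence; }
}

public class SentimentDto {
    private String sentence;    // the analyzed text
    private double polarity;    // the sentiment score computed by sa-logic

    public String getSentence() { return sentence; }
    public void setSentence(String sentence) { this.sentence = sentence; }
    public double getPolarity() { return polarity; }
    public void setPolarity(double polarity) { this.polarity = polarity; }
}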
▍Property value setting
In Spring, the standard source of property values is the application.properties file, which can be found at sa-webapp/src/main/resources.
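For illustration, setting the property in that file would take a single line like the one below; the value is just a placeholder, and the actual contents of the file in the repository may differ:
sa.logic.api.url=http://localhost:5000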
But using it is not the only way to set property values. It can also be done with a command like the following:
java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar --sa.logic.api.url=WHAT.IS.THE.SA.LOGIC.API.URL
The value of this property should point to the address of our Python application. By setting it, we tell the Spring web application where to send text analysis requests.
To keep things simple, let's decide that the Python application will be available at localhost:5000, and try not to forget about it. As a result, the command to start the Spring application will look like this:
java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar --sa.logic.api.url=http://localhost:5000

Our system lacks a Python application.
Now we just have to launch the Python application and the system will work as expected.
▍Python application setup
In order to run a Python application, you need to have Python 3 and Pip installed, and you need to set the appropriate environment variables correctly.
▍Install dependencies
Navigate to the sa-logic/sa project folder and run the following commands:
python -m pip install -r requirements.txt
python -m textblob.download_corpora
▍ Start application
After installing the dependencies, we are ready to run the application:
python sentiment_analysis.py
After executing this command, we will be told the following:
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
This means that the application is running and waiting for requests at localhost:5000.
▍ Code Investigation
Consider the Python application code in order to understand how it responds to requests:
from textblob import TextBlob
from flask import Flask, request, jsonify

app = Flask(__name__)                                      #1

@app.route("/analyse/sentiment", methods=['POST'])         #2
def analyse_sentiment():
    sentence = request.get_json()['sentence']              #3
    polarity = TextBlob(sentence).sentences[0].polarity    #4
    return jsonify(                                        #5
        sentence=sentence,
        polarity=polarity
    )

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)                     #6
- Initializing the Flask object.
- Setting the address for POST requests to it.
- Retrieving the sentence property from the request body.
- Initializing an anonymous TextBlob object and getting the polarity value for the first sentence received in the request body (in our case, this is the only sentence submitted for analysis).
- Returning the response, whose body contains the text of the sentence and the polarity calculated for it.
- Launching the Flask application, which will be available at 0.0.0.0:5000 (you can also access it as localhost:5000).
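To check that the service responds as expected, you can send it a request directly, for example with curl (assuming the application is running locally on port 5000); it should return a JSON object containing the sentence and its polarity:
curl -X POST http://localhost:5000/analyse/sentiment \
     -H 'Content-Type: application/json' \
     -d '{"sentence": "I like yogobella!"}'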
Now all the microservices that make up the application are running, and they are configured to interact with each other. Here is the diagram of the application at this stage of work.

All the microservices that make up the application are up and running.
Now, before moving on, open the React application in the browser and try to analyze some sentence with it. If everything was done correctly, after clicking the Send button you will see the analysis results under the text field.
In the next section, we will talk about how to run our microservices in Docker containers. This is necessary in order to prepare the application for running in a Kubernetes cluster.
Docker Containers
Kubernetes is a system for automating the deployment, scaling, and management of containerized applications. It is also called a "container orchestrator". Since Kubernetes works with containers, we first need to obtain these containers before using this system. But let's start by talking about what containers are. Perhaps the best answer to this question can be found in the Docker documentation:
A container image is a lightweight, stand-alone, executable package containing an application that includes everything you need to run it: application code, runtime, system tools and libraries, settings. Containerized programs can be used in Linux and Windows environments, while they will always work the same regardless of infrastructure.
This means that containers can be run on any computers, including production servers, and in any environments the applications enclosed in them will work in the same way.
To explore the features of containers and compare them with other ways of running applications, let's look at an example of serving the React application using a virtual machine and using a container.
▍Serving the static files of a React application using a virtual machine
When trying to serve static files by means of virtual machines, we run into the following disadvantages:
- Inefficient use of resources, since each virtual machine is a full-fledged operating system.
- Platform dependency. What works on a local computer may well fail to run on a production server.
- Scaling a solution based on virtual machines is slow and resource-intensive.

Nginx web server, serving static files, running on a virtual machine.
If, however, containers are used to solve the same problem, then, in comparison with virtual machines, the following strengths can be noted:
- Efficient use of resources: the containers share the host operating system, managed with Docker.
- Platform independence. A container that a developer runs on his computer will work anywhere.
- Lightweight deployment through the use of image layers.

Nginx web server, serving static files, running in a container
We compared virtual machines and containers on only a few points, but even that is enough to feel the strengths of containers. Here you can find more details about Docker containers.
▍ Building a Container Image for a React Application
The main building block of a Docker container is the Dockerfile. It begins with a record of the base image of the container, followed by a sequence of instructions describing how to create a container that will meet the needs of an application. Before we work on the Dockerfile, let's recall what we did to prepare the React application files to be served by the Nginx server:
- Building the React application bundle (npm run build).
- Starting the Nginx server.
- Copying the contents of the build directory from the sa-frontend project folder to the server's nginx/html folder.
Below you can see the parallels between the creation of the container and the above actions performed on the local computer.
▍Preparing the Dockerfile for the SA-Frontend application
The Dockerfile for the SA-Frontend application consists of just two instructions. The fact is that the Nginx development team has prepared a base image for Nginx, which we will use to create our own image. Here are the two steps we need to describe:
- Take the Nginx image as the basis of our image.
- Copy the contents of the sa-frontend/build folder into the image's nginx/html folder.
If we translate this description into a Dockerfile, it will look like this:
FROM nginx
COPY build /usr/share/nginx/html
As you can see, everything is very simple, and the contents of the file even turn out to be quite readable and understandable. This file tells the system to take the nginx image with everything that is already in it, and copy the contents of the build directory into its nginx/html directory.
Here you may wonder how I know exactly where the files from the build folder should be copied to, that is, where the path /usr/share/nginx/html came from. There is nothing complicated here either: this information can be found in the image description.
▍Building the image and pushing it to the repository
Before we can work with the finished image, we need to push it to an image repository. To do this, we will use Docker Hub, a free cloud platform for hosting images. At this stage you need to do the following:
- Install Docker.
- Register on the Docker Hub website.
- Log in to your account by running the following command in the terminal:
docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
Now use the terminal to go to the sa-frontend directory and execute a command of the following form there:
docker build -f Dockerfile -t $DOCKER_USER_ID/sentiment-analysis-frontend .
Here and in similar commands below, $DOCKER_USER_ID must be replaced with your Docker Hub username. For example, this part of the command might look like this: rinormaloku/sentiment-analysis-frontend.
In this case, the command can be shortened by removing -f Dockerfile from it, since a file with this name already exists in the folder in which we execute the command.
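In that case, the shortened build command would simply be:
docker build -t $DOCKER_USER_ID/sentiment-analysis-frontend .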
In order to push the finished image to the repository, we need the following command:
docker push $DOCKER_USER_ID/sentiment-analysis-frontend
After it completes, check the list of your repositories on Docker Hub to make sure that the image has been successfully pushed to the cloud storage.
▍ Container launch
Now anyone can download and run the image known as $DOCKER_USER_ID/sentiment-analysis-frontend. To do this, run the following sequence of commands:
docker pull $DOCKER_USER_ID/sentiment-analysis-frontend
docker run -d -p 80:80 $DOCKER_USER_ID/sentiment-analysis-frontend
Now the container is running, and we can continue working, creating the other images we need. But before continuing, let's deal with the construction 80:80, which appears in the command used to run the image and may seem confusing.
- The first number 80 is the port number of the host (that is, the local computer).
- The second number 80 is the port of the container to which the request should be redirected.
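For example, if you wanted the same container to be reachable on local port 8080 instead, the mapping would look like this (a purely illustrative variation):
docker run -d -p 8080:80 $DOCKER_USER_ID/sentiment-analysis-frontend
In that case the application would be available at localhost:8080, while Nginx inside the container would still listen on port 80.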
Consider the following illustration.

Port Forwarding
The system redirects requests from port <hostPort> to port <containerPort>. That is, a call to port 80 of the computer is redirected to port 80 of the container.
Since port 80 is open on the local computer, you can access the application from this computer at localhost:80. If your system does not support Docker natively, you can run the application on a Docker virtual machine, in which case the address will look like <docker-machine ip>:80. To find out the IP address of the Docker virtual machine, you can use the docker-machine ip command.
At this stage, after successfully launching the front-end application container, you should be able to open its page in the browser.
▍The .dockerignore file
While building the image of the SA-Frontend application, we could notice that this process is extremely slow. This is because the image build context must be sent to the Docker daemon. The directory that represents the build context is specified by the last argument of the docker build command. In our case, it is the dot at the end of the command. This causes the following structure to be included in the build context:
sa-frontend:
| .dockerignore
| Dockerfile
| package.json
| README.md
+---build
+---node_modules
+---public
\---src
But of all the folders present here, we only need the build folder. Uploading anything else is a waste of time. You can speed up the build by telling Docker which directories it can ignore, and that is exactly what the .dockerignore file is for. If you are familiar with the .gitignore file, the structure of this one will surely look familiar. It lists the directories that the image build system can ignore. In our case, its contents look like this:
node_modules
src
public
The .dockerignore file must be in the same folder as the Dockerfile. Now the image will be built in seconds.
Let's now create the image for the Java application.
▍ Building a container image for a Java application
Believe it or not, you have already learned everything you need in order to build container images. That is why this section will be very short.
Open the Dockerfile located in the sa-webapp project folder. If you read it, you will find only two new constructions there, starting with the keywords ENV and EXPOSE:
ENV SA_LOGIC_API_URL http://localhost:5000
…
EXPOSE 8080
The ENV keyword allows you to declare environment variables inside the Docker container. In particular, in our case it lets us specify the URL of the API of the application that performs the text analysis.
The EXPOSE keyword tells Docker which port the application inside the container uses. We are going to use this port while working with the application. You may notice that the Dockerfile of the SA-Frontend application has no such command: it is needed only for documentation purposes; in other words, it is intended for whoever will read the Dockerfile.
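To give the full picture, a Dockerfile for such a Java application might look roughly like the sketch below. The base image, file names and exact instructions here are assumptions for illustration only; check the actual file in the sa-webapp folder.
FROM openjdk:8-jdk-alpine
# Default URL of the sa-logic API; can be overridden at run time with -e SA_LOGIC_API_URL=...
ENV SA_LOGIC_API_URL http://localhost:5000
# Copy the jar produced by "mvn install" into the image
COPY target/sentiment-analysis-web-0.0.1-SNAPSHOT.jar /
# Document the port the application listens on
EXPOSE 8080
# Shell form is used so that ${SA_LOGIC_API_URL} is expanded when the container starts
CMD java -jar /sentiment-analysis-web-0.0.1-SNAPSHOT.jar --sa.logic.api.url=${SA_LOGIC_API_URL}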
Building the image and pushing it to the repository looks exactly the same as in the previous example. If you are not yet very confident in your abilities, the corresponding commands can be found in the README.md file in the sa-webapp folder.
▍Building a container image for the Python application
If you look at the contents of the Dockerfile in the sa-logic folder, you will not find anything new for yourself there. The commands for building the image and pushing it to the repository should also be familiar to you by now; as with our other applications, they can be found in the README.md file in the sa-logic folder.
▍Testing the containerized applications
Can you trust something you haven't tested? Neither can I. Let's test our containers.
- Launch the sa-logic application container and configure it to listen on port 5050:
docker run -d -p 5050:5000 $DOCKER_USER_ID/sentiment-analysis-logic
- Launch the sa-webapp application container and configure it to listen on port 8080. In addition, we need to tell the Java application where the Python application is waiting for requests, by reassigning the SA_LOGIC_API_URL environment variable:
docker run -d -p 8080:8080 -e SA_LOGIC_API_URL='http://<container_ip or docker machine ip>:5000' $DOCKER_USER_ID/sentiment-analysis-web-app
To learn how to find out the IP address of a Docker container or of the Docker virtual machine, refer to the README file.
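For example, on a local Linux installation of Docker, one common way to get the IP address of the sa-logic container is docker inspect; this is only an illustration, and the README may suggest a different approach (<container_id> is the value printed by docker run or shown by docker ps):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>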
- Launch the sa-frontend application container:
docker run -d -p 80:80 $DOCKER_USER_ID/sentiment-analysis-frontend
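As a quick sanity check before opening the browser, you can also send a request straight to the Java container (assuming it is listening on local port 8080, as configured above):
curl -X POST http://localhost:8080/sentiment \
     -H 'Content-Type: application/json' \
     -d '{"sentence": "I like yogobella!"}'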
Now everything is ready: go to the browser at localhost:80 and test the application.
Note that if you changed the port for sa-webapp, or if you are working with a Docker virtual machine, you will need to edit the App.js file in the sa-frontend folder, changing the IP address or port number in the analyzeSentence() method so that the outdated data is replaced with the current values. After that, you need to rebuild the image and use it. Here is the diagram of our application now.

The microservices run in containers
Results: why do we need a Kubernetes cluster?
We have just studied Dockerfiles, talked about how to build images and push them to a Docker repository, and learned how to speed up image builds using the .dockerignore file. As a result, our microservices are now running in Docker containers. Here you may have a perfectly justified question: why do we need Kubernetes at all? The second part of this material will be devoted to answering it. In the meantime, consider the following question.
Suppose our web-based text analysis application has become popular worldwide. Millions of requests arrive every minute. This means that the sa-webapp and sa-logic microservices will be under enormous load. How do we scale the containers in which the microservices run?