Kubernetes NodePort vs LoadBalancer vs Ingress? When and what to use?

Original author: Sandeep Dinesh
  • Translation


Recently I was asked what the difference is between NodePorts, LoadBalancers, and Ingress. They are all different ways to get external traffic into the cluster. Let's look at how they differ and when to use each of them.


Note: the recommendations here are based on Google Kubernetes Engine (GKE). If you are running on a different cloud, on your own servers, on minikube, or something else, there will be differences. I will not go deep into technical details; for those, see the official documentation.


ClusterIP


ClusterIP is the default Kubernetes service. It exposes a service inside the cluster that other applications in the cluster can reach. There is no external access.


The YAML for the ClusterIP service is as follows:


apiVersion: v1
kind: Service
metadata:
 name: my-internal-service
spec:
 type: ClusterIP
 selector:
   app: my-app
 ports:
 - name: http
   port: 80
   targetPort: 80
   protocol: TCP

Why am I talking about the ClusterIP service if it cannot be accessed from the Internet? Because there is a way: the Kubernetes proxy!



Start the Kubernetes proxy:


$ kubectl proxy --port=8080

Now you can use the Kubernetes API to access the service with this scheme:


http://localhost:8080/api/v1/proxy/namespaces/<NAMESPACE>/services/<SERVICE-NAME>:<PORT-NAME>/

So to access the service defined above, we use this address:


http://localhost:8080/api/v1/proxy/namespaces/default/services/my-internal-service:http/
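As a sketch of the steps above, you could fetch the service from your laptop with curl. This assumes kubectl proxy is already running on port 8080 against a live cluster, and that my-internal-service exists in the default namespace:

```
# Requires `kubectl proxy --port=8080` running against a live cluster
# with my-internal-service deployed in the default namespace.
curl http://localhost:8080/api/v1/proxy/namespaces/default/services/my-internal-service:http/
```

The proxy handles authentication using your local kubeconfig credentials, which is exactly why this trick only works for users who can already run kubectl.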


When to use?


There are a few scenarios for using the Kubernetes proxy to access services:
  • Debugging your services, or connecting to them directly from your laptop for some other purpose.
  • Allowing internal traffic, displaying internal dashboards, etc.


Since this method requires kubectl to run as an authenticated user, you should not use it to expose a service to the Internet or for production services.


NodePort


A NodePort service is the most primitive way to send external traffic to a service. NodePort, as the name implies, opens the specified port on all nodes (virtual machines), and any traffic sent to this port is forwarded to the service.



The YAML for the NodePort service looks like this:


apiVersion: v1
kind: Service
metadata: 
 name: my-nodeport-service
spec:
 type: NodePort
 selector:
   app: my-app
 ports:
 - name: http
   port: 80
   targetPort: 80
   nodePort: 30036
   protocol: TCP

In essence, the NodePort service differs from the regular ClusterIP service in two ways. First, the type is NodePort. Second, there is an additional port, called nodePort, that specifies which port to open on the nodes. If you do not specify this port, a random one will be selected. In most cases you should let Kubernetes choose the port itself; as thockin says, port selection is not as simple as it seems.


When to use?


The method has many disadvantages:
  • You can only have one service per port.
  • Only ports 30000–32767 are available.
  • If the IP address of a node / virtual machine changes, you have to deal with that.


For these reasons, I do not recommend using this method in production to directly expose a service. But if your service does not have to be always available, or you are very cost sensitive, this method is for you. A good example of such an application is a demo app or a temporary stopgap.


LoadBalancer


A LoadBalancer service is the standard way to expose a service to the Internet. On GKE, it will spin up a Network Load Balancer that gives you a single IP address. That IP address will forward all traffic to your service.
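The post does not show a manifest for this type, but a minimal LoadBalancer service, following the same pattern as the ClusterIP and NodePort examples above (the name my-loadbalancer-service is made up for illustration), could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  # The cloud provider provisions an external load balancer
  # and assigns it a public IP address.
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: http
    port: 80        # port exposed on the load balancer
    targetPort: 80  # port the pods listen on
    protocol: TCP
```

Once the cloud provider has provisioned the balancer, its external IP appears in the EXTERNAL-IP column of kubectl get service.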



When to use?


If you want to expose a service directly, this is the default method. All traffic on the port you specify will be forwarded to the service. There is no filtering, no routing, etc. This means you can send almost any kind of traffic to it: HTTP, TCP, UDP, WebSockets, gRPC, and the like.


But there is one drawback. Each service you expose with a LoadBalancer needs its own IP address, which can cost a pretty penny.


Ingress


Unlike the examples above, Ingress is not itself a service. It sits in front of multiple services and acts as a "smart router" or entry point into the cluster.


There are many types of Ingress controllers, with different capabilities.


The default GKE controller spins up an HTTP(S) Load Balancer for you. It gives you path- and subdomain-based routing to backend services. For example, you can send everything on foo.yourdomain.com to the foo service, and everything under the yourdomain.com/bar/ path to the bar service.



The YAML for an Ingress object on GKE with an L7 HTTP Load Balancer looks like this:


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
 name: my-ingress
spec:
 backend:
   serviceName: other
   servicePort: 8080
 rules:
 - host: foo.mydomain.com
   http:
     paths:
     - backend:
         serviceName: foo
         servicePort: 8080
 - host: mydomain.com
   http:
     paths:
     - path: /bar/*
       backend:
         serviceName: bar
         servicePort: 8080

When to use?


On the one hand, Ingress is one of the best ways to expose your services. On the other, it is one of the most complex. There are many types of Ingress controllers: Google Cloud Load Balancer, Nginx, Contour, Istio, and more. There are also plugins for Ingress controllers, such as cert-manager, which automatically provisions SSL certificates for your services.


Ingress is most useful when you want to expose multiple services under the same IP address, and all of these services use the same L7 protocol (usually HTTP). With the native GCP integration, you pay for only one load balancer. And because Ingress is "smart," you get many features out of the box (for example, SSL, Auth, Routing, etc.).


Thanks to Ahmet Alp Balkan for the diagrams.


It is not the most technically accurate diagram, but it illustrates how NodePort works well.


Original: Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?

