How to deploy a Ruby on Rails application with HAProxy Ingress, Unicorn/Puma, and WebSockets

Original author: Rahul Mahale

After several months of testing, we finally moved our Ruby on Rails application to production on a Kubernetes cluster.


In this article, I'll show you how to set up path-based routing for a Ruby on Rails application in Kubernetes with the HAProxy Ingress controller.




It is assumed that you are roughly familiar with Pods, Deployments, Services, ConfigMaps, and Ingress in Kubernetes.


A Rails application usually runs several services: an app server (Unicorn/Puma), background workers (Sidekiq/delayed_job/Resque), WebSockets, and sometimes dedicated API services. We had a single web service exposed to the outside through a load balancer, and everything worked fine. But traffic grew, and we needed to route requests by URL path.


Kubernetes has no ready-made load-balancing solution for this. The alb-ingress-controller is being developed for it, but it is still in alpha and not suitable for production.


For path-based routing, an Ingress controller was the best fit.


We looked into it and found that Kubernetes offers several different Ingress solutions.



We experimented with nginx-ingress and HAProxy and settled on HAProxy: it is better suited for the Rails WebSockets we used in the project.


I'll walk you through, step by step, how to hook HAProxy Ingress up to a Rails application.


Configuring Rails Applications with the HAProxy Ingress Controller


Here is what we will do:


  • Create the Rails application Deployments and Services.
  • Create a TLS secret for SSL.
  • Create an HAProxy Ingress ConfigMap.
  • Create the HAProxy Ingress controller.
  • Expose the Ingress controller through a Service of type LoadBalancer.
  • Map the application's DNS to the Ingress service endpoint.
  • Create the Ingress rules for path-based routing.
  • Test the path-based routing.

Let's create Deployment manifests for the Rails application's different services: web (Unicorn), background jobs (Sidekiq), WebSocket (Ruby Thin), and API (a dedicated Unicorn). The manifests are nearly identical; they differ mainly in the APP_TYPE environment variable, which tells the container what to start, as sketched below.
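

A minimal sketch of such an entrypoint (the file and script names here are assumptions for illustration, not the actual image contents):


#!/bin/sh
# Hypothetical entrypoint: choose which process to run based on APP_TYPE
case "$APP_TYPE" in
  web)        exec bundle exec unicorn -c config/unicorn.rb ;;
  background) exec bundle exec sidekiq ;;
  websocket)  exec bundle exec thin start -R websocket.ru -p 80 ;;
  api)        exec bundle exec unicorn -c config/unicorn_api.rb ;;
esac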


Here is the web Deployment and Service template.


---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-web
  labels:
    app: test-production-web
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-web
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: web
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-web
  labels:
    app: test-production-web
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-web

Here is the background worker Deployment and Service template.


---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-background
  labels:
    app: test-production-background
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-background
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: background
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-background
  labels:
    app: test-production-background
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-background

Here is the WebSocket Deployment and Service template.


---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-websocket
  labels:
    app: test-production-websocket
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-websocket
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: websocket
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-websocket
  labels:
    app: test-production-websocket
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-websocket

Here is the API Deployment and Service template.


---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-api
  labels:
    app: test-production-api
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-api
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: api
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-api
  labels:
    app: test-production-api
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-api

Let's apply the manifests with kubectl apply.


$ kubectl apply -f test-web.yml -f test-background.yml -f test-websocket.yml -f test-api.yml
deployment "test-production-web" created
service "test-production-web" created
deployment "test-production-background" created
service "test-production-background" created
deployment "test-production-websocket" created
service "test-production-websocket" created
deployment "test-production-api" created
service "test-production-api" created

Once the application is deployed and running, you will need to create the HAProxy Ingress. But first, let's create a TLS secret with our SSL key and certificate.


This will also enable HTTPS for the application URL and terminate SSL at L7.


$ kubectl create secret tls tls-certificate --key server.key --cert server.pem


Here server.key is our SSL key and server.pem is our SSL certificate in PEM format.
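

If you don't have a certificate yet, a self-signed one is enough for testing (browsers will warn about it); a sketch using openssl, assuming our example domain:


$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout server.key -out server.pem \
    -subj "/CN=test-rails-app.com"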


Now let's create the HAProxy Ingress controller resources.


HAProxy ConfigMap


All the available HAProxy Ingress configuration options are listed here.


apiVersion: v1
data:
    dynamic-scaling: "true"
    backend-server-slots-increment: "4"
kind: ConfigMap
metadata:
  name: haproxy-configmap
  namespace: test
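
Since WebSocket connections are long-lived, it may also be worth raising the tunnel timeout. A sketch, assuming your controller version supports the timeout-client and timeout-tunnel keys:


apiVersion: v1
data:
    dynamic-scaling: "true"
    backend-server-slots-increment: "4"
    # Assumed keys: keep long-lived WebSocket connections from being cut off
    timeout-client: "50s"
    timeout-tunnel: "1h"
kind: ConfigMap
metadata:
  name: haproxy-configmap
  namespace: test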

Deploying the HAProxy Ingress Controller


Here is a Deployment manifest for the Ingress controller, with at least two replicas so that rolling updates don't interrupt traffic.


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress:v0.5-beta.1
        args:
        - --default-backend-service=$(POD_NAMESPACE)/test-production-web
        - --default-ssl-certificate=$(POD_NAMESPACE)/tls-certificate
        - --configmap=$(POD_NAMESPACE)/haproxy-configmap
        - --ingress-class=haproxy
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

In this manifest, we are particularly interested in the arguments passed to the controller.
--default-backend-service is the service that will handle a request when no Ingress rule matches it.


For us it is test-production-web, but it could be a custom 404 page or anything else; that's up to you.


--default-ssl-certificate is the TLS secret we just created. SSL will be terminated at L7, and the application will be accessible externally over HTTPS.
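

The controller also exposes the HAProxy statistics page on port 1936 (the stat port in the manifest above). With a reasonably recent kubectl you can take a look at it through a port-forward:


$ kubectl -n test port-forward deployment/haproxy-ingress 1936:1936


Then open http://localhost:1936 in a browser.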


HAProxy Ingress Service


This is a Service of type LoadBalancer that lets client traffic reach our Ingress controller.


The LoadBalancer is attached to both the public network and the internal Kubernetes network, and it forwards traffic to the Ingress controller, which routes it at L7.


apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: test
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: stat
    port: 1936
    protocol: TCP
    targetPort: 1936
  selector:
    run: haproxy-ingress

Let's apply all the HAProxy manifests.


$ kubectl apply -f haproxy-configmap.yml -f haproxy-deployment.yml -f haproxy-service.yml
configmap "haproxy-configmap" created
deployment "haproxy-ingress" created
service "haproxy-ingress" created

Once all the resources are running, look up the LoadBalancer endpoint.


$ kubectl -n test get svc haproxy-ingress -o wide
NAME              TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                                     AGE       SELECTOR
haproxy-ingress   LoadBalancer   100.67.194.186   a694abcdefghi11e8bc3b0af2eb5c5d8-806901662.us-east-1.elb.amazonaws.com   80:31788/TCP,443:32274/TCP,1936:32157/TCP   2m        run=haproxy-ingress

Mapping DNS to the application URL


Once we have the ELB endpoint of the Ingress service, we need to point the application's DNS (for example, test-rails-app.com) at it.
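

Note that test-rails-app.com is a zone apex, where a plain CNAME record is not allowed; on AWS Route 53 you would use an ALIAS record pointing at the ELB. For a subdomain, an ordinary CNAME is enough; a sketch in zone-file syntax, using the ELB hostname from above:


www.test-rails-app.com.  300  IN  CNAME  a694abcdefghi11e8bc3b0af2eb5c5d8-806901662.us-east-1.elb.amazonaws.com.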


Implementing Ingress


The hardest part is behind us. Now it's time to configure the Ingress and its path-based rules.


We need the following rules.


  • Requests to https://test-rails-app.com will be served by test-production-web.
  • Requests to https://test-rails-app.com/websocket will be served by test-production-websocket.
  • Requests to https://test-rails-app.com/api will be served by test-production-api.


Let's create an Ingress manifest with all these rules.


---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: test
spec:
  tls:
    - hosts:
      - test-rails-app.com
      secretName: tls-certificate
  rules:
    - host: test-rails-app.com
      http:
        paths:
          - path: /
            backend:
              serviceName: test-production-web
              servicePort: 80
          - path: /api
            backend:
              serviceName: test-production-api
              servicePort: 80
          - path: /websocket
            backend:
              serviceName: test-production-websocket
              servicePort: 80
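
Assuming the manifest is saved as ingress.yml, apply it like the others:


$ kubectl apply -f ingress.yml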

If the configuration needs tweaking later, there are annotations for Ingress resources.
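

For example, to force HTTP-to-HTTPS redirects, you could annotate the Ingress; a sketch, assuming your controller version supports the ssl-redirect annotation:


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: test
  annotations:
    # Assumed annotation: redirect plain HTTP requests to HTTPS
    ingress.kubernetes.io/ssl-redirect: "true"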


As expected, traffic to / is routed to test-production-web by default, /api to test-production-api, and /websocket to test-production-websocket.
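

A quick way to verify the routing from the command line (use -k to skip certificate verification if the certificate is self-signed):


$ curl -k https://test-rails-app.com/
$ curl -k https://test-rails-app.com/api
$ curl -k https://test-rails-app.com/websocket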


We needed path-based routing and L7 SSL termination in Kubernetes, and deploying Ingress solved the problem.

