Set up path-based routing for a Rails app with HAProxy Ingress

Rahul Mahale

February 28, 2018

After months of testing, we recently moved a Ruby on Rails application to production on a Kubernetes cluster.

In this article we will discuss how to set up path-based routing for a Ruby on Rails application in Kubernetes using HAProxy Ingress.

This post assumes that you have a basic understanding of Kubernetes terms like pods, deployments, services, configmaps and ingress.

Typically our Rails app has services like unicorn/puma, sidekiq/delayed-job/resque, websockets and some dedicated API services. We had one web service exposed to the world using a load balancer, and it was working well. But as the traffic increased, it became necessary to route traffic based on the URL path.

However, Kubernetes does not support this type of load balancing out of the box. There is work in progress on alb-ingress-controller to support this, but we could not rely on it for production usage as it is still in alpha.

The best way to achieve path-based routing was to use an ingress controller.

We researched and found that there are different types of ingress controllers available in the Kubernetes world.

  1. nginx-ingress
  2. ingress-gce
  3. HAProxy-ingress
  4. traefik
  5. voyager

We experimented with nginx-ingress and HAProxy and decided to go with HAProxy. HAProxy has better support for Rails websockets, which we needed in this project.

We will walk you through, step by step, how to use HAProxy Ingress with a Rails app.

Configuring Rails app with HAProxy ingress controller

Here is what we are going to do.

  • Create a Rails app with different services and deployments.
  • Create a TLS secret for SSL.
  • Create the HAProxy Ingress configmap.
  • Create the HAProxy Ingress controller.
  • Expose the ingress with a service of type LoadBalancer.
  • Set up the app DNS with the ingress service.
  • Create ingress rules specifying path-based routing.
  • Test the path-based routing.

Now let's build the Rails application deployment manifests for the web (unicorn), background (sidekiq), websocket (ruby thin) and API (dedicated unicorn) services.

Here is our web app deployment and service template.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-web
  labels:
    app: test-production-web
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-web
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: web
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-web
  labels:
    app: test-production-web
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-web

Here is the background app deployment and service template.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-background
  labels:
    app: test-production-background
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-background
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: background
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-background
  labels:
    app: test-production-background
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-background

Here is the websocket app deployment and service template.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-websocket
  labels:
    app: test-production-websocket
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-websocket
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: websocket
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-websocket
  labels:
    app: test-production-websocket
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-websocket

Here is the API app deployment and service template.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-api
  labels:
    app: test-production-api
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-api
    spec:
      containers:
      - image: <your-repo>/<your-image-name>:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: api
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-api
  labels:
    app: test-production-api
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-api

Let's launch these manifests using kubectl apply.

$ kubectl apply -f test-web.yml -f test-background.yml -f test-websocket.yml -f test-api.yml
deployment "test-production-web" created
service "test-production-web" created
deployment "test-production-background" created
service "test-production-background" created
deployment "test-production-websocket" created
service "test-production-websocket" created
deployment "test-production-api" created
service "test-production-api" created
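Before moving on, it is a good idea to confirm that the deployments and services actually came up. Standard kubectl checks like the following are enough (the output will vary with your cluster):

$ kubectl -n test get deployments
$ kubectl -n test get pods
$ kubectl -n test get svc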

Once our app is deployed and running, we should create the HAProxy Ingress. Before that, let's create a TLS secret with our SSL key and certificate.

This secret is also used to enable HTTPS for the app URL and to terminate SSL at L7.

$ kubectl -n test create secret tls tls-certificate --key server.key --cert server.pem

Here, server.key is our SSL private key and server.pem is our SSL certificate in PEM format.
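If you only want to try this setup out and do not have a CA-issued certificate at hand, a self-signed key/certificate pair for the example domain used later in this post can be generated with openssl:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout server.key -out server.pem \
    -subj "/CN=test-rails-app.com"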

Now let's create the HAProxy Ingress controller resources.

HAProxy configmap

For all the available configuration parameters of HAProxy Ingress, refer here.

apiVersion: v1
data:
  dynamic-scaling: "true"
  backend-server-slots-increment: "4"
kind: ConfigMap
metadata:
  name: haproxy-configmap
  namespace: test
HAProxy Ingress controller deployment

Here is the Deployment template for the ingress controller, with at least 2 replicas so traffic keeps being served during rolling deploys.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      containers:
        - name: haproxy-ingress
          image: quay.io/jcmoraisjr/haproxy-ingress:v0.5-beta.1
          args:
            - --default-backend-service=$(POD_NAMESPACE)/test-production-web
            - --default-ssl-certificate=$(POD_NAMESPACE)/tls-certificate
            - --configmap=$(POD_NAMESPACE)/haproxy-configmap
            - --ingress-class=haproxy
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: stat
              containerPort: 1936
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace

The notable fields in the above manifest are the arguments passed to the controller.

--default-backend-service is the service that handles requests when no ingress rule matches them.

In our case it is the test-production-web service, but it could also be a dedicated custom 404 backend, as in the sketch below, or whatever suits your app better.
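For reference, a minimal dedicated 404 backend could look like the following sketch. It uses the generic defaultbackend image commonly paired with ingress controllers; the names here are illustrative and not part of our setup.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-backend
  namespace: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-backend
    spec:
      containers:
      - name: default-backend
        # serves 404 on / and 200 on /healthz, listening on 8080
        image: gcr.io/google_containers/defaultbackend:1.4
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-backend
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: default-backend

You would then point --default-backend-service at $(POD_NAMESPACE)/default-backend instead of the web service.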

--default-ssl-certificate is the TLS secret we just created above. It terminates SSL at L7 so that our app is served over HTTPS to the outside world.

HAProxy Ingress service

This is a LoadBalancer-type service that allows client traffic to reach our ingress controller.

The LoadBalancer has access to both the public network and the internal Kubernetes network, while the ingress controller retains the L7 routing.

apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: test
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
    - name: stat
      port: 1936
      protocol: TCP
      targetPort: 1936
  selector:
    run: haproxy-ingress

Now let's apply all the HAProxy manifests.

$ kubectl apply -f haproxy-configmap.yml -f haproxy-deployment.yml -f haproxy-service.yml
configmap "haproxy-configmap" created
deployment "haproxy-ingress" created
service "haproxy-ingress" created

Once all the resources are running, get the LoadBalancer endpoint using:

$ kubectl -n test get svc haproxy-ingress -o wide

NAME              TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                                     AGE       SELECTOR
haproxy-ingress   LoadBalancer   100.67.194.186   a694abcdefghi11e8bc3b0af2eb5c5d8-806901662.us-east-1.elb.amazonaws.com   80:31788/TCP,443:32274/TCP,1936:32157/TCP   2m        run=haproxy-ingress

DNS mapping with application URL

Once we have the ELB endpoint of the ingress service, map it in DNS to the application URL, e.g. test-rails-app.com.
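Assuming a CNAME record is created pointing test-rails-app.com at the ELB hostname, the mapping can be checked with dig, which should print something like:

$ dig +short CNAME test-rails-app.com
a694abcdefghi11e8bc3b0af2eb5c5d8-806901662.us-east-1.elb.amazonaws.com.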

Ingress Implementation

Now, after doing all the hard work, it is time to configure the ingress and its path-based rules.

In our case we want the following rules.

Requests to https://test-rails-app.com should be served by test-production-web.

Requests to https://test-rails-app.com/websocket should be served by test-production-websocket.

Requests to https://test-rails-app.com/api should be served by test-production-api.

Let's create an ingress manifest defining all these rules.

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: test
spec:
  tls:
    - hosts:
        - test-rails-app.com
      secretName: tls-certificate
  rules:
    - host: test-rails-app.com
      http:
        paths:
          - path: /
            backend:
              serviceName: test-production-web
              servicePort: 80
          - path: /api
            backend:
              serviceName: test-production-api
              servicePort: 80
          - path: /websocket
            backend:
              serviceName: test-production-websocket
              servicePort: 80
Moreover, there are ingress annotations available for adjusting the configuration further.
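For example, redirecting plain HTTP traffic to HTTPS is controlled with the ssl-redirect annotation in the haproxy-ingress controller; treat the exact annotation name as something to confirm against the documentation for your controller version. It would be added to the ingress metadata like this:

metadata:
  name: ingress
  namespace: test
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"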

As expected, default traffic on / is now routed to the test-production-web service.

/api is routed to the test-production-api service.

/websocket is routed to the test-production-websocket service.
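To test the path-based routing from outside the cluster, a few curl requests against the mapped domain are enough; the responses themselves depend on the routes your app defines, but each path should be answered by its respective service.

$ curl -I https://test-rails-app.com/
$ curl -I https://test-rails-app.com/api
$ curl -I https://test-rails-app.com/websocket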

Thus the ingress implementation solves our need for path-based routing and SSL termination at L7 on Kubernetes.
