HAProxy install w/Kubernetes 1.16.x

Mindwatering Incorporated

Author: Tripp W Black

Created: 11/16/2019 at 02:02 PM

 

Category:
Linux
Kubernetes

Tasks:
Add an HAProxy w/in K8s to service a demo nginx app on multiple IPs. This was tested using the K8s practice cluster set-up in the support reference app.

Note:
These steps are based on the guide on the haproxy.com web site.

There are three steps:
- set-up of the ingress-controller
- - creates the Namespace, the ServiceAccount, the ClusterRole and its ClusterRoleBinding (via RBAC and the ServiceAccount just created), and the default 404 route/answer pod
- set-up of the Deployment of the app, labeled 'app', with its Ingress and Service
- - creates a Deployment of an 'app' using a predefined image
- - creates the Service bound to 'app' that connects the app to the haproxy and sets up round-robin load balancing
- - creates the Ingress that maps the domain, nowhere.milky.way, to the app, 'app'

1. Get the Ingress Controller:
$ cd ~/working/
$ curl https://raw.githubusercontent.com/haproxytech/kubernetes-ingress/master/deploy/haproxy-ingress.yaml -O
$ kubectl apply -f haproxy-ingress.yaml

Confirm the new pods are running:
$ kubectl get namespace
< one of the entries is the new one: haproxy-controller >
$ kubectl get pods --namespace=haproxy-controller
< returns two new running pods: haproxy-ingress-x-y, and ingress-default-backend-x-y >

Get the cluster service ports in use:
$ kubectl get svc --namespace=haproxy-controller
< this returns the ports; we will test on the non-SSL port, 32639. A sample return is below. >
haproxy-ingress NodePort 10.104.128.27 <none> 80:32639/TCP,443:32406/TCP,1024:31986/TCP 3m
ingress-default-backend ClusterIP 10.109.207.226 <none> 8080/TCP

Confirm the 404 default page is working:
$ curl -I -H 'host: nowhere.milky.way' 'http://k8master:32639'
< output should be like below >
HTTP/1.1 404 Not Found
date: Sat, 16 Nov 2019 19:20:52 GMT
content-length: 21
content-type: text/plain; charset=utf-8

2. Create the deployment file, and add the ConfigMap.
In this example, we are running an nginx image.

1AppDeploy.yaml
$ kubectl apply -f 1AppDeploy.yaml
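The attached 1AppDeploy.yaml is not reproduced in this note. A minimal sketch consistent with the names seen in the output below (two nginx-demo pods, a Service on port 80, and an Ingress for nowhere.milky.way) might look like the following; the label key, replica count, and image tag are assumptions, not a copy of the original file:

```yaml
# Hypothetical reconstruction of 1AppDeploy.yaml (not the original attachment).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
  labels:
    run: nginx-demo
spec:
  replicas: 2                 # assumption: two pods, matching the two EPs shown below
  selector:
    matchLabels:
      run: nginx-demo
  template:
    metadata:
      labels:
        run: nginx-demo
    spec:
      containers:
      - name: nginx-demo
        image: nginx:1.12.1   # assumption: matches the "server: nginx/1.12.1" header seen later
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  selector:
    run: nginx-demo
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1beta1   # v1beta1 is the Ingress API current in K8s 1.16
kind: Ingress
metadata:
  name: nginx-demo
spec:
  rules:
  - host: nowhere.milky.way
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-demo
          servicePort: 80
```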

2ConfigMap.yaml
$ kubectl apply -f 2ConfigMap.yaml
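The attached 2ConfigMap.yaml is also not reproduced here. The haproxy-ingress controller reads its global options from a ConfigMap (by default default/haproxy-configmap, per the controller's startup arguments in haproxy-ingress.yaml); a minimal sketch, with the data keys being assumptions rather than the original attachment's contents, might be:

```yaml
# Hypothetical reconstruction of 2ConfigMap.yaml (not the original attachment).
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-configmap
  namespace: default
data:
  servers-increment: "42"   # assumption: pre-provision extra backend server slots
  ssl-redirect: "OFF"       # assumption: allow plain-HTTP testing on the NodePort
```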

Confirm the nginx web service is running in the pod:
$ kubectl get pods --all-namespaces
< output will include two pods: nginx-demo-x-y >

Test the pod endpoints (EPs), then the cluster SVC port:
$ kubectl get ep
< output returns the pod calico network addresses: e.g. nginx-demo 192.168.145.11:80,192.168.23.6:80 >
$ curl -I 192.168.145.11
< output returns HTTP/1.1 200 OK, plus additional lines. Success! >
$ curl -I 192.168.23.6
< output returns HTTP/1.1 200 OK, plus additional lines. Success! >

$ kubectl get svc --namespace=haproxy-controller
< output returns the haproxy-ingress cluster IP: e.g. haproxy-ingress 10.100.141.57, port 80 >

Try out the proxy at its SVC point:
$ curl -I 10.100.141.57:80 -H 'Host: nowhere.milky.way'
< output returns HTTP/1.1 200 OK, plus additional lines. Success! >

Confirm it's really answering at its SVC point by removing the host name passed:
$ curl 10.100.141.57:80
< output returns the text: default backend - 404 >

Confirm it's working at the k8master front-end:
$ curl -I k8master:32639 -H 'Host: nowhere.milky.way'
HTTP/1.1 200 OK
server: nginx/1.12.1
date: Sun, 17 Nov 2019 02:44:10 GMT
content-type: text/html
content-length: 612
last-modified: Tue, 11 Jul 2017 13:24:06 GMT
etag: "5964d176-264"
accept-ranges: bytes


