
Kubernetes Test Load Balancing

If you are the skeptical type like I am and need to see proof that things are working, a base nginx server running on multiple pods is not much help when setting up load balancing in a Kubernetes cluster: every pod returns the same page, so you cannot tell which pod handled your request. In this post I will walk you through a deployment that lets you see your requests being handled by different pods.

 

Prerequisites

  1. Functioning Kubernetes cluster with more than 1 node
  2. MetalLB installed
    1. An available IP address pool (an example pool definition is shown below)
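
A minimal sketch of a MetalLB address pool, assuming a recent MetalLB release that is configured through its IPAddressPool and L2Advertisement resources (older releases used a ConfigMap instead). The address range below is only an example and needs to match your network:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.204.255.50-10.204.255.60
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool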

 

Container Image

The key to this is the hello-app container image from Google's container registry. This container runs a simple web server that, instead of a default web page, shows the hostname of the pod/container it is running in. You can find the repo here.
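
If you want to see what the image does before going anywhere near the cluster, you can optionally run it locally, assuming you have Docker available (the container listens on port 8080):

$ docker run --rm -p 8080:8080 gcr.io/google-samples/hello-app:1.0
$ curl http://localhost:8080

You should get back the same "Hello, world!" banner shown later in this post, with the Hostname line set to the container ID.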

 

Container Deployment YAML

Below is a yaml file that uses the hello-app container image from gcr.io and spins up 10 pods running that image.

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: hello-world
  name: hello-world
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hello-world
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: hello-app
        resources: {}
status: {}
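
One note on the deployment above: the hello-app container listens on port 8080. The spec works as-is, but if you would like that documented in the yaml you can optionally declare it on the container (purely informational; a small sketch of the containers section):

      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: hello-app
        ports:
        - containerPort: 8080
        resources: {}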

You can apply this with the following command, assuming it is saved as hello_world_pod_deployment.yaml in your current directory.

$  kubectl apply -f hello_world_pod_deployment.yaml
deployment.apps/hello-world created
$
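
If you would rather wait for all 10 replicas to come up before moving on, kubectl can watch the rollout for you:

$ kubectl rollout status deployment/hello-world
deployment "hello-world" successfully rolled out
$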

If you run kubectl get pods you will see 10 pods running your hello-world container.

$ kubectl get pods | grep hello-world
hello-world-75b9d468b8-rm6sd        1/1     Running   0          2m42s
hello-world-75b9d468b8-2r4h4        1/1     Running   0          2m42s
hello-world-75b9d468b8-qc5ft        1/1     Running   0          2m42s
hello-world-75b9d468b8-dn855        1/1     Running   0          2m42s
hello-world-75b9d468b8-bk2ww        1/1     Running   0          2m42s
hello-world-75b9d468b8-xpff8        1/1     Running   0          2m42s
hello-world-75b9d468b8-bwlr7        1/1     Running   0          2m42s
hello-world-75b9d468b8-fptdw        1/1     Running   0          2m42s
hello-world-75b9d468b8-wkxvw        1/1     Running   0          2m42s
hello-world-75b9d468b8-5sbvw        1/1     Running   0          2m42s
$

 

At this point we have a bunch of pods running. Now we need to deploy a load balancing service that we can use to access them.

 

Service Deployment YAML

Now that our pods are up and running, we need to create a service to access them. I want this to be accessible externally, so this service will use a specific IP address via MetalLB.

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-world
  type: LoadBalancer
  loadBalancerIP: 10.204.255.51

The key point in this yaml file is that we present port 80 to the world and route it to port 8080 on the pod (specified under spec --> ports --> port & targetPort).

spec --> selector --> app: hello-world tells the service that it will route to any pods that have an app label of "hello-world"

spec --> type: LoadBalancer specifies the service as a load balancer

spec --> loadBalancerIP: 10.204.255.51 tells MetalLB to use that IP address outside the cluster, giving the pods a consistent external address. You will want to update this for your network.

 

Assuming that yaml file is saved in your current directory as service_deployment.yaml, you can deploy the service with the following command.

$ kubectl apply -f service_deployment.yaml
service/hello-world created
$ 

And you can validate that the service is running.

$ kubectl get service
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
hello-world   LoadBalancer   10.152.183.62    10.204.255.51   80:31591/TCP   24s
$

You can now see how to access the service from within the cluster and from outside of the cluster via the CLUSTER-IP and EXTERNAL-IP respectively.
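
If you want to confirm that the service's selector actually picked up the 10 pods, you can check its endpoints; you should see one pod IP per replica on port 8080 (the addresses below are just illustrative for my cluster):

$ kubectl get endpoints hello-world
NAME          ENDPOINTS                                                     AGE
hello-world   10.1.77.12:8080,10.1.77.13:8080,10.1.77.14:8080 + 7 more...   45s
$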

 

Testing

Now that everything is deployed, let's test and validate that traffic is balanced between the pods.

 

Test Within the Cluster

To test from within the cluster, you can curl the CLUSTER-IP address from the command line. In my case it's 10.152.183.62.

$ curl 10.152.183.62
Hello, world!
Version: 1.0.0
Hostname: hello-world-75b9d468b8-2vn2q
$ curl 10.152.183.62
Hello, world!
Version: 1.0.0
Hostname: hello-world-75b9d468b8-dfnlg
$ curl 10.152.183.62
Hello, world!
Version: 1.0.0
Hostname: hello-world-75b9d468b8-8f8kr
$

If you cross-reference those hostnames with the output of kubectl get pods -o wide, you can see that a different pod handled each request.
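
If you want more than a handful of samples, a quick shell loop makes the spread obvious (run it from any machine that can reach the CLUSTER-IP; the address is the one from my cluster):

$ for i in $(seq 1 20); do curl -s 10.152.183.62 | grep Hostname; done | sort | uniq -c

Each distinct hostname in the output is a different pod, with the count showing how many of the 20 requests it handled.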

 

Test Outside the Cluster with a Browser

To test from outside your cluster with a browser, just point the browser at the EXTERNAL-IP, in my case 10.204.255.51.

The reason I call this out is that, in my experience, each time you refresh the browser it lands on the same pod, which makes it appear that load balancing is not working. I believe this is because the browser keeps the connection alive, so your session keeps getting routed to the same pod.

You can use a second browser on your computer, or a second computer, to validate that you get sent to a different pod, but I typically find a better way is to test from a machine with curl.

 

Test Outside the Cluster with Curl

If we use curl from an external computer, we can easily see the load balancing: each request is a new connection, so it gets routed to a different pod.

$ curl 10.204.255.51
Hello, world!
Version: 1.0.0
Hostname: hello-world-75b9d468b8-8f8kr
$ curl 10.204.255.51
Hello, world!
Version: 1.0.0
Hostname: hello-world-75b9d468b8-z72d4
$ curl 10.204.255.51
Hello, world!
Version: 1.0.0
Hostname: hello-world-75b9d468b8-rrzlp
$

 

And you can now see that, from outside the cluster as well, new sessions get routed to different pods.

 

Teardown

To clean up after this test, you can issue the following two commands to delete the service and the deployment (and with it all of the pods).

$ kubectl delete deployment hello-world
deployment.apps "hello-world" deleted
$ kubectl delete service hello-world
service "hello-world" deleted
$
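
Alternatively, if you still have the yaml files from earlier, you can delete both objects using the files themselves:

$ kubectl delete -f service_deployment.yaml
service "hello-world" deleted
$ kubectl delete -f hello_world_pod_deployment.yaml
deployment.apps "hello-world" deleted
$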

To validate that the service, deployment, and pods were deleted, issue the following command; you should get no output, as shown below.

$ kubectl get all | grep hello-world
$

 

And that's it: a fairly straightforward and easy way to demonstrate load balancing both within and outside of your cluster when routing traffic to pods in Kubernetes using MetalLB. If you still have questions, let me know in the comments so I can clear things up!

 

 


