Today, I am going to show you a demo app that visualizes the running containers in your GKE cluster. A while back we needed to demonstrate Kubernetes to several parties, and we found it difficult to show scale-ups and scale-downs graphically.

There is a cool Hexboard app that was shown during a Red Hat OpenShift demo, but it kept crashing whenever I did a refresh. So I built a small dashboard using Python that shows the pods in a namespace and keeps updating as you scale up and down.

This setup has the following components:

  1. Dashboard App (grid-app)
  2. App that gets scaled up and down (demo app)
  3. A new k8s namespace (k8sdemo)
  4. Service accounts and Roles

Namespace and Service account

We create a new k8s namespace (k8sdemo) to deploy the “demo app” into, create a service account (demoadmin), and grant the cluster-wide “view” role to that service account so it can query the k8s API and get the pod details. Save the following YAML to a file (the clean-up section at the end assumes sa.yaml) and run ‘kubectl apply -f sa.yaml’ to create those resources.

Do I need to use the same namespace name? Yes, because my Python app is hard-coded to read the pods in the ‘k8sdemo’ namespace. I could have parameterized it, but I was a bit lazy :D
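
For the curious, here is a minimal sketch of the kind of query the dashboard makes. This is not the actual app code, just an illustration assuming a direct call to the Kubernetes REST API with the requests library, using the same K8S_API and K8S_TOKEN environment variables the grid-app deployment below sets:

    import os
    import requests

    # Sketch only: list the pods in the hard-coded k8sdemo namespace.
    api = os.environ["K8S_API"]      # e.g. https://kubernetes.default
    token = os.environ["K8S_TOKEN"]  # service account token with "view" access

    resp = requests.get(
        f"{api}/api/v1/namespaces/k8sdemo/pods",
        headers={"Authorization": f"Bearer {token}"},
        verify=False,  # demo shortcut: skip verifying the cluster CA cert
    )
    resp.raise_for_status()

    # Each pod's name and phase (Running, Pending, ...) drives the grid colors.
    for pod in resp.json()["items"]:
        print(pod["metadata"]["name"], pod["status"]["phase"])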

    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: k8sdemo
      labels:
        name: k8sdemo

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: demoadmin
      namespace: k8sdemo


    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: demoadmin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view
    subjects:
      - kind: ServiceAccount
        name: demoadmin
        namespace: k8sdemo
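
To verify the binding works, you can ask the API server whether the service account is allowed to list pods:

    kubectl auth can-i list pods -n k8sdemo --as=system:serviceaccount:k8sdemo:demoadmin

This should print “yes”.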

Dashboard app (grid app) to demo pods

Use the following deployment file to deploy the dashboard app. Make sure you deploy it to a namespace other than k8sdemo; if you deploy it to the k8sdemo namespace, the pod results will include this grid-app as well.

How to get the token value:

Run the following command to get the token value of the service account; you need to pass this value to the deployment below (as K8S_TOKEN).

    kubectl -n k8sdemo describe secret $(kubectl -n k8sdemo get sa | grep demoadmin | awk '{print $1}')
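
Note: on Kubernetes 1.24 and later, token secrets are no longer created automatically for service accounts, so the command above may return nothing. In that case, request a (short-lived) token directly:

    kubectl -n k8sdemo create token demoadmin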


    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: grid-app
      namespace: default
      labels:
        app: grid-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: grid-app
      template:
        metadata:
          labels:
            app: grid-app
        spec:
          containers:
          - name: grid-app
            image: prageesha/k8s-demo
            env:
            - name: K8S_API
              value: "https://kubernetes.default"
            - name: K8S_TOKEN
              value: "<Token Value>"
            ports:
            - containerPort: 5000
              name: "grid-apptcp"
              protocol: "TCP"
            resources:
              requests:
                memory: "64Mi"
                cpu: "10m"
    ---

    apiVersion: v1
    kind: Service
    metadata:
      name: grid-app
      namespace: default
    spec:
      type: ClusterIP
      selector:
        app: grid-app
      ports:
      - protocol: TCP
        port: 80
        targetPort: 5000

Save the above YAML as k8s-demo.yaml and create the grid app:

    kubectl apply -f k8s-demo.yaml
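
Check that the dashboard pod comes up before moving on:

    kubectl -n default get pods -l app=grid-app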

Running App

Expose your grid-app

In order to access this app you need to expose it as a LoadBalancer, or by any other method you use in your Kubernetes cluster to access apps. In this case I am going to use port forwarding on my service for testing.

Once you click on PORT FORWARDING in the GKE console, it creates a Cloud Shell session and runs the port-forwarding command for you. Now you can access the application using the “Open in web preview” button. You should see an empty dashboard; don’t worry, there is nothing to show because we have not created our demo app yet.
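
If you prefer the command line over the console button, the equivalent is:

    kubectl -n default port-forward svc/grid-app 8080:80

Then browse to http://localhost:8080.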

Creating our demo pod

Use the following pod.yaml to showcase scaling up and down.

pod.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo
      namespace: k8sdemo
      labels:
        app: demo
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: demo
      template:
        metadata:
          labels:
            app: demo
        spec:
          containers:
          - name: demo
            image: k8s.gcr.io/goproxy:0.1
            readinessProbe:
              tcpSocket:
                port: 8080
              initialDelaySeconds: 10
              periodSeconds: 20
            livenessProbe:
              tcpSocket:
                port: 8080
              initialDelaySeconds: 15
              periodSeconds: 20
            ports:
            - containerPort: 8080
            resources:
              requests:
                memory: "16Mi"
                cpu: "5m"
    ---

    apiVersion: v1
    kind: Service
    metadata:
      name: demo
      namespace: k8sdemo
    spec:
      type: ClusterIP
      selector:
        app: demo
      ports:
      - protocol: TCP
        port: 8080
        name: "demotcp"
        targetPort: 8080

Create the deployment and service:

    kubectl apply -f pod.yaml
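
While the dashboard updates, you can also watch the pods come up in a second terminal:

    kubectl -n k8sdemo get pods -w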

When you create the above deployment, you should see your grid-app display those pods as they are created.


Scale up your deployment

    kubectl -n k8sdemo scale --replicas=20 deployment/demo
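
You can confirm the scale-up from the CLI as well by checking the deployment’s ready count:

    kubectl -n k8sdemo get deployment demo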


Delete the pods to watch the demo scale down and back up on the dashboard.

    kubectl -n k8sdemo delete pods --all
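
The Deployment controller immediately recreates the deleted pods to get back to 20 replicas. If you want to follow the churn outside the dashboard, the namespace events show the kills and restarts:

    kubectl -n k8sdemo get events --sort-by=.lastTimestamp | tail -n 20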


Pods get deleted and recreated. The green tiles are running pods, and the blue ones are pods in a not-running state.

Finally, the grid settles at 20 pods, because we scaled our deployment to 20 replicas earlier.

Clean Up

Delete all the resources you created using the following commands and YAML files.


    kubectl delete -f ./pod.yaml

    kubectl delete -f ./k8s-demo.yaml

    kubectl delete -f ./sa.yaml
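
The namespace may take a little while to terminate; you can confirm everything is gone with:

    kubectl get namespace k8sdemo

This should eventually return a NotFound error.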
