In my previous post we discussed deploying an app to the GKE cluster we created. Now we will focus on how to access this app in a cost-effective way. Typically, to access an application deployed in GKE you would use an HTTP LoadBalancer, but Google load balancers are fairly expensive.

As a solution, we deploy an nginx pod on every node (as a DaemonSet) together with a ConfigMap that holds the nginx configuration.

DaemonSet YAML

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet
          containers:
          - image: nginx:1.15.3-alpine
            name: nginx
            ports:
            - name: http
              containerPort: 80
              hostPort: 80
            volumeMounts:
            - name: "config"
              mountPath: "/etc/nginx"
            resources:
              requests:
                memory: "64Mi"
                cpu: "20m"
          volumes:
          - name: config
            configMap:
              name: nginx-conf

ConfigMap YAML

    ---

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-conf
    data:
      nginx.conf: |
        worker_processes 1;
        error_log /dev/stdout info;

        events {
          worker_connections 10;
        }

        http {
          access_log /dev/stdout;

          server {
            listen 80;
            server_name prageesha.com;
            location / {
              proxy_pass http://my-app.default.svc.cluster.local:80;
            }
          }

        }        

As additional configuration we set two fields on the pod spec: hostNetwork: true, so that nginx binds the host's port 80 and is reachable from outside the node, and dnsPolicy: ClusterFirstWithHostNet, so that nginx can still resolve Services inside the cluster.
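
The proxy_pass target above assumes a ClusterIP Service named my-app in the default namespace (the app deployed in the previous post). As a rough sketch of what that Service could look like; the selector label and target port are assumptions and should match your own Deployment:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app            # must match the hostname used in proxy_pass
    spec:
      type: ClusterIP
      selector:
        app: my-app           # assumed pod label from the previous post's Deployment
      ports:
      - port: 80              # port referenced by proxy_pass
        targetPort: 8080      # assumed container port of the app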

Once the DaemonSet and ConfigMap are applied, we can access the app using the GKE nodes' public IPs. The next step is to point my DNS records at these IPs.
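
For reference, applying the manifests and finding the node IPs looks roughly like this (the file names are assumptions, and this also assumes a VPC firewall rule allows inbound traffic to the nodes on port 80):

    kubectl apply -f nginx-configmap.yaml
    kubectl apply -f nginx-daemonset.yaml

    # the EXTERNAL-IP column shows each node's public IP
    kubectl get nodes -o wide

    # quick test against a single node, using the server_name from nginx.conf
    curl -H "Host: prageesha.com" http://NODE_EXTERNAL_IP/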

Problem?

The problem is that we won't get static public IPs for the GKE nodes, because we used preemptible VMs when creating the cluster. The IPs therefore change frequently, and we have to update the DNS records whenever they do.

My DNS is hosted in Cloudflare, so I need a mechanism to update the Cloudflare DNS records with my GKE worker nodes' public IPs. To achieve this I use the "calebdoxsey/kubernetes-cloudflare-sync" image with a CronJob. When the image notices that the GKE node IPs have changed, it updates the Cloudflare DNS records.

How to Authenticate with Cloudflare

You can go to “My Profile” in your Cloudflare account and view your API key. Note down this API key and create a secret from it together with the email address of your Cloudflare account.

    kubectl create secret generic cloudflare-galagedara --from-literal=email=YOUR_CLOUDFLARE_ACCOUNT_EMAIL_ADDRESS_HERE --from-literal=api-key=YOUR_CLOUDFLARE_GLOBAL_API_KEY_HERE
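
To double-check that the secret was created with the expected keys:

    # should list "api-key" and "email" under the Data section
    kubectl describe secret cloudflare-galagedara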

CronJob, ServiceAccount, and RBAC YAML

    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: cloudflare-sync-galagedara
      labels:
        app: cloudflare-sync-galagedara
    spec:
      schedule: "*/10 * * * *"
      concurrencyPolicy: Replace
      jobTemplate:
        spec:
          template:
            spec:
              serviceAccountName: kubernetes-cloudflare-sync
              restartPolicy: OnFailure
              containers:
              - name: cloudflare-sync-galagedara
                image: calebdoxsey/kubernetes-cloudflare-sync
                args:
                - --dns-name=dimuthu.galagedara.com,galagedara.com,prageesha.galagedara.com,thilina.galagedara.com
                - --cloudflare-proxy=true
                resources:
                  requests:
                    memory: "10Mi"
                    cpu: "10m"
                env:
                - name: CF_API_KEY
                  valueFrom:
                    secretKeyRef:
                      name: cloudflare-galagedara
                      key: api-key
                - name: CF_API_EMAIL
                  valueFrom:
                    secretKeyRef:
                      name: cloudflare-galagedara
                      key: email

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: kubernetes-cloudflare-sync
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: kubernetes-cloudflare-sync
    rules:
    - apiGroups: [""]
      resources: ["nodes"]
      verbs: ["list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-cloudflare-sync-viewer
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: kubernetes-cloudflare-sync
    subjects:
    - kind: ServiceAccount
      name: kubernetes-cloudflare-sync
      namespace: default

I have created a ServiceAccount and granted it the above permissions using a ClusterRole and ClusterRoleBinding. This is what allows the job to list the GKE nodes.
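
A quick way to verify the binding works is to impersonate the ServiceAccount:

    # should print "yes"
    kubectl auth can-i list nodes \
      --as=system:serviceaccount:default:kubernetes-cloudflare-sync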

Why a CronJob, not a Deployment?

Previously I deployed this cloudflare-sync pod using a Deployment, but I noticed that it sometimes failed to update the DNS records. Therefore, as a quick workaround, I decided to create a CronJob, so that every 10 minutes it checks whether the GKE nodes' IPs have changed and updates the DNS. (Please note that this method is not suitable for production-grade workloads.)
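
If you don't want to wait for the next scheduled run, you can trigger one manually from the CronJob (the job name here is arbitrary):

    kubectl create job --from=cronjob/cloudflare-sync-galagedara cloudflare-sync-manual
    kubectl logs job/cloudflare-sync-manual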

Now you can access your application using your DNS name. When preemptible GKE nodes are re-created, the DNS records are updated automatically. This method can be used to run test workloads cheaply on GKE.
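
As a quick sanity check after a node has been recycled (note that because --cloudflare-proxy=true is set, the names resolve to Cloudflare's proxy IPs rather than directly to the node IPs):

    dig +short prageesha.galagedara.com
    curl -I http://prageesha.galagedara.com/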

Code-base for this post