I have a couple of applications that I need to run publicly. Initially I ran them in Docker containers on a GCE instance. The problem I faced was updating the code and pushing the changes to my live site. A better way to manage Docker containers is to deploy them with Kubernetes: it makes it easy to roll out and update changes in a production environment, and you can automate your entire workflow with little effort. To deploy my Docker images I used Google Kubernetes Engine (GKE), because it only charges for the underlying worker nodes.

Architecture

High-level architecture

As illustrated in the diagram above, I commit my code to a Bitbucket repository. When I merge my code to the master branch, Cloud Build takes care of the rest: it builds the new Docker image, pushes it to Google Container Registry, and then deploys the app to my GKE cluster.
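This build trigger can itself be managed in Terraform. A minimal sketch, assuming the Bitbucket repository is mirrored into Cloud Source Repositories; the repository name and the `cloudbuild.yaml` path are placeholders, not values from my setup:

```
resource "google_cloudbuild_trigger" "deploy_on_merge" {
  project = var.project_id

  # Fire on pushes to master, i.e. after a merge
  trigger_template {
    repo_name   = "my-app"   # assumed name of the mirrored repo
    branch_name = "^master$"
  }

  # The build steps (build image, push to GCR, deploy to GKE)
  # live in the repository's cloudbuild.yaml
  filename = "cloudbuild.yaml"
}
```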

Technologies I used

  • Google Kubernetes Engine
  • Terraform
  • GIT
  • Cloud Build

Google Kubernetes Engine

A Kubernetes cluster consists of master and worker nodes: the master nodes administer the cluster, and your application containers run on the worker nodes. Google Kubernetes Engine provides the master nodes for free, which means you only have to pay for the underlying worker nodes.

In addition, GCP provides preemptible VMs (short-lived, low-cost VMs). **The beauty of preemptible VMs is that they are up to 80% cheaper than regular instances.** A preemptible instance lasts at most 24 hours; the probability that Compute Engine terminates one earlier for a system event is generally low, but can vary from day to day and from zone to zone depending on current conditions.

The GKE cluster I created is a public cluster with one worker node, a g1-small compute instance ($5.11 per month). I have been running the cluster for the last two weeks and I still haven't seen my VM deleted due to a lack of resources.

Terraform

I use Terraform to provision my Kubernetes cluster. Terraform lets you build and version your infrastructure safely and efficiently. First, create a GCP storage bucket to store the Terraform state file. I created two IAM service accounts in GCP, one for the Terraform state backend (gcs-bucket) and one for provisioning resources (terraform), with the following roles.

Roles: gcs-bucket@<your-project-id>.iam.gserviceaccount.com

  • Storage Admin

Roles: terraform@<your-project-id>.iam.gserviceaccount.com

  • Compute Admin
  • Kubernetes Engine Admin
  • Service Account User
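The bucket and the two service accounts can be created in the console, with gcloud, or bootstrapped in Terraform itself. Here is a rough Terraform sketch of that bootstrap; the account IDs and bucket name mirror the naming above, and the role bindings mirror the lists, but treat it as an illustration rather than my exact setup:

```
# State bucket for the Terraform backend
resource "google_storage_bucket" "tf_state" {
  name     = "<your-gcs-bucket-name-id>"
  location = "US"
}

# Service account that reads/writes the state bucket
resource "google_service_account" "gcs_bucket" {
  account_id   = "gcs-bucket"
  display_name = "Terraform state backend"
}

resource "google_project_iam_member" "gcs_bucket_storage_admin" {
  project = var.project_id
  role    = "roles/storage.admin"
  member  = "serviceAccount:${google_service_account.gcs_bucket.email}"
}

# Service account that provisions the VPC and GKE resources
resource "google_service_account" "terraform" {
  account_id   = "terraform"
  display_name = "Terraform provisioning"
}

resource "google_project_iam_member" "terraform_roles" {
  for_each = toset([
    "roles/compute.admin",          # Compute Admin
    "roles/container.admin",        # Kubernetes Engine Admin
    "roles/iam.serviceAccountUser", # Service Account User
  ])
  project = var.project_id
  role    = each.value
  member  = "serviceAccount:${google_service_account.terraform.email}"
}
```

`for_each` requires Terraform 0.12.6 or newer, which matches the version requirement noted at the end of this post.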

Then I created a VPC with a subnet for my GKE nodes, using the official Google Terraform modules. The Terraform code for the VPC and the GKE cluster is shown below.

Terraform Storage Backend

    terraform {
      backend "gcs" {
        bucket = "<your-gcs-bucket-name-id>"
        prefix = "terraform-state"
        # project is not needed since the bucket already exists
        # project = "<your-project-id>"
        credentials = "/path/to/gcs-bucket@<your-project-id>.iam.gserviceaccount.com.json"
      }
    }

Creating VPC

    module "vpc" {
      source  = "terraform-google-modules/network/google"
      version = "~> 1.0.0"

      project_id   = var.project_id
      network_name = var.network_name
      routing_mode = "GLOBAL"

      subnets = [
        {
          subnet_name           = "gke-nodes"
          subnet_ip             = "10.8.100.0/23"
          subnet_region         = var.region
          subnet_private_access = "true"
        }
      ]



      secondary_ranges = {
        "gke-nodes" = [
          {
            range_name    = "gke-pods"
            ip_cidr_range = "10.220.0.0/15"
          },
          {
            range_name    = "gke-services"
            ip_cidr_range = "10.80.100.0/24"
          },
        ]
      }

    }

Creating GKE Cluster

    #------------------------------------------------------------------------------
    # Terraform module for creating a GKE cluster with managed node pools
    #------------------------------------------------------------------------------

    resource "google_container_cluster" "gke" {
      provider           = google-beta
      name               = "my-infra-gke"
      network            = var.network_name
      subnetwork         = var.gke_subnet_name
      min_master_version = var.master_version
      location           = var.region
      node_locations     = var.node_locations
      project            = var.project_id

      # Setting an empty username and password explicitly disables basic auth
      master_auth {
        username = ""
        password = ""

        client_certificate_config {
          issue_client_certificate = false
        }

      }

      # Create the smallest possible default node pool and immediately
      # delete it, so the node pool can be managed separately
      remove_default_node_pool = true
      initial_node_count       = 1

      # Networks from which the master should be accessible
      master_authorized_networks_config {
        cidr_blocks {
            cidr_block = "0.0.0.0/0"
            display_name = "public-access"
        }
      }




      # If the range names are omitted, subranges are created automatically
      # (/14 for pods, /19 for services); here they are passed in as variables.
      ip_allocation_policy {
        cluster_secondary_range_name  = var.ip_range_pods
        services_secondary_range_name = var.ip_range_services
      }

      ## Istio addon; set disabled = false to install it
      addons_config {
        istio_config {
          disabled = true
        }
      }


      maintenance_policy {
        daily_maintenance_window {
          start_time = var.maintenance_window
        }
      }
    }

    resource "google_container_node_pool" "core_node_pool" {
      name       = "core-node-pool"
      location   = var.region
      cluster    = google_container_cluster.gke.name
      node_count = var.core_node_pool_count
      project    = var.project_id
      version    = var.node_version


      node_config {
        machine_type = var.node_machine_type
        preemptible  = true
        disk_size_gb = var.disk_size
        disk_type = var.disk_type

        oauth_scopes = [
          "https://www.googleapis.com/auth/compute",
          "https://www.googleapis.com/auth/devstorage.read_only",
          "https://www.googleapis.com/auth/logging.write",
          "https://www.googleapis.com/auth/monitoring.write",
          "https://www.googleapis.com/auth/monitoring",
          "https://www.googleapis.com/auth/cloud_debugger",
          "https://www.googleapis.com/auth/cloud-platform",
          "https://www.googleapis.com/auth/service.management.readonly",
          "https://www.googleapis.com/auth/servicecontrol",
          "https://www.googleapis.com/auth/trace.append",
        ]

        labels = var.node_labels
        tags   = var.node_tags
      }

      management {
        auto_upgrade = var.nodes_auto_upgrade
        auto_repair  = var.nodes_auto_repair
      }
    }

**You can find my Terraform code for creating the GKE cluster and VPC here.**

How?

  1. Make sure you already have Terraform (version > v0.12.6) installed on your machine
  2. Create a GCS bucket
  3. Create the two service accounts as described above
  4. Update the Terraform code with your bucket name, project name, and the path to your downloaded service account JSON key, then run terraform apply
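The steps above boil down to a short command sequence. The key file path is a placeholder matching the backend config earlier, so treat this as a sketch of the workflow rather than an exact script:

```shell
# Point the Google provider at the provisioning service account's key
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/terraform@<your-project-id>.iam.gserviceaccount.com.json"

# Initialize the GCS state backend declared in the terraform block
terraform init

# Review the planned VPC and GKE changes before applying
terraform plan

# Create the VPC, GKE cluster, and node pool
terraform apply
```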

In the next post I will discuss how to use Cloud Build to deploy my app to this GKE cluster.