In this post, I am planning to discuss the decisions you need to make when creating a GKE cluster for your production environment. There are several flavours of settings to choose from when creating a GKE cluster. I came across these settings with the help of my friend Chanux; you can visit his blog for more awesome stuff.

Cluster Types

There are two major types of GKE clusters: regional GKE clusters and zonal GKE clusters. The main difference between them is that regional GKE clusters have multiple master nodes, while a zonal cluster has only a single master node.

What’s the difference?

Regional clusters are highly available GKE clusters, with three master nodes created in three zones (within the same region). That means if one zone goes down, you can still perform cluster operations, and during cluster upgrades you can still run your deployments. Zonal clusters, however, have only one master node, which means that during cluster upgrades you cannot perform cluster operations because the master API will be unavailable. Therefore, try to create regional clusters if possible, so there won't be downtime for your master nodes.

Worker Nodes

When creating a node pool, make sure you are using at least two zones for your worker nodes. Your worker nodes will then be spread across the two zones (two physical data centres) in the same region. If one zone goes down, your services will be able to run from the nodes in the other zone.
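To make this concrete, here is a minimal sketch of creating a regional cluster whose worker nodes span two zones, using the google-cloud-container Python client. The project, region, zone, and cluster names are placeholders, and the exact field set may vary slightly between client library versions, so treat this as an illustration rather than a copy-paste recipe.

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# A regional cluster: the parent location is a region (not a single zone),
# so GKE replicates the control plane across zones in that region.
cluster = container_v1.Cluster(
    name="prod-cluster",                            # hypothetical name
    initial_node_count=1,                           # nodes created per selected zone
    locations=["us-central1-a", "us-central1-b"],   # spread workers over two zones
)

operation = client.create_cluster(
    parent="projects/my-project/locations/us-central1",  # region => regional cluster
    cluster=cluster,
)
print(operation.name)
```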

Private vs Public Clusters

You need to consider this based on how you want to secure your cluster. First, we’ll look at public clusters.

Public clusters mean your GKE master nodes and worker nodes get publicly accessible IP addresses.

Private clusters have two aspects. The first is private masters, where your master nodes get a private IP address. In this scenario, you cannot access the master API over the internet; you can only access it from within your VPC network.

The other aspect is private worker nodes, meaning your worker nodes only get a private IP address (from your VPC), so nobody can access them from outside.

So what should we use?

In my previous experiences, I have always advised going with **public GKE masters** and private GKE worker nodes. I prefer public master nodes because sometimes you need to access your control plane (master API) from outside your VPC network. For example, you may need to access your GKE cluster from public GitHub or Bitbucket. Therefore, I always use public masters. BUT, I make sure I use "master authorized networks", which I will discuss below.
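Here is a hedged sketch of that combination with the same Python client: private worker nodes, but a publicly reachable master endpoint. The cluster name is a placeholder, and the master CIDR is the example range used later in this post.

```python
from google.cloud import container_v1

# Private worker nodes, public master endpoint.
private_config = container_v1.PrivateClusterConfig(
    enable_private_nodes=True,                 # workers get only VPC-internal IPs
    enable_private_endpoint=False,             # keep the master API reachable from the internet
    master_ipv4_cidr_block="172.16.0.32/28",   # /28 reserved for the masters
)

cluster = container_v1.Cluster(
    name="prod-cluster",
    private_cluster_config=private_config,
    # Add master authorized networks too; see the next section.
)
```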

Another problem with private worker nodes is that the pods you deploy cannot access the internet, because the GKE nodes do not have public IPs. To overcome this, we use a NAT service: we create a Cloud NAT and allow these nodes to use it to communicate with the internet.
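If you want to script that piece as well, the sketch below creates a Cloud Router with a Cloud NAT configuration using the google-cloud-compute client. The field names mirror the Compute REST API, but double-check them against your client library version; the project, region, and network names are placeholders.

```python
from google.cloud import compute_v1

# NAT all subnet ranges so private GKE nodes can reach the internet.
nat = compute_v1.RouterNat(
    name="gke-nat",
    nat_ip_allocate_option="AUTO_ONLY",  # let GCP allocate the NAT IPs
    source_subnetwork_ip_ranges_to_nat="ALL_SUBNETWORKS_ALL_IP_RANGES",
)

router = compute_v1.Router(
    name="gke-router",
    network="projects/my-project/global/networks/my-vpc",
    nats=[nat],
)

compute_v1.RoutersClient().insert(
    project="my-project",
    region="us-central1",
    router_resource=router,
)
```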

Master Authorized Networks

I always use this when creating a GKE cluster with public masters. With it, you can specify which CIDR ranges are allowed to access your GKE master API. You should not allow anyone else to reach your GKE master nodes, so use this feature to secure your clusters.
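A minimal sketch of that setting with the Python client; the "office" label and CIDR are made up (203.0.113.0/24 is a documentation range), so substitute the ranges you actually need to allow.

```python
from google.cloud import container_v1

authorized = container_v1.MasterAuthorizedNetworksConfig(
    enabled=True,
    cidr_blocks=[
        container_v1.MasterAuthorizedNetworksConfig.CidrBlock(
            display_name="office",         # hypothetical label
            cidr_block="203.0.113.0/24",   # example range; use your own CIDRs
        ),
    ],
)

cluster = container_v1.Cluster(
    name="prod-cluster",
    master_authorized_networks_config=authorized,
)
```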

VPC Native

When creating a GKE cluster, make sure you enable VPC-native networking; this routes traffic between pods using alias IP ranges. There are several benefits to using a VPC-native cluster. Some of them are:

  • Pod IP addresses are natively routable within the cluster’s VPC network and other VPC networks connected to it by VPC Network Peering.
  • Pod IP ranges do not depend on custom static routes. Instead, automatically-generated subnet routes handle routing for VPC-native clusters.
  • You can create firewall rules that apply to just Pod IP ranges instead of any IP address on the cluster’s nodes.
  • Nodes in a VPC-native cluster have anti-spoofing enabled. VPC networking performs anti-spoofing checks which prevent nodes from being able to send packets with arbitrary source IP addresses. (Anti-spoofing is disabled for routes-based clusters.)
  • Pod IP ranges, and subnet secondary IP ranges in general, can be shared with on-premises networks connected with Cloud VPN or Cloud Interconnect using Cloud Routers.
  • Alias IP ranges allow Pods to directly access hosted services without using a NAT gateway.

So make sure you select the VPC-native option when creating your production GKE cluster.
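In the Python client, VPC-native networking corresponds to the IP allocation policy. A minimal sketch, assuming you have already created secondary ranges named "pods" and "services" on your node subnet (all names here are placeholders):

```python
from google.cloud import container_v1

ip_policy = container_v1.IPAllocationPolicy(
    use_ip_aliases=True,                        # this is what makes the cluster VPC-native
    cluster_secondary_range_name="pods",        # secondary range for Pod IPs
    services_secondary_range_name="services",   # secondary range for Service IPs
)

cluster = container_v1.Cluster(
    name="prod-cluster",
    network="my-vpc",
    subnetwork="gke-nodes",
    ip_allocation_policy=ip_policy,
)
```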

What about subnets?

When creating GCP resources, the best practice is to maintain your own VPC network and create subnets for your resources. For GKE, you need a few CIDR ranges: one each for master nodes, worker nodes, services, and pods.

Master Nodes Subnet

You must assign a /28 CIDR range for your cluster masters. Usually, I would assign something like 172.16.0.32/28. (Please note that you must plan your CIDR ranges before you create any resources in your VPC network. In a later post, I will show you how I planned CIDR ranges for some of my clients.)

Worker Nodes Subnet

When allocating a CIDR range for your worker nodes, you must consider several things. You must decide how many nodes and how many pods you expect to run in your GKE cluster. Design based on your future requirements, because you cannot change these settings after you create your GKE cluster.

There are a few main things to consider when selecting a CIDR range for the worker nodes: how many nodes you plan to have in your cluster, how many pods per node, and how many total pods and services you plan to have in your cluster.

For the worker node CIDR range, I would suggest a /22, which gives you 1,024 IPs, enough for 1,000+ worker nodes. Maintaining a 1,000-node cluster is a massive-scale job, so that number makes sense to me. But if you think you will provision more than 1,000 nodes, design your GKE worker node subnet based on your requirements.

Number of Pods and Services

Usually, the number of services maps to how many microservices you will have in your cluster. You need to assign a CIDR range for your services. By default, GKE creates a secondary range for services with a /20 CIDR, which means you can have 4,096 services in your cluster. You can adjust this value based on your requirements to save IP space.

For pods, the default value is /14, which means 262,144 addresses in your cluster. But in the real world, you are not going to run that many pods, so I suggest you reduce that range according to your future requirements.

The other value you need to consider is the maximum number of pods per worker node. The default is 110 pods. The number of worker nodes your pod range can support depends on this value. There is a formula for deriving the per-node pod range from the max pods setting; we’ll discuss it later. What I want to highlight is that you are probably not going to run 110 pods on a single worker node, so as a best practice adjust it to a reasonable value. I would go with 50 pods per node.

So my final calculation will be as follows:

  • 50 pods per node
  • /20 for services = 4,096 IPs
  • /15 for pods = 131,072 IPs
  • /22 for worker nodes = 1,024 IPs
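To sanity-check these numbers, here is a short snippet using Python's ipaddress module. The concrete networks are placeholders from my own plan, and the per-node calculation relies on GKE's documented behaviour of giving each node a pod range with room for at least twice its max pods, rounded up to a power of two (110 pods → /24, 50 pods → /25).

```python
import ipaddress
import math

# Planned ranges (the concrete networks are placeholders; use your own plan).
nodes_subnet = ipaddress.ip_network("10.0.0.0/22")    # worker node subnet
pods_range = ipaddress.ip_network("10.4.0.0/15")      # pod secondary range
services_range = ipaddress.ip_network("10.8.0.0/20")  # services secondary range
max_pods_per_node = 50

print("node IPs:   ", nodes_subnet.num_addresses)     # 1024
print("pod IPs:    ", pods_range.num_addresses)       # 131072
print("service IPs:", services_range.num_addresses)   # 4096

# Each node gets a pod range holding at least 2x its max pods,
# rounded up to a power of two.
bits = math.ceil(math.log2(2 * max_pods_per_node))
per_node_prefix = 32 - bits
nodes_supported = pods_range.num_addresses // (2 ** bits)

print(f"per-node pod range: /{per_node_prefix}")            # /25
print(f"nodes the pod range supports: {nodes_supported}")   # 1024, matches the /22
```

With 50 pods per node, the /15 pod range supports 1,024 nodes, which lines up nicely with the 1,024 IPs in the /22 worker node subnet.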

Node Pool Settings

When creating node pools, you need to select several settings. One is the node image, which is the underlying OS that runs on your worker nodes. I advise you to use the default (Container-Optimized OS) unless you have a particular reason to use something else. COS is the one recommended by Google.

For the boot disk type, use SSD for your production environment; it is faster and gives better IOPS. Make sure you select a disk size that matches the IOPS your applications running on GKE require. Usually, I would start with 200GB for a production node pool.
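Pulling those choices together, here is a hedged sketch of a node pool definition with the Python client. The pool name and machine type are placeholders; size the machine type for your own workloads.

```python
from google.cloud import container_v1

node_pool = container_v1.NodePool(
    name="prod-pool",
    initial_node_count=1,                     # per zone
    config=container_v1.NodeConfig(
        machine_type="n2-standard-4",         # placeholder; pick for your workloads
        image_type="COS_CONTAINERD",          # Container-Optimized OS node image
        disk_type="pd-ssd",                   # SSD boot disks for better IOPS
        disk_size_gb=200,                     # starting point for production
    ),
    max_pods_constraint=container_v1.MaxPodsConstraint(max_pods_per_node=50),
)
```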

There are a few more things we fine-tuned when running Kubernetes in our production environments. I am planning to discuss them in a later post. Thanks for reading. See you again. Cheers..!

Banner photo by Allen Ng on Unsplash