With microservices in place, it is hard to imagine packaging an application without containers. Docker has done a decent job of providing a way to containerize applications. But as the number of applications increases, managing them becomes complex. Docker addresses this with Docker Swarm, which manages auxiliary concerns like scaling, load balancing, and service discovery. However, Docker Swarm provides limited functionality, and auto-scaling is still a manual process.
Kubernetes exactly fits the bill here. It helps us design and build scalable, highly maintainable applications that work across platforms. It is the most popular tool/framework for building cloud-agnostic, scalable, and maintainable systems.
How does Kubernetes work?
A Kubernetes cluster consists of one or more nodes: one node works as the master node and the remaining nodes work as worker (slave) nodes.
The Master node contains the following items:
- API Server: It is the central management entity and the only component that talks directly to the distributed storage component, etcd, over an HTTP API using JSON.
- Scheduler: It selects the nodes on which containers will run.
- Controller manager: It is used to run the controllers.
- Etcd: It is used as a global configuration store.
- Dashboard: It is used for managing the cluster via the web UI running on the master node.
- Kubectl: It is the command-line alternative for managing the cluster. It is very powerful and supports almost all the features of the dashboard in command-line form.
A worker (slave) node contains the following items:
- Container runtime: It runs the containers that package the application (for example, Docker Engine).
- Kubelet: It is responsible for starting, stopping, and managing individual containers based on requests from the Kubernetes control plane.
- Kube-proxy: It is responsible for networking and load balancing.
What are the basic building blocks?
A pod is the smallest deployable unit in Kubernetes. The main use of a pod is to group closely related services, such as an application and peripheral services like its cache. A pod contains one or more containers; the containers can reach each other via localhost and can communicate through shared memory. Containers inside a pod share an IP address and port space. Very often, a pod contains just one container.
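As a sketch, a two-container pod can be declared like this (all names and image tags here are illustrative, not prescribed by Kubernetes):

```yaml
# Hypothetical pod pairing an application container with a cache sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache
  labels:
    app: web
spec:
  containers:
    - name: app              # main application container
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: cache            # peripheral cache service
      image: redis:7
      ports:
        - containerPort: 6379
```

Because both containers share the pod's network namespace, the application container can reach the cache at localhost:6379.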
Kubernetes provides a way to name/label resources, for example pods. A label is a key/value pair that can be attached to any resource.
If we want to perform an action on a resource or group of resources, we first need to identify those resources. A selector allows us to find a resource or set of resources by specifying a label or a set of labels.
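For example (the keys and values here are hypothetical), labels live under a resource's metadata, and selectors, as used by controllers and services, match on them:

```yaml
# Labels attached under a resource's metadata:
metadata:
  labels:
    app: web
    tier: frontend
---
# A selector in a controller or service spec matching those labels:
selector:
  matchLabels:
    app: web
    tier: frontend
```

The same mechanism is available on the command line, e.g. `kubectl get pods -l app=web` lists only the pods carrying that label.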
The controller is the main engine of Kubernetes: it manages all resources and owns the responsibility of keeping the cluster in the defined state. The controller manager comprises multiple independent processes, each taking care of a specific concern.
- Replication Controller: It is responsible for running the specified number of pod copies (replicas) across the cluster.
- Deployment Controller: It takes care of rolling new images into the desired environment and rolling back images as needed. A rollback typically reverts the deployment from its current state to the previous state.
- Node Controller: It continuously monitors the nodes; if any node goes down or stops responding, it takes action immediately.
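A minimal Deployment manifest ties these ideas together (the names, image, and replica count are illustrative): the controller keeps three replicas of the labeled pod running and performs a rolling update when the image changes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                 # the controller maintains three pod copies
  selector:
    matchLabels:
      app: web                # manage pods carrying this label
  template:                   # pod template used to create the replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx:1.25
```

Changing the image and re-applying the manifest triggers a rolling update, and `kubectl rollout undo deployment/web-deployment` reverts the deployment to its previous state.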
A service is a logical group of pods that provides a communication abstraction to other groups of pods. Because pods can die at any time and new pods can start at runtime, tight coupling between groups of pods would lead to serious communication failures. A service in Kubernetes facilitates this decoupling by giving a stable endpoint in front of an ever-changing set of pods.
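A sketch of such a service (all names are assumed for illustration) that exposes whatever pods currently carry the `app: web` label, so callers never track individual pod IPs:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web          # routes to any pod carrying this label
  ports:
    - port: 80        # port exposed by the service
      targetPort: 80  # container port on the selected pods
```

Other pods can then reach the group through the stable name `web-service`, while the set of backing pods changes freely underneath.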
Cluster and Nodes:
A cluster is a group of nodes. Each node is a physical or virtual machine that runs a container runtime and the kubelet service, as discussed in the 'How does Kubernetes work?' section.
Kubernetes was built by Google, primarily to handle heavy production workloads for its products. Google open-sourced it in mid-2015, and it is now backed by a strong community along with Google.
When and Where to use?
The table below lists features offered by Kubernetes and the equivalent service on popular public clouds.
| Kubernetes Feature | AWS | Google Cloud | Azure |
| --- | --- | --- | --- |
| Auto scaling | AWS Autoscale | Google Cloud Load Balancing | Azure Application Gateway |
| Networking | Amazon Virtual Private Cloud (VPC) | Google Virtual Private Cloud | Azure Virtual Network (VNET) |
| Health checks & resource usage monitoring | Amazon CloudWatch | Google Cloud Health Check | Azure Service Health |
| Service discovery | Amazon ECS | Google Cloud Metadata Server | Azure Service Fabric |
| Load balancing | Amazon Elastic Load Balancing | Google Cloud Load Balancing | Microsoft Azure Load Balancing |
| Rolling update | AWS Autoscale | Google Cloud Load Balancing | Azure Application Gateway |
| Volume management | Amazon Elastic Block Store | Google Cloud Storage | Microsoft Azure Storage |
Kubernetes is an important tool in your cloud-agnostic microservices kit. It enables you to build a portable application that can span multiple public clouds, which supports one of the core attributes of microservices: choosing the best tool/platform for a given need.
I hope this article was informative and leaves you with a better understanding of Kubernetes.
At Walking Tree, we are excited about the possibilities that Kubernetes brings in. Subscribe today to our blog for more articles on this topic.