Kubernetes: A Poetic Quest Through Container Realms

7h3-3mp7y-m4n · Aug 2 · Dev Community

Containers are getting more and more popular these days, and everyone is talking about them. The most popular topic in the container era is none other than Kubernetes. Kubernetes is an open-source container orchestrator that automates container deployment, scaling, and administration tasks.

Kubernetes is a distributed system. It horizontally scales containers across multiple physical hosts termed Nodes. This produces fault-tolerant deployments that adapt to conditions such as Node resource pressure, instability, and elevated external traffic levels. If one Node suffers an outage, Kubernetes can reschedule your containers onto neighboring healthy Nodes.

It's a wonderful tool written in Go that handles most of the heavy lifting for us: scaling Pods, managing security, enforcing network policies, keeping Pods alive, and much more.

A Houdini of a tool that can spin up so much magic in the world of containers makes us wonder how it works internally. Does the architecture resemble a rocket engine? Do I have to be a genius to understand it? Well, the Kubernetes architecture doesn’t look like a rocket engine, and you don’t have to be a genius to understand it. I’ve got you covered; besides, we need to understand the Kubernetes architecture for our certification exams.

A Heartwarming Architecture Diagram

A lovely architecture diagram of Kubernetes

Main Components of Kubernetes Architecture

One of the best things about Kubernetes is the way it lowers the management overhead of running tons of containers. Kubernetes achieves this by pooling multiple compute Nodes into one giant entity called a Cluster. When we deploy a workload to our Kubernetes cluster, it automatically starts our containers on one or more Nodes based on our requirements. Here are the key elements of a Kubernetes cluster:

Workloads

K8s has multiple layers of abstraction that define our application. These workload objects give us full control over how our application is managed. Some of them are:

  • Pod: A Pod is the fundamental compute unit of Kubernetes. It consists of one or more containers that share storage, a network identity, and a specification for how to run.
  • Deployment: A Deployment is a resource object that defines the desired state for your application. It encapsulates the instructions for creating and managing a group of identical Pods, which together make up one component of your application (see the manifest sketch below). These instructions include details such as container images, resource requirements, environment variables, and more.
  • Service: A Service is a portal through which we expose Pods to the network. We use Services to permit access to Pods, either within your cluster via automatic service discovery, or externally through an Ingress.
  • Job: A Job in Kubernetes is a way to execute short-lived, non-replicated tasks or batch jobs reliably within our cluster.

Kubernetes also offers other workload types, like DaemonSets, StatefulSets, and more.
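
To make these objects a little less abstract, here is a minimal sketch of a Deployment paired with a Service, assuming a hypothetical app called hello-web that listens on port 8080; the names and image are placeholders, not anything from a real cluster:

```yaml
# Minimal sketch: a Deployment plus a Service for a hypothetical "hello-web" app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3                       # desired number of identical Pods
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: example.com/hello-web:1.0   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web                  # routes traffic to Pods with this label
  ports:
    - port: 80                      # Service port inside the cluster
      targetPort: 8080              # containerPort on the Pods
```

Applying this with kubectl apply -f asks the cluster to keep three identical Pods running and gives them a stable in-cluster address; a Job manifest follows the same general shape.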

Control Plane

In our Kubernetes (K8s) cluster, the control plane functions as the mastermind. It serves as the central management interface, overseeing various aspects of the cluster's operations. The control plane stores the cluster's state, continuously monitors the health of nodes, and takes the actions needed to keep the cluster in its desired state.

What’s fascinating is that actions within the control plane can be initiated either manually or automatically. This duality provides administrators with flexibility in managing the cluster, allowing for both hands-on intervention and automated responses to changes in the cluster environment.

The control plane is a foundational component that ensures the smooth functioning of our Kubernetes ecosystem, embodying the essence of control and coordination in the realm of distributed systems.

To explain further, the control plane is made up of different parts, each providing the tools needed to control the cluster, though they don’t directly start or run the containers where your applications live.

  • API Server: The API Server is the control plane component that exposes the Kubernetes API. We use this API whenever we run commands with kubectl. If the API Server becomes unavailable, we lose the ability to manage our cluster, even though existing Pods keep running.

  • Controller Manager: As the name suggests, it’s responsible for monitoring and controlling our K8s cluster. It runs control loops that watch the cluster and take action whenever the actual state drifts from the desired state. For example, when we create a Deployment, we set replicas, port access, and other details. The Controller Manager keeps an eye on the Deployment and manages the cluster to ensure that our Pods keep matching that specification.

  • Scheduler: The Scheduler is like a project manager whose task is to place newly created Pods on suitable Nodes in our cluster. We can influence its decisions, for example by specifying which kind of Node a certain Pod should run on (see the sketch after this list).

  • Etcd: Etcd is like a data center for K8s. It’s a distributed key-value store that holds every API object, including configuration in our ConfigMaps and sensitive data in our Secrets.

  • Cloud Controller Manager: The Cloud Controller Manager integrates Kubernetes with your cloud provider’s platform. It facilitates interactions between your cluster and its outside environment. This component is involved whenever Kubernetes objects change your cloud account, such as provisioning a load balancer, adding a block storage volume, or creating a virtual machine to act as a Node.
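
As a small, hedged example of nudging the Scheduler, here is a Pod sketch that uses a nodeSelector; the disktype=ssd label is an assumption and only works if your Nodes actually carry that label (for instance via kubectl label nodes <node-name> disktype=ssd):

```yaml
# Minimal sketch: steering the Scheduler with a nodeSelector.
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-pod
spec:
  nodeSelector:
    disktype: ssd                   # only Nodes carrying this label are considered
  containers:
    - name: app
      image: example.com/app:1.0    # placeholder image
```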

Nodes

Nodes are the physical or virtual machines that host the Pods in your Kubernetes cluster. While you can technically run a cluster with just one Node, production environments typically use multiple Nodes to allow for horizontal scaling and high availability.

Nodes join the cluster using a token issued by the control plane. After a Node is admitted, the control plane begins scheduling new Pods to it. Each Node runs various software components necessary to start containers and maintain communication with the control plane.
Why didn’t the Node join the cluster party? Because it couldn’t find its token and was left out in the cold!

Kubelet: Kubelet is the software running on each Node that acts as the control plane’s helper. It regularly checks in with the control plane to report the status of the Node’s workloads. When the control plane wants to schedule a new Pod on the Node, it contacts Kubelet. Kubelet is also in charge of running the Pod containers. It pulls the necessary images for new Pods and starts the containers. Once they’re running, Kubelet keeps an eye on them to make sure they stay healthy.
Why did the Kubelet get a promotion? Because it was great at container management and never let the Pods crash the party!🎉
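
To see the Kubelet’s health-keeping in action, here is a minimal liveness probe sketch; the /healthz path and port 8080 are assumptions about the app, not something Kubernetes requires. The Kubelet on the Node runs this check and restarts the container if it keeps failing:

```yaml
# Minimal sketch: a liveness probe that the Kubelet executes periodically.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: example.com/app:1.0    # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz            # assumed health endpoint of the app
          port: 8080
        initialDelaySeconds: 5      # wait before the first check
        periodSeconds: 10           # check every 10 seconds
```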

Kube Proxy: The Kube Proxy component helps Pods and Nodes in your cluster communicate with each other. It sets up and maintains the networking rules that let traffic sent to a Service reach the Pods behind it. If Kube Proxy fails, Service traffic to the Pods on that Node can no longer be routed reliably.

Why did the Kube Proxy get grounded? Because it kept breaking up the connections!😔
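
As a rough illustration, here is a NodePort Service sketch; Kube Proxy is what maintains the forwarding rules that make the node port below actually reach the matching Pods. The names and ports are assumptions that reuse the hello-web labels from the earlier sketch:

```yaml
# Minimal sketch: a NodePort Service whose traffic is forwarded by Kube Proxy.
apiVersion: v1
kind: Service
metadata:
  name: hello-web-nodeport
spec:
  type: NodePort
  selector:
    app: hello-web                  # assumed Pod label from the earlier sketch
  ports:
    - port: 80                      # cluster-internal Service port
      targetPort: 8080              # containerPort on the Pods
      nodePort: 30080               # reachable on every Node's IP at this port
```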

Container Runtime: To run a container, each Node needs a container runtime that actually starts our beloved containers. containerd is the most popular option, but alternatives such as CRI-O and Docker Engine can be used instead.
Why did the container feel alone? Because there was no container runtime to lift its mood!🥰
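
If your Nodes have more than one runtime handler configured, a RuntimeClass lets a Pod pick one. This is only a sketch; the myruntime handler name is made up and must match whatever is actually configured in the runtime on your Nodes:

```yaml
# Minimal sketch: selecting a specific runtime handler via RuntimeClass.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myruntime
handler: myruntime                  # assumed CRI handler name configured on the Nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: myruntime       # run this Pod with the handler above
  containers:
    - name: app
      image: example.com/app:1.0    # placeholder image
```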

Customizing Kubernetes

You heard it right; the architecture doesn’t stop here. There are many extension points we can add to our beloved cluster, like CRDs, admission webhooks, Helm charts, plugins, and more (a small CRD sketch follows below).
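
As a tiny taste of that extensibility, here is a minimal CustomResourceDefinition (CRD) sketch; the group, kind, and schema are invented purely for illustration (and to keep the poetry theme alive):

```yaml
# Minimal sketch: a made-up "Poem" custom resource type.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: poems.verse.example.com     # must be <plural>.<group>
spec:
  group: verse.example.com
  scope: Namespaced
  names:
    plural: poems
    singular: poem
    kind: Poem
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                stanzas:
                  type: integer     # how many stanzas our poem should have
```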

I hope we’ve learned a lot about our lovely K8s. Some terms might sound new, but stay tuned on this journey to learn more about them!
So the main thing that strikes my mind ...
Why did the Kubernetes cluster start writing poetry?🤔
Because it wanted to orchestrate its own verse of nodes and pods!😂
