Kubernetes 101: Part 1

Nodes

Nodes are physical or virtual machines on which our applications run. Here we used two Redis-based applications.

What if the node is down?

Then our application goes down with it. To solve this issue, we can have a cluster of nodes: if one node fails, the application can run on another.

Now, to manage which node work goes to, check node health, and handle other tasks, we have a master node.

Once you install Kubernetes, the following components are installed:

API Server: Acts as the front end for Kubernetes; all communication with the Kubernetes cluster goes through it.

etcd: A key-value store used by Kubernetes to manage the cluster data.

Scheduler: Responsible for distributing work (containers) across multiple nodes. It looks for newly created containers and assigns them to nodes.

Controller: Responds when containers or nodes go down and brings up new ones as needed.

Container Runtime: The underlying software that runs the containers, for example Docker.

Kubelet: Agent that runs on each node in the cluster.

Download kubectl

Install curl

sudo snap install curl

Download the latest kubectl release

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

Download the checksum file to validate the binary

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"

Install kubectl

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

Validate the kubectl binary against the checksum file:

echo "$(cat kubectl.sha256) kubectl" | sha256sum --check

Test to ensure the version you installed is up-to-date:
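
kubectl version --client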

Or install using a package manager

snap install kubectl --classic

Verify kubectl configuration
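
For example, you can check whether kubectl can reach a cluster with:

kubectl cluster-info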

If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster.

The connection to the server localhost:8080 was refused - did you specify the right host or port?

To solve this, install minikube first, start a cluster, and then repeat the check.

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64

Start your cluster

minikube start

But we got an error.

😄 minikube v1.34.0 on Ubuntu 24.04
👎 Unable to pick a default driver. Here is what was considered, in preference order:
▪ docker: Not healthy: "docker version --format {{.Server.Os}}-{{.Server.Version}}:{{.Server.Platform.Name}}" exit status 1: permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.47/version": dial unix /var/run/docker.sock: connect: permission denied
▪ docker: Suggestion: Add your user to the 'docker' group: 'sudo usermod -aG docker $USER && newgrp docker' https://docs.docker.com/engine/install/linux-postinstall/
💡 Alternatively you could install one of these drivers:
▪ kvm2: Not installed: exec: "virsh": executable file not found in $PATH
▪ podman: Not installed: exec: "podman": executable file not found in $PATH
▪ qemu2: Not installed: exec: "qemu-system-x86_64": executable file not found in $PATH
▪ virtualbox: Not installed: unable to find VBoxManage in $PATH

❌ Exiting due to DRV_NOT_HEALTHY: Found driver(s) but none were healthy. See above for suggestions how to fix installed drivers.

So, we went to the Docker installation page and installed the Docker driver.

Now, we can start minikube again

We faced a permission issue; to fix it, use this:

sudo usermod -aG docker $USER && newgrp docker

Now, run minikube start again.

Done!

Master vs Worker Node

The master node manages the worker nodes, and each worker node must have a container runtime (Docker, rkt, CRI-O) installed on it.

The kube-apiserver collects health and other information from the worker nodes through each node's kubelet.

etcd stores all of this information, and the master also runs other components like the controller manager and scheduler.

kubectl

It is the command-line tool used to deploy and manage applications on Kubernetes.

We also use it to get cluster information, list all the nodes, and so on, for example:
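
kubectl cluster-info

kubectl get nodes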

Docker vs ContainerD

Docker made container management easier, and Kubernetes originally used Docker as its runtime.

But Kubernetes users wanted to run other runtimes, so Kubernetes introduced the Container Runtime Interface (CRI).

It allows any vendor's runtime to be used as long as it follows the OCI standards. OCI includes the imagespec (how an image should be built) and the runtimespec (how a runtime should behave).

Outside of CRI, Kubernetes maintained dockershim to keep supporting the Docker runtime.

But we now have containerd, which is CRI-compatible and works directly with Kubernetes like all the other runtimes.

From Kubernetes v1.24, dockershim support was removed, so Docker is no longer supported as a runtime.

But all the images created by Docker still work, because Docker follows the OCI imagespec and runtimespec.

So, what’s containerD?

Earlier we used docker run to run containers, but once that support was removed, we now use containerd directly through the ctr command. However, ctr is meant for debugging containerd and is not user friendly.

We can still pull and run images using ctr, for example:
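
A rough sketch (ctr needs the full image reference and typically root privileges):

sudo ctr images pull docker.io/library/nginx:latest
sudo ctr run -d docker.io/library/nginx:latest nginx-test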

But in production we don't use it, as it's not user friendly at all.

So, to solve this issue, we can use nerdctl, which supports almost everything Docker used to support.

Here is an example of using docker and nerdctl
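
For instance, a small sketch where the flags are identical in both tools:

docker run -d --name web -p 8080:80 nginx
nerdctl run -d --name web -p 8080:80 nginx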

You can see the commands are pretty much the same.

We have another one called crictl, which interacts with the CRI. So, it works across all CRI-compatible runtimes.

It's also developed by the Kubernetes community, but it is used for debugging purposes. It works alongside the kubelet.

If you create containers using crictl, the kubelet will delete them because it does not know about them. That's why crictl is mainly used for debugging.

You may see that a lot of its commands are almost the same as Docker's, for example:
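
A few common ones (run as root on a node):

crictl ps        # list running containers
crictl images    # list images
crictl pods      # list pods (a concept Docker does not have)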

So, prior to Kubernetes 1.24, crictl's default endpoints included dockershim to support Docker.

But after that, the default endpoints for crictl were changed.

Users are encouraged to set the endpoints manually.
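
For example, assuming containerd is the runtime (adjust the socket path for CRI-O or others), you could point crictl at it like this:

cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF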

So, as a whole: ctr is for debugging containerd, nerdctl is a general-purpose Docker-like CLI for containerd, and crictl is for debugging any CRI-compatible runtime.

Commands

To get information about nodes, use

kubectl get nodes

To get more detailed information about the nodes:

kubectl get nodes -o wide

Pods

We assume the Docker image is ready and can be pulled, and that the cluster is set up and working.

We have previously learned that we put our containers on the nodes. But actually, containers are kept within a pod.

Each node can have multiple pods, and the pods carry the containers.

We don't add more containers to a pod for scaling purposes; instead, we create multiple pods, each with its own container.

Although a pod can have more than one container, that is not for scaling.

Here we have a Python application container alongside a helper container. They are created together and end together, and they share the same storage, network, etc.
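
As a rough sketch (the image names and commands below are just placeholders, not from the original example), such a multi-container pod could be defined and applied like this:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app-with-helper
spec:
  containers:
  - name: python-app
    image: python:3.12-slim              # placeholder application image
    command: ["python", "-m", "http.server", "8000"]
  - name: helper
    image: busybox                        # placeholder helper container
    command: ["sh", "-c", "sleep 3600"]
EOF

Since both containers share the pod's network namespace, the helper could reach the app on localhost:8000.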

How to create a pod or container?

When we want to create an nginx container, we specify the image name from Docker Hub and use the command below.

A pod is automatically created on a node, the nginx image gets pulled from Docker Hub, and it runs in the pod.

Then we check the list of pods

Let’s create a pod where nginx container will be available using nginx image

kubectl run nginx --image=nginx

Check the pods

kubectl get pods

Then check the details of this pod:
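
kubectl describe pod nginx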

The IP address of the node is minikube/192.168.49.2.

The IP address of the pod is 10.244.0.3.

Also, we saw

The default-scheduler assigned default/nginx to minikube; then the kubelet pulled the nginx image, created the container nginx (in about 39 seconds), and started it.

We can also check some more information here:
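
kubectl get pods -o wide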

This lists all the pods available and their IP addresses

We can modify existing pods using

kubectl edit pod <podname>