Kubernetes 101: Part 13

Designing a Kubernetes cluster

If your goal is education, a solution based on Minikube or a single-node cluster deployed using kubeadm on local VMs, GCP, or AWS should do.

If the goal is development/testing, a multi-node cluster with a single master and multiple workers can be a good solution.

For hosting production workloads, we need a multi-node cluster with multiple master nodes.

Storage

Nodes

Master nodes

Typically, a master node hosts all the control plane components together, including etcd,

but in a large cluster, we can move etcd to a dedicated node

Choosing Kubernetes Infrastructure

The kubeadm tool can be used to create single-node or multi-node clusters

Minikube launches a single-node cluster

Generally these are used for dev/test
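As a sketch, a dev/test cluster can be brought up with either tool (the address, token, and hash below are placeholders you get from your own environment):

```
# Single-node cluster for local experiments
minikube start

# Multi-node cluster with kubeadm: initialize the master first
kubeadm init

# ...then, on each worker, join using the command printed by `kubeadm init`
kubeadm join 192.168.1.10:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```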

But for production, we have Turnkey solutions and Hosted solutions

Some Turnkey solutions are OpenShift, Cloud Foundry Container Runtime, VMware Cloud PKS, and Vagrant

Hosted Solutions

Hosted solutions are clusters managed end-to-end by a provider, such as Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Amazon EKS, and OpenShift Online.

High availability

For high availability, we should have multiple master nodes

Suppose 2 master nodes are there. If one becomes unavailable, the other takes over
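Under the hood, components like the controller-manager and scheduler run in an active-standby mode: only one instance works at a time, chosen by leader election. For example, kube-controller-manager exposes real flags for this (values shown are the defaults):

```
kube-controller-manager --leader-elect=true \
  --leader-elect-lease-duration=15s \
  --leader-elect-renew-deadline=10s \
  --leader-elect-retry-period=2s
```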

Also, when etcd runs within the master node, a failure of that node puts the cluster state at risk.

To reduce that risk, we can keep etcd on a separate, dedicated node.

But we have to make sure that we point the kube-apiserver at etcd's new location, via the --etcd-servers option in the kube-apiserver.service file
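For example, an excerpt of a kube-apiserver.service unit pointing at two external etcd servers might look like this (paths and IP addresses are placeholders):

```ini
# /etc/systemd/system/kube-apiserver.service (excerpt)
ExecStart=/usr/local/bin/kube-apiserver \
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379 \
  --etcd-cafile=/var/lib/kubernetes/ca.pem
```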

ETCD

To make etcd highly available, we can replicate its data across multiple servers.

Here, only 1 server (the leader) has real write access. If any other server receives a write request, it forwards the request to the leader, which performs the write

How many nodes are best for etcd?

3, 5, or 7 instances are recommended. etcd needs a majority of members (a quorum) to agree before a write succeeds, and an odd number of nodes gives a clear majority while tolerating as many failures as the next even number
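The preference for odd counts comes down to quorum arithmetic, which a small sketch makes concrete (function names are mine):

```python
def quorum(nodes: int) -> int:
    """Minimum number of etcd members that must agree for a write to succeed."""
    return nodes // 2 + 1

def fault_tolerance(nodes: int) -> int:
    """How many members can fail while the cluster still reaches quorum."""
    return nodes - quorum(nodes)

# 3 and 4 nodes both tolerate only 1 failure, 5 and 6 both tolerate 2:
# the extra even node buys no additional fault tolerance.
for n in range(1, 8):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```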

Security in Docker

Here the host is running its own processes. Suppose that, alongside them, we have started a container that runs a sleep process for 1 hour.

Note: Containers are not completely isolated from the host. They share the same kernel.

Containers are isolated using namespaces in Linux. The host has its own namespace and each container has its own namespace. All the processes run by a container actually run on the host, but within the container's namespace.

Now, if we list the running processes from inside the container, we see just 1 process (sleep). Why? Because the container can only see within its own namespace.

But if we list processes from the host, we can see all of them, including the container's. It's like parental access: a parent can see what their child watches.
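An illustrative session (requires Docker; the container ID is a placeholder, and the image must include ps):

```
$ docker run -d ubuntu sleep 3600
$ docker exec <container-id> ps -ef
# inside the container's namespace: only the sleep process is visible, as PID 1

$ ps -ef | grep sleep
# from the host: the same sleep process appears, under a regular host PID
```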

Docker Users

By default, Docker runs commands as the root user, both inside the container and on the host

If we want to run commands as another user, we need to specify it using docker run --user=<user-id> <image> <command>

or, while creating an image with a Dockerfile, we can set the user ID using the USER instruction

then, once we run the container, the commands will run as that user.
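A minimal Dockerfile sketch (user ID 1000 is arbitrary):

```dockerfile
FROM ubuntu
# All subsequent commands, and the container's main process, run as UID 1000
USER 1000
CMD ["sleep", "3600"]
```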

Note: The root user in the container isn't the root user on the host. Docker uses Linux capabilities to achieve this: the container's root user gets only a limited set of capabilities.

You can control and limit the capabilities available to a user.

If you want to grant additional capabilities, use docker run --cap-add <capability> <image>
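For example (MAC_ADMIN and KILL are just two of the Linux capabilities Docker can manage; these commands require Docker):

```
# Grant an extra capability to the container's root user
docker run --cap-add MAC_ADMIN ubuntu

# Or drop a capability it would normally have
docker run --cap-drop KILL ubuntu

# Or grant all capabilities (avoid in production)
docker run --privileged ubuntu
```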

So far, we have learned that we can set users and capabilities for our containers in Docker

Kubernetes follows the same concept.

If rules are set at both the pod level and the container level, the container's settings override the pod's settings

How do we set rules at the pod level?

We can set the user rule at the pod level, but not capabilities
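A pod-level sketch (pod and image names are placeholders): the securityContext sits directly under spec, so runAsUser applies to every container in the pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  securityContext:
    runAsUser: 1000   # applies to all containers in this pod
  containers:
    - name: ubuntu
      image: ubuntu
      command: ["sleep", "3600"]
```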

How do we set rules at the container level?

We can set both user and capability rules at the container level.
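At the container level, the securityContext moves under the container entry, and the capabilities field becomes available (pod name, image, and values are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: ubuntu
      image: ubuntu
      command: ["sleep", "3600"]
      securityContext:
        runAsUser: 1000
        capabilities:        # only valid at the container level
          add: ["MAC_ADMIN"]
```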