Kubernetes 101 Security: Part 15

Securing the Kube-apiserver

The kube-apiserver is the center of everything; it is what we contact whenever we use kubectl. So, we need to ensure the security of the kube-apiserver.

So, we have to answer these two questions:

Who can access the kube-apiserver?

Access can be granted to people with the correct username and password, tokens, or certificates; external authentication providers can be used; and service accounts can be created for machines, etc.

What can they do?

Permissions can be controlled using Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), Node authorization, webhooks, etc.

Also, all communication should be secured with TLS certificates.
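As a quick illustration, these settings usually appear as flags on the kube-apiserver. This is only a sketch; the file paths follow common kubeadm defaults and may differ in your cluster:

```bash
# Illustrative kube-apiserver flags (paths follow common kubeadm defaults)
--client-ca-file=/etc/kubernetes/pki/ca.crt          # who can access: client certs must be signed by this CA
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt    # TLS for all communication
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
--authorization-mode=Node,RBAC                       # what they can do: Node authorizer + RBAC
```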

Securing Controller Manager & Scheduler

The controller manager handles tasks like ensuring nodes are healthy, maintaining the right number of pod replicas, and managing service accounts, using controllers such as the replication controller and others.

The scheduler looks at available resources and decides the best node for each pod.

So, how do we keep them safe?
We can isolate them from the regular workload pods.

In this way, if a hacker compromises one of the worker nodes, the scheduler and controller manager are still safe.

Also, we can use RBAC to limit what the controller manager and scheduler can do. Since the scheduler only needs to assign pods to appropriate nodes, and the controller manager only needs to manage replica sets and similar resources, we can give them access to just those tasks.

In this way, if a hacker gets access to them and wants to do something like read secrets, they can't do it.
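For reference, most clusters ship with tightly scoped, bootstrapped ClusterRoles for these components, which you can inspect to see exactly what they are allowed to do (the role names below are the standard defaults):

```bash
kubectl describe clusterrole system:kube-scheduler
kubectl describe clusterrole system:kube-controller-manager
```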

Also, the communication between the scheduler, the controller manager, and the kube-apiserver can be secured with TLS certificates. In this way, a hacker can't intercept or tamper with the communication.

We can also enable audit logs for the controller manager and scheduler. So, if anything suspicious occurs, you can refer to the audit trail and find the culprit.

Tools like Prometheus and Grafana can alert us if any suspicious activity happens.

Securing Kubelet

The kubelet talks to the kube-apiserver, and if it gets hacked, that's a big issue.

The kubelet registers the node; when a pod needs to be created, it pulls the image and creates the containers; it also monitors the node and its pods.

Since Kubernetes release 1.10, most of the kubelet's configuration options have been moved from the service file into a kubelet config file (kubelet-config.yaml).

So, now in the kubelet service file, we just pass the path to that config file using the --config flag.

Once the kubelet is running, if you want to see the current configuration of the kubelet, you can do it like this:
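A common way to check it, assuming a kubeadm-style setup where the config usually lives at /var/lib/kubelet/config.yaml:

```bash
# Find the kubelet process and the --config path it was started with
ps -ef | grep kubelet

# View the current kubelet configuration file (path assumed from a kubeadm setup)
cat /var/lib/kubelet/config.yaml
```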

The kubelet listens on two ports: 10250 serves the full kubelet API, while 10255 serves a read-only API. By default, it allows anonymous access to the API, so anyone can check the running pods on the node.

You can also see the system logs

So, that's a big issue. Anyone who has the node's IP address can reach these ports and read this information.
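For example, with anonymous access enabled, something like this is possible from any machine that can reach the node (the node IP and endpoints below are illustrative):

```bash
# Port 10250: full kubelet API, here reachable anonymously
curl -sk https://<node-ip>:10250/pods

# Port 10255: read-only API, no authentication at all
curl -s http://<node-ip>:10255/metrics
```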

So, how to solve it?

We have to use authentication and authorization as mentioned earlier.

While authenticating,

we first have to ensure that anonymous access is disabled (anonymous-auth set to false). We can do it either through a flag in the service file or through the kubelet config yaml file.
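A minimal sketch of both options:

```bash
# Option 1: flag in the kubelet service file
--anonymous-auth=false
```

```yaml
# Option 2: kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
```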

Now anonymous users can't get any information. But what about people who are admins?

For that, we can use certificates and API bearer tokens.

We can set the client CA certificate either in the service file or in the kubelet config yaml file.
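For example, in the kubelet config file (the CA path below is just the usual kubeadm default):

```yaml
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```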

So, if you are the admin, you have the matching client certificate and key. You pass those with your requests, and the kubelet accepts them.

Also, remember that the kube-apiserver contacts the kubelet for most tasks. So the kube-apiserver also stores its own client certificate and key, configured via the --kubelet-client-certificate and --kubelet-client-key flags.

Thus the connection between the kube-apiserver and the kubelet is made safer.
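On the kube-apiserver side this typically looks like the following (the paths are the usual kubeadm defaults and may differ in your cluster):

```bash
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```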

Now talking about authorization,

By default, the kubelet's authorization mode is AlwaysAllow, meaning every request is permitted. Surely, this is not what we want.

We have to set it to Webhook. Once done, the kubelet will forward every request to the kube-apiserver to check whether it is authorized or not.

Then the request will be approved or rejected.
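In the kubelet config file this is a short change:

```yaml
authorization:
  mode: Webhook
```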

Also, keep in mind that metrics and pod information can still be read through the kubelet's read-only port, which requires no authentication.

This can only be closed off by changing the read-only-port setting from its default of 10255.

We can set it to 0 to disable the read-only port entirely.
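Either as the --read-only-port=0 flag in the service file, or in the kubelet config file:

```yaml
readOnlyPort: 0
```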

Securing Container runtime

Kubernetes supports various container runtimes (Docker, containerd, CRI-O).

The runtime is one of the top attack surfaces for containers.

How do we address this?
Update the runtime regularly, especially with security patches.

Also, we should not allow containers to run with root user privileges. So, we can set a non-root user ID and group ID for the container to use.

Another thing we can do is make the container filesystem read-only. In this way, a hacker can't make changes to the container even if they get access to it.

Also, we should limit the container's resources so that a hacker can't misuse them if they get access.
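A sketch of a pod spec combining the last three points (non-root user, read-only root filesystem, and resource limits); the names and values are just examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app           # example name
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      runAsUser: 1000          # run as a non-root user ID
      runAsGroup: 3000
      readOnlyRootFilesystem: true
    resources:
      limits:
        cpu: "500m"
        memory: "256Mi"
```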

We can also use security profiles to limit the actions a container can perform. SELinux and AppArmor can help us with that.
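For example, a container can be confined with the runtime's default AppArmor profile. On older clusters this is done with an annotation, as sketched below; newer releases also support setting an AppArmor profile directly in the securityContext:

```yaml
metadata:
  annotations:
    # apply the runtime's default AppArmor profile to the container named "app"
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
```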

Finally, we can use tools like Fluentd, Logstash, and Elasticsearch for centralized logging, and Prometheus and Grafana for runtime behaviour monitoring. Together they help us detect and respond to security incidents promptly.

Securing Kube-proxy

Kube-proxy runs on every node in the Kubernetes cluster, maintaining network rules. It ensures that workloads can reach internal services and external resources when required.

So, how to safeguard it?

We first need to locate the kubeconfig file that kube-proxy uses.

Now, we have found the kube-proxy config.conf file. Once we look into it, we see this:

Now we can see the kubeconfig file that kube-proxy uses to communicate with the kube-apiserver.
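On a kubeadm-style cluster, locating it usually looks like this (the paths are assumptions and may differ in your setup):

```bash
# Find the kube-proxy process and the --config path it was started with
ps -ef | grep kube-proxy

# Inside that config, clientConnection.kubeconfig points to the kubeconfig file
grep kubeconfig /var/lib/kube-proxy/config.conf
```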

Here we can check the file permissions using the stat -c command. We have to ensure the value is 644 or stricter, meaning only the owner can write to the file while others can only read it.

We also need to check the ownership of the file.

Here you can see the output root:root, which means only the root user owns the file.
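The checks and fixes look roughly like this (the file path assumes a kubeadm-style layout):

```bash
# Permissions: expect 644 or stricter
stat -c %a /var/lib/kube-proxy/kubeconfig.conf

# Ownership: expect root:root
stat -c %U:%G /var/lib/kube-proxy/kubeconfig.conf

# Fix them if needed
chmod 644 /var/lib/kube-proxy/kubeconfig.conf
chown root:root /var/lib/kube-proxy/kubeconfig.conf
```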

Finally, we also need to make sure the communication between kube-proxy and the kube-apiserver is secured. In the kubeconfig file we can see that ca.crt is there to validate the kube-apiserver's TLS certificate.

Kube-proxy uses a service account token to authenticate to the kube-apiserver.

We should also enable audit logs so that we can monitor suspicious activity.

Pod security

We also need to ensure the security of our pods. Let's break down this yaml file for a pod:

Here we can see that a volume is mounted from the host, the container is configured to run as the root user (0 is the root user ID), the container runs in privileged mode (privileged: true), and extra capabilities are added.
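A sketch of what such a risky pod spec could look like (reconstructed for illustration; the names, image, and values are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: risky-pod
spec:
  volumes:
  - name: host-data
    hostPath:                 # volume mounted from the host
      path: /var/data
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: host-data
      mountPath: /data
    securityContext:
      runAsUser: 0            # runs as the root user
      privileged: true        # full privileges
      capabilities:
        add: ["SYS_ADMIN"]    # extra capabilities added
```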

These can make the pod vulnerable.

So, we need to enforce some restrictions. These were earlier implemented as Pod Security Policies (PSP), which have now been replaced by Pod Security Admission and Pod Security Standards.

How does this work?

So, when the policies are enabled, the pod security admission controller observes all pod creation requests and validates the configuration against the set of pre-configured rules.

If it detects any violation, it rejects the request and returns an error message.

But how do we enable the admission controller? We just need to add it (--enable-admission-plugins=PodSecurityPolicy) to the kube-apiserver service.

Once enabled, we create a PodSecurityPolicy object.
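A minimal sketch of such a PodSecurityPolicy object (the name is illustrative; the extra rule fields are required by the PSP API):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-psp
spec:
  privileged: false           # reject pods that ask for privileged mode
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - "*"
```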

This object will reject all pod creation requests that have privileged set to true.

Now assume that we want more customization: we don't want any pod to run as the root user, no pod with privileged power, and so on.

Then we customize the object's yaml file, for example as sketched below.

We can also specify which capabilities to drop and which to add, etc.
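A sketch of a customized policy along those lines (names and choices are illustrative):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: custom-psp
spec:
  privileged: false                 # no privileged pods
  runAsUser:
    rule: MustRunAsNonRoot          # no pod may run as root
  requiredDropCapabilities:
  - SYS_BOOT                        # always drop this capability
  defaultAddCapabilities:
  - SYS_TIME                        # always add this capability
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - "*"
```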

For example, suppose we have this as our pod yaml file:
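A reconstructed sketch of such a pod spec, matching the violations discussed below (name and image are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      privileged: true
      runAsUser: 0
      capabilities:
        add: ["SYS_BOOT"]
```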

The policy object will surely reject it: privileged is set to true while the policy requires it to be false, and the pod wants to run as root (ID 0), which we don't allow. Per the policy, the SYS_BOOT capability would be dropped and SYS_TIME would be added.

So, once the policy is enabled and the object is created, we need to give the user or the pod's service account access to the Pod Security Policy API. Otherwise, the admission controller will reject all requests.

So, how to solve it?

When a pod is created, the service account named default in that namespace is assigned to it (unless another one is specified).

Then we create a role and bind it to the default service account in that namespace.

Here you can see the role binding done in the default namespace for the default service account.
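A sketch of that Role and RoleBinding; the policy name example-psp matches the earlier sketch, and everything else uses the default namespace and service account:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: psp-access
  namespace: default
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["example-psp"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-access-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: psp-access
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
```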

Once the binding is done, the admission controller can properly evaluate pod requests.

Here the request was denied because privileged was set to true, whereas the policy object requires privileged to be false.

If privileged were set to false, the object would accept the request.

Securing ETCD

etcd keeps our secrets and state information, cluster configuration data, certificates and keys, etc.

So, we must secure it.

Firstly, we have to ensure that the key-value data it stores is encrypted. To do that, we need encryption at rest.

To do that, we create an EncryptionConfiguration yaml file that lists the resources to encrypt (such as secrets), the encryption providers/algorithms to use, and the encryption keys.
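A minimal sketch of such an EncryptionConfiguration (aescbc is just one of the available providers; the key itself must be a base64-encoded 32-byte secret):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                      # encrypt Secret objects at rest
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}                 # fallback for reading old, unencrypted data
```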

Once the encryption configuration is created, we need to update the kube-apiserver pod specification file to use it, since the kube-apiserver is the component that writes data into etcd.
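That update is typically a single flag in the kube-apiserver static pod manifest (the config path below is illustrative):

```bash
--encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
```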

Next, we need to secure etcd's communication with other components. To do that, the etcd specification file contains --cert-file (the server certificate etcd presents for secure identity), --key-file, --client-cert-auth, --trusted-ca-file, --peer-cert-file, --peer-key-file, --peer-client-cert-auth, --peer-trusted-ca-file, etc.

Again, we can keep the etcd data backed up using snapshots.
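For example, a snapshot can be taken with etcdctl (the certificate paths are the usual kubeadm defaults and may differ in your cluster):

```bash
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```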

Securing container networking

Pods within the cluster can contact each other without NAT, as each pod is assigned its own private IP address.

Also, external users can access services through an ingress controller.

But while doing so, we need to handle three network-level issues.

Let's solve the first one at the pod level.

By default, pods allow all traffic, but we can stop that using NetworkPolicy rules.
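A sketch of a default deny-all NetworkPolicy for a namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Ingress
  - Egress               # no rules listed, so all in/out traffic is denied
```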

Here all traffic (in and out) has been denied by the rule.

One issue solved.

Then we can use a service mesh like Linkerd or Istio, which provides advanced networking features, including mTLS for encrypted communication, traffic management, and observability.

Finally, for security at the network layer, Kubernetes supports various mechanisms, including IPsec or WireGuard, to encrypt traffic between nodes.

For example, some CNI plugins (such as Calico) let us enable this kind of encryption and ensure that all data transferred between nodes is encrypted.

Finally, we can isolate our pods and resources by namespace or network policy to make things much safer.

Kubectl Proxy servers

We know that all the certificates and credentials are kept in the kubeconfig file, which is what lets us query the cluster through the kube-apiserver.

This kubeconfig file (~/.kube/config) can be in the cluster, on our local system, on a VM, etc.

Assume that our kubeconfig with the necessary credentials is on our laptop and the Kubernetes cluster is running in the cloud.

So, how can we access the kube-apiserver?

If we simply curl the kube-apiserver on port 6443 (where it listens), we will be rejected.

Why? Because we need to prove to the kube-apiserver that we have the proper security credentials.

So, we can provide the key, certificate, and CA certificate kept in the kubeconfig file and then access the kube-apiserver.

But it’s painful, right? How to make it easier?

We can use kubectl proxy, which runs on the same device as our kubeconfig file.

Here the proxy server is created on the laptop, where it can access the security credentials.

Now, if we send a request to the kubectl proxy's port, it forwards the traffic to the kube-apiserver along with all the security credentials, and then returns the output from the kube-apiserver.
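A quick sketch of how that looks in practice (8001 is kubectl proxy's default port):

```bash
# Start the proxy locally; it reads the kubeconfig credentials for us
kubectl proxy --port=8001 &

# Plain HTTP to localhost now works; the proxy forwards it with the right credentials
curl http://localhost:8001/api/v1/pods
```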

Also, assume that we now want to access the nginx service. How do we do that?

We have to create a port forward on our laptop and connect it to the port of the nginx service.
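For example, assuming the service is simply named nginx:

```bash
# Forward local port 28080 to port 80 of the nginx service
kubectl port-forward service/nginx 28080:80

# In another terminal, reach nginx through the forwarded port
curl http://localhost:28080
```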

Here you can see the port-forward's local port 28080 was connected to port 80 of the nginx service.

Now, if we curl port 28080, we get the output from the nginx service.

Note: if you wonder what kubeconfig is, here's a quick reminder. Whenever we want some data or information, like the pod list, we need to send the key, certificate, and CA certificate with the curl request.

For example, assume our cluster name is my-kube-playground, 6443 is the kube-apiserver port at that address, and we want to see the pod list.
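The curl would look roughly like this (the certificate file names are illustrative):

```bash
curl https://my-kube-playground:6443/api/v1/pods \
  --key admin.key \
  --cert admin.crt \
  --cacert ca.crt
```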

We can see that we have 0 pods right now.

How to do that using kubectl?

Just use the get pods command and mention the cluster address and kube-apiserver port in the server option. Also provide admin.key, admin.crt, and ca.crt.
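A sketch of that kubectl command:

```bash
kubectl get pods \
  --server=https://my-kube-playground:6443 \
  --client-key=admin.key \
  --client-certificate=admin.crt \
  --certificate-authority=ca.crt
```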

You can see that we need to provide so many things in a single command. To solve this, we can keep them in a file called kubeconfig (~/.kube/config), and then we don't even need to mention them.

Every time we run a command, that kubeconfig file is used by default and we get the result.

Here we go! Same result, but we no longer need to pass any certificates or credentials on the command line.

Securing Storage

We can have multiple issues with the storage of our pods, which is usually provided through PersistentVolumes.

Let’s solve the encryption problem,

If a cloud platform is used, it gives us the option to encrypt the data kept on the disk.

To enable that, we need to create a StorageClass with encryption enabled.

We can also configure IOPS limits, backup policies, etc. on the StorageClass.

Here, the IOPS and encryption have been set up.
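A sketch of such a StorageClass, assuming the AWS EBS CSI driver (the provisioner and parameter names follow that driver; the values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"      # encrypt data at rest on the volume
  iops: "4000"           # provisioned IOPS limit
allowVolumeExpansion: true
```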

What about access control?
We can solve that by creating a Role and RoleBinding so that only the proper users can access the storage.

Finally, talking about backups, we can use tools like Velero, Portworx, etc.

We should also use Prometheus and Grafana to observe changes.

Done for this one!