Kubernetes 101 Security: Part 18

Scan images for known Vulnerabilities

Sometimes a developer finds a vulnerability and can either misuse it or report it to the CVE (Common Vulnerabilities and Exposures) database.

But which bugs are considered worth adding to the CVE database?

For example, a bug that lets a hacker view the payroll details of all employees, something only admins should have access to. Similarly, bugs that allow hackers to disrupt things at the production level qualify.

So, depending on the impact, these bugs can be given some sort of severity score.

Let’s see an example of a CVE-identified bug.

Here the nginx controller installs Kubernetes packages from an HTTP URL on Debian/Ubuntu systems. The severity score here is 7.3.

CVE Scanner

But what if the containers we are running have vulnerabilities in them? That can happen, and to find out we use a CVE scanner.

For example, assume you are using Envoy 1.14.2 and the CVE scanner finds the vulnerability associated with that version, CVE-2020-8663.

Now you can solve this issue either by updating to an Envoy version where this bug is resolved, or by removing packages that are not necessary.

Trivy is one such CVE scanner for containers and other artifacts, and it is suitable for CI/CD pipelines.

Once installed, we just need to specify the image we want to check for vulnerabilities.
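A minimal sketch of a scan (assuming Trivy is installed; the nginx:1.18.0 tag is just an example image):

```bash
# Scan a container image for known CVEs
trivy image nginx:1.18.0
```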

Here you can see 155 vulnerabilities detected.

We can also save an image in tar format and then scan the archive.
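For example (again using an example nginx image):

```bash
# Save the image as a tar archive, then scan the archive instead of pulling from a registry
docker save nginx:1.18.0 -o nginx.tar
trivy image --input nginx.tar
```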

We can also filter for only the critical issues in the image, ignore problems that do not yet have a fix, and so on.
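A sketch of those options:

```bash
# Show only CRITICAL findings and skip vulnerabilities with no fix available yet
trivy image --severity CRITICAL --ignore-unfixed nginx:1.18.0
```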

Some of the best practices we should follow overall are using minimal base images, removing packages we don't need, and scanning images regularly as part of the CI/CD pipeline.

Image repository Security

Image repository security matters as well: pull images only from trusted registries, keep private registries behind authentication, and control who can push to them.

Observability from a security perspective

It’s always good to observe changes live, because malicious activity can still happen even after taking precautions like these.

So, when we use observability tools, they can detect such changes and warn us.

There are tools like Falco that can watch the syscalls for us.

But this way there might be far too many calls to look at. So what we can do is look for activities that an admin would in general never do, like deleting logs or reading password hashes from /etc/shadow.

In this way, Falco can notify and warn us during such incidents.

Falco

Let’s explore Falco. First, let’s see what Falco actually does.

Falco needs to see the system calls coming from the application into the kernel space, so it must add something in the kernel space.

One way is to load the Falco kernel module, meaning we insert additional code right inside the Linux kernel. Some managed Kubernetes providers don't allow us to do that, so Falco can also hook in through eBPF (Extended Berkeley Packet Filter). eBPF is much less intrusive and safer, so those Kubernetes providers allow this method.

The system calls are then analyzed by the sysdig libraries in user space. The events are filtered by the Falco policy engine, which uses predefined rules to decide whether an event is suspicious and can inform us by email or a Slack message.

But how to install Falco?

If we install it as a package on the Linux operating system, this also installs the Falco kernel module, and Falco runs as a service. Because it sits outside the cluster, if Kubernetes ever gets compromised, Falco stays isolated and can still continue to detect and alert on suspicious behaviour.
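A minimal sketch of a host install on Debian/Ubuntu, assuming the falcosecurity apt repository has already been added to the node (the exact service name can differ between Falco versions):

```bash
# Install Falco as a host package (this also sets up the kernel driver)
sudo apt-get update
sudo apt-get install -y falco

# Run it as a systemd service
sudo systemctl enable --now falco
```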

We can also install Falco as a DaemonSet using its Helm chart.
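A sketch using the official falcosecurity chart (the release name and namespace are just examples):

```bash
# Add the Falco chart repository and install Falco as a DaemonSet
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco --namespace falco --create-namespace

# One Falco pod should come up per node
kubectl get pods -n falco -o wide
```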

Once installed, we can see falco pods running on all nodes of the cluster.

How does Falco detect threats?

Assume Falco was installed as a package on the host. Let's check that using systemctl.
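For example, on the node itself:

```bash
# Verify the Falco service is active on the host
sudo systemctl status falco
```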

Now let's create an nginx pod.

In a separate terminal, SSH into node1. Then use the following to inspect the events generated by the Falco service.
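A sketch of those two steps (the pod name is just an example):

```bash
# From the control plane: create a test nginx pod
kubectl run nginx --image=nginx

# From a shell on node1: follow the events emitted by the Falco service
journalctl -fu falco
```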

Then open a shell in the nginx container; this will generate some events in the Falco terminal. They include information like the container ID, the image used for nginx, the namespace, and so on.

Now, if we try to read the /etc/shadow file, where password hashes are kept, Falco creates an event flagging that a sensitive file was accessed.
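A sketch of what triggers those events, assuming Falco's default rules are loaded:

```bash
# Spawning a shell inside the container triggers a "shell in container" style event
kubectl exec -it nginx -- bash

# Inside the container, reading a sensitive file triggers another alert
cat /etc/shadow
```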

But how did Falco know that this might be risky? Interesting! It knew through its rules files (such as /etc/falco/falco_rules.yaml).

Let’s create our own rule as an example. Here, container.id gives the container ID, proc.name gives the process name, fd.name is the name of the file descriptor (typically the file path), evt.type filters by event (syscall) type, user.name filters by user, and container.image.repository filters the image by name.
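A minimal sketch of a custom rule using those fields (the rule name and output text are just examples; it could live in a local rules file like /etc/falco/falco_rules.local.yaml):

```yaml
- rule: Detect Shell in Container
  desc: Alert when a bash shell is opened inside a container
  condition: container.id != host and proc.name = bash
  output: "Bash opened (user=%user.name container=%container.id image=%container.image.repository)"
  priority: WARNING
```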

Check the Falco documentation for the other supported fields and filters.

The priority value can be set depending on the severity: EMERGENCY, ALERT, CRITICAL, ERROR, WARNING, NOTICE, INFORMATIONAL, or DEBUG.

This rule was set to alert when someone opens a bash shell. But rather than creating an individual rule for each shell binary, we can make use of a list, as shown below.
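A sketch of the same rule extended with a list and a macro (the list items and macro name are just examples):

```yaml
- list: linux_shells
  items: [bash, sh, zsh, ksh, csh]

- macro: spawned_process
  condition: evt.type = execve and evt.dir = <

- rule: Detect Shell in Container
  desc: Alert when any known shell is spawned inside a container
  condition: spawned_process and container.id != host and proc.name in (linux_shells)
  output: "Shell spawned (user=%user.name shell=%proc.name container=%container.id image=%container.image.repository)"
  priority: WARNING
```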

Here we have used a macro as well. Check the Falco documentation for more on lists and macros.

Service Mesh

Read the blog to get an idea about service meshes.

Security in Istio

Microservices must communicate over secured connections; otherwise a hacker could modify a request before it reaches its destination.

Istio provides proper encryption for that

Also, some services may need to implement access control restrictions. Istio uses mutual TLS for service-to-service authentication, with authorization policies on top of that for access control.
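A minimal sketch of turning on strict mutual TLS mesh-wide (istio-system is Istio's default root namespace; the exact API version may vary across Istio releases):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying in the root namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT             # only mTLS traffic is accepted between sidecars
```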

It also gives us audit logging.

Istio Security Architecture

In the Istio architecture, we have a Certificate Authority inside istiod. This is where certificate signing requests are approved and certificates are issued. Every time a workload starts, its Envoy proxy requests the certificate and key from the Istio agent running alongside it.

The configuration API server distributes the authentication, authorization, and secure naming policies to the proxies, and the sidecar, ingress, and egress proxies act as Policy Enforcement Points.

The certificates and keys, along with the authentication, authorization, and secure naming policies, are continuously pushed to these proxies.

Security Compliance Frameworks

There are frameworks/standards set up so that security can be ensured.

  • GDPR covers personal information rights in the European Union.

  • HIPAA covers health information privacy.

  • PCI DSS covers the security of cardholders' payment data.

  • NIST provides guidance on security against cyberattacks.

  • CIS provides standard benchmark configurations for services like Kubernetes.

Threat Modeling Framework

A threat modeling framework describes how to identify and address threats so that these compliance requirements can be met. Two of the famous threat modeling frameworks are STRIDE and MITRE ATT&CK.

STRIDE

  • S for Spoofing: an attacker tries to impersonate a legitimate user to access our frontend (internet-facing server). We can mitigate this with authentication.

  • T for Tampering: an attacker could attempt to alter data being processed by the backend server, either data in transit or data at rest. We can prevent that with data integrity checks and encryption.

  • R for Repudiation: a user might deny performing certain actions in our application.

  • I for Information Disclosure: a hacker wants to access information that he is not authorized to access.

  • D for Denial of Service: an attacker might try to overload our system, causing it to crash.

  • E for Elevation of Privilege: an attacker might gain unauthorized access to higher privilege levels, such as getting admin rights to our backend services.

MITRE ATT&CK

MITRE ATT&CK is a globally accessible resource that documents real-world tactics and techniques. It is used to help organizations build stronger defences by understanding those tactics.

Tactics are basically what attackers aim to do, and techniques are how they do it. The framework for Kubernetes focuses on how attackers target Kubernetes clusters and organizes their tactics and techniques into categories: Initial Access covers how attackers enter the cluster (such as exploiting weak authentication), then you have Execution, and so on.

Check out the MITRE ATT&CK framework for more details.

Supply Chain Compliance

Supply chain compliance involves verifying that all the external components and services you integrate into your application meet security and compliance standards.

According to the blog, supply chain focuses on four core areas (Artifact, Metadata, Attestations, Policies)

Automation and Tooling

Read the pdf and the app.

Identify security issues early in the software development lifecycle, aligning with the shift-left approach: the later in the cycle you identify security-related issues, the more costly and time-consuming they become to fix. Addressing vulnerabilities early reduces risk, streamlines development, and ensures a smoother delivery process. So, by embedding security practices into the early stages of development, teams can proactively prevent issues.

We want to establish clear policies that developers can follow, such as disallowing high-severity vulnerabilities that have available fixes, enforcing non-root container images, and using only approved base images. One of the tools used during the develop phase is Google's OSS-Fuzz, which helps identify bugs and vulnerabilities in open-source software by performing extensive automated fuzz testing.

There are also tools like the Snyk Code extension, which lets you analyze your code, open-source dependencies, and infrastructure as code.

fabric8 by Red Hat integrates seamlessly with VS Code to help enforce security and compliance right from your development environment.

We also have tools like KubeLinter.
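A quick sketch of a KubeLinter run (the manifest path is just an example):

```bash
# Statically lint Kubernetes manifests for security and best-practice issues
kube-linter lint ./deploy/deployment.yaml
```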

After that, in the Distribute phase, we build the pipelines and push the built images to a container registry.

So, we build pipelines and then run the quality tests (app tests); then we want to ensure that the manifest files created are secure and compliant with our policies; and then we want to make sure the registries/images are scanned for vulnerabilities (security checks).

So, once we are done with the container manifest file checks (container manifest), the container image is built, we do vulnerability testing against the image (security tests), and prior to pushing it to the container registry, we do digital signing.
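A rough sketch of what those distribute-phase checks could look like as CLI steps (the image name, file paths, and registry are just examples, and this assumes kubesec, trivy, and cosign are installed):

```bash
# Check the Kubernetes manifest against security best practices
kubesec scan deployment.yaml

# Build the image and scan it for known CVEs before it leaves the pipeline
docker build -t registry.example.com/myapp:1.0 .
trivy image --severity HIGH,CRITICAL registry.example.com/myapp:1.0

# Push, then digitally sign the image with Sigstore's cosign
docker push registry.example.com/myapp:1.0
cosign sign registry.example.com/myapp:1.0
```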

Now, in the Deploy phase, we have to do some pre-flight checks and use observability and investigation tools.

In the Runtime phase, we manage the orchestration, the service mesh, storage, and access.

So, overall, this is how various tools can help us maintain a secure system.

To recap: in the Develop phase we can find vulnerabilities using tools like OSS-Fuzz, Snyk Code, fabric8, and KubeLinter.

Then, in the Distribute phase, we build pipelines using ArgoCD, FluxCD, etc., run the app tests, check the container manifests using Terrascan or Kubesec, and do the security tests using Trivy, Clair, etc. Finally, we sign the image using Sigstore and push it to the container registry.

Then, in the Deploy phase, we do pre-flight checks and ensure observability and investigation. Finally, at Runtime, we manage orchestration, the service mesh, storage, and access.