Docker Basics: Part 2

When we install Docker, we actually get three components: the Docker daemon, the Docker REST API, and the Docker CLI.

The Docker daemon manages the images, containers, volumes, etc. in the background. The Docker REST API is the interface programs use to talk to the daemon. The Docker CLI is the command-line client.

We can also run the Docker CLI on one device and have it talk to a Docker engine running on a remote host.

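The CLI's -H option points it at a remote daemon. A sketch (the address 10.123.2.1:2375 is a hypothetical example; 2375 is the daemon's conventional unencrypted TCP port):

```shell
# Run a container on a remote Docker engine from the local CLI.
docker -H=10.123.2.1:2375 run nginx
```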

Containerization

Docker uses Linux namespaces to isolate each container's workspace: process IDs, network interfaces, mounts, and so on each get their own namespace, which provides the isolation between containers.
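A quick way to see PID-namespace isolation (the container name ns-demo is just an illustration):

```shell
# Start a container whose main process is a long sleep.
docker run -d --name ns-demo ubuntu sleep 1000
# Inside the container's PID namespace, sleep is PID 1.
docker exec ns-demo ps -ef
# On the host, the very same process has an ordinary, much higher PID.
ps -ef | grep "sleep 1000"
```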

Containers share the host's CPU and memory, and by default there is no restriction on how much a container can use.

But we can use cgroups to limit how much of those resources a container gets.

Here --cpus=.5 is used to allocate at most 50% of the CPU to the ubuntu container, and --memory=50m limits it to 50 MB of memory.
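Put together, the run command would look like this:

```shell
# cgroups in action: cap the container at half a CPU and 50 MB of RAM.
docker run --cpus=.5 --memory=50m ubuntu
```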

Docker storage

When we install Docker, it stores all of its data under /var/lib/docker by default.

Within that folder it keeps all of the files related to images, containers, volumes, and so on.

But how does it store them?

Let’s see how a Dockerfile works.

Once the Dockerfile is built, it creates a set of layers, and each layer takes up some space.
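For illustration, a minimal Dockerfile like this (the app.py application is hypothetical) produces roughly one layer per instruction:

```dockerfile
# Layer: base image
FROM ubuntu
# Layer: installed packages
RUN apt-get update && apt-get install -y python3
# Layer: application source (app.py is a made-up example)
COPY app.py /opt/app.py
# Image configuration: what to run on start
ENTRYPOINT ["python3", "/opt/app.py"]
```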

If we build a different Dockerfile that uses the same commands, Docker reuses the layers created earlier, so no extra space is wasted.

Now, let's talk about which of these layers we can actually change.

Once we build an image from a Dockerfile, its layers are read-only. We can’t modify them.

But we can create containers from these layers. Each container gets its own writable container layer on top of the read-only image layers, and in that layer we can create, modify, and remove files.

So, we can work in the container layer but not in the image layer. What do we do if we want to modify a file (say app.py) that lives in the image layer?

Docker copies the file into the container layer (copy-on-write) so we can work on it there, but once we delete the container, the change is gone!

To persist the data, this is what we can do:

We can create a volume inside Docker's volumes folder (/var/lib/docker/volumes).

We can then run the container with its data directory (/var/lib/mysql) mounted on the volume we created (under /var/lib/docker/volumes) just a moment ago.

We used -v <our volume>:<container’s volume location>
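Concretely, the two steps might look like this (the volume name data_volume and the mysql image follow the usual example):

```shell
# Create a volume; it lives under /var/lib/docker/volumes/data_volume.
docker volume create data_volume
# Mount it over the container's data directory so MySQL writes into it.
docker run -d -v data_volume:/var/lib/mysql mysql
```

Now the database files survive the container.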

Now, if we remove the container, the data will still be there.

Assume we create another container and connect it to a volume that does not exist yet (data_volume2).

In that case, Docker will create the volume for us and mount it. This is called volume mounting.
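So referencing a volume name that doesn't exist yet is enough:

```shell
# data_volume2 was never created with "docker volume create";
# Docker creates and mounts it automatically (volume mounting).
docker run -d -v data_volume2:/var/lib/mysql mysql
```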

We can also store our data outside of Docker's volumes folder, for example in /data/mysql. We just need to specify the full path and mount it.

This is called bind mounting

This bind mounting can be done with the -v flag as well, or with the newer --mount syntax.
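Sketching both forms (the /data/mysql path comes from the example above):

```shell
# Bind mounting the old way: a full host path instead of a volume name.
docker run -d -v /data/mysql:/var/lib/mysql mysql
# The newer, more explicit --mount syntax does the same thing.
docker run -d \
  --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql
```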

Networking

When we install Docker, three networks are created: bridge, none, and host.

Once a container is created, it is connected to the bridge network by default.

Or, if we want to connect it to a different network, we can specify that with --network=<network name>.

Bridge Network:

It is created on the Docker host (the laptop you run Docker on). It assigns each container an internal IP, and the containers can then communicate among themselves.

Host network

You can run your container directly on the device's ports rather than on the container's own isolated ports. Without this, the container exposes a specific port of its own where we can access it, and we map the traffic to a port on our local device using -p <device's port>:<container's port>.

But with the host network we don't need that mapping; we can just use the device's port to access the container.
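Comparing the two approaches (the port numbers here are just examples):

```shell
# Bridge network: map device port 8080 to the container's port 80.
docker run -d -p 8080:80 nginx
# Host network: no mapping needed; nginx listens directly on the
# device's port 80 (so no two containers can share that port).
docker run -d --network=host nginx
```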

None network: if we connect our container to the none network, it is isolated and runs without any network connection at all.

User Defined Network

Assume we are not happy with the single default bridge network and want another one (for example, one with gateway 182.18.0.1) to connect our containers to.

We can do that too!

For example,

Say we need to create a new network named wp-mysql-network using the bridge driver, allocate the subnet 182.18.0.0/24, and configure the gateway as 182.18.0.1.

Code:

docker network create --driver bridge --subnet 182.18.0.0/24 --gateway 182.18.0.1 wp-mysql-network

We can then create a container and connect it to this bridge network.

Task: Deploy a mysql database using the mysql:5.6 image and name it mysql-db. Attach it to the newly created network wp-mysql-network

Set the database password to db_pass123. The environment variable to set is MYSQL_ROOT_PASSWORD.

Code:

docker run -d -e MYSQL_ROOT_PASSWORD=db_pass123 --name mysql-db --network wp-mysql-network mysql:5.6

How to connect two containers?

Assume we want to connect our web container to mysql container.

From the web container, use the MySQL container's name as the hostname, e.g. mysql.connect(mysql-db), since Docker's built-in DNS server (at 127.0.0.11) resolves container names to their IPs.

Docker Registry

Docker Hub and others are registries for images, i.e. the places where images are stored.

For example, when we are pulling an nginx image,

It actually pulls from docker.io/library/<image name>.

There are private registries as well.

We can also work with a private registry the same way: we just replace localhost with the IP address (or hostname) of our private registry.

Let's practice deploying a registry server of our own.
Run a registry server named my-registry using the registry:2 image, with host port set to 5000 and the restart policy set to always.

Note: Registry server is exposed on port 5000 in the image.

Here we are hosting our own registry using the open source Docker Registry.

Code: docker run -d -p 5000:5000 --restart=always --name my-registry registry:2

Now it's time to push some images to our registry server. Let's push two images for now, i.e. nginx:latest and httpd:latest.

Note: Don't forget to pull them first.

To check the list of pushed images, use curl -X GET localhost:5000/v2/_catalog

Run docker pull nginx:latest, then docker image tag nginx:latest localhost:5000/nginx:latest, and finally push it using docker push localhost:5000/nginx:latest.


We will use the same steps for the second image: docker pull httpd:latest, then docker image tag httpd:latest localhost:5000/httpd:latest, and finally push it using docker push localhost:5000/httpd:latest.

Container Orchestration

To continuously monitor the health and status of our containers and take action when something goes wrong, we need something to manage them. That is what orchestration does for us.

There are tools like Kubernetes, Docker Swarm, etc. that provide container orchestration.
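With Docker Swarm, for example, a single command asks the orchestrator to keep several copies of a service running (the replica count here is arbitrary):

```shell
# Run 3 replicas of nginx across the swarm; if one dies,
# swarm starts a replacement to maintain the desired count.
docker service create --replicas=3 nginx
```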