Docker Basics: Part 1
Why do we need Docker?
Applications and their dependencies often conflict with each other and with the underlying OS, and setups break when moved to a different environment. Docker solves this by packaging each application with its dependencies into an isolated container.
Containers share the host's OS kernel, so a container only needs to be compatible with that same kernel (for example, any Linux distribution on a Linux host) and does not carry a full guest OS.
Virtual machines take time to boot because each VM runs a full operating system, whereas containers are lightweight and start in seconds.
VMs and containers also work well together: many architectures run containers on top of VMs and manage both.
To run a container
docker run <image_name>
To list running containers
docker ps
To list all containers, including exited ones
docker ps -a
To remove a container
docker rm <container_id>
To list all images
docker images
To remove an image
docker rmi <image_name>
To run a container in the background (detached mode)
docker run -d <image_name>
example: docker run -d ubuntu
To keep the container from exiting right away, give it something to run, such as sleep
docker run <image_name> sleep <seconds>
example: docker run ubuntu sleep 100
To pull an image without running it
docker pull <image>
To execute a command inside a running container (make sure the container is running in the background using docker run -d, and then execute the command)
docker exec <container_id> <command>
Example: docker run -d ubuntu sleep 100 ; docker exec 982820382a cat /etc/*release
To run a different version of an image: by default Docker uses the "latest" tag, and versions are also called tags
docker run <image_name>:<tag>
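For example, using the redis image (the 4.0 tag is just an illustration here):
docker run redis:4.0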
To work with a container that asks for input with a string prompt, use -i for interactive mode (so it accepts input) and -t to attach a pseudo-terminal (so the prompt and output are shown while asking for the input)
docker run -i <container image>
docker run -it <container image>
Assume this app takes a name as input and prints output using that name
So, to interact with our container, which runs the same code, in the same way, we use -it
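A quick sketch of that, assuming the prompting app is packaged in an image (the image name here is just a placeholder, not confirmed by these notes):
docker run -it kodekloud/simple-prompt-docker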
Port mapping
Assume that your web app runs on port 5000 inside its container.
A user could reach it through the container's internal IP address on port 5000, but that internal IP is assigned when the container starts, is only reachable from within the Docker host, and changes from host to host, so it isn't a reliable way to expose the app.
To solve this, we use the host's IP and map one of the host's ports to the web app's port. Here in 80:5000, 80 is the host laptop's port and 5000 is the web app's port inside the container.
In this way, you can expose multiple containers through different ports of the host.
But note: one host port can be mapped to only one container port; the same host port can't be reused for another container. A sketch follows below.
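A sketch of that mapping with -p, assuming a web app image (kodekloud/webapp is just a placeholder name) that listens on port 5000 inside the container:
docker run -p 80:5000 kodekloud/webapp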
Volume mapping
Assume that you run the MySQL server and a container comes up.
All of its data is stored inside the container at /var/lib/mysql.
You can add important data to this container,
but if sometime later you delete the container,
boom! all of that data is gone with it.
To keep the data safe, we can map a directory on our host laptop to the container's data directory.
Here our host laptop's /opt/datadir directory is mapped to the MySQL container's /var/lib/mysql directory.
This way, when we add data, it actually lands in /opt/datadir on the host.
If we delete the container later on, the host directory still has all of the data.
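A sketch of that volume mapping with -v, using the paths from these notes and the official mysql image:
docker run -v /opt/datadir:/var/lib/mysql mysql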
To inspect all details of a container, use the inspect command
docker inspect <container id>
How can we create our own image?
Let's containerize a simple web app (code kept in app.py) built with Flask (a Python-based web framework).
Here, we have to create a Dockerfile that contains all of the instructions needed to run our app.py.
We start from an Ubuntu-based OS image, get the latest package lists using apt-get update, then install Python to run our app.py file. Since our Python app uses the Flask web framework, we also install flask and flask-mysql.
Then we copy the source code into the image at /opt/source-code.
Then FLASK_APP is set to the path of the app and the flask run command is specified as the ENTRYPOINT (the ENTRYPOINT is what runs when the container starts, bringing up the service/app).
The format was roughly as follows:
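A sketch of that Dockerfile, reconstructed from the steps above (the exact package names and paths are assumptions):
FROM ubuntu

RUN apt-get update
RUN apt-get install -y python python-pip
RUN pip install flask flask-mysql

COPY . /opt/source-code

ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run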
Then build the image.
We can cd into the folder that has our Dockerfile and run
docker build -t mmumshad/my-custom-app .
This means: build from the Dockerfile in the current folder (the trailing .) and tag the resulting image as mmumshad/my-custom-app.
Once we build the image, Docker creates a set of layers, one layer for each instruction in the Dockerfile.
If you run the history command (docker history <image_name>), you can see the size of the Ubuntu layer and of the others. If the image is large (check docker images), we can modify the Dockerfile to use a lighter base image (instead of plain ubuntu, use a lighter tag such as ubuntu:20.04 in the Dockerfile).
All built layers are cached. So, if a step fails, the next build reuses the earlier layers from the cache and continues building only the remaining layers.
Thanks to the cache, the process becomes much faster!
Then push the image to Docker Hub to make it available to others (name it <dockerhub_username>/<image_name>).
Now anyone can pull and use this my-custom-app image.
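A sketch of the push, assuming you are already logged in to Docker Hub with docker login:
docker push mmumshad/my-custom-app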
Environment variables
Assume this is our app.py file, where we expect the color variable to take its value from outside the code: color = os.environ.get('APP_COLOR').
Now, once we set APP_COLOR to blue, the app's color turns blue.
This is how we run a container whose app expects an environment variable (APP_COLOR). Here, simple-webapp-color is an image whose application reads the APP_COLOR variable.
We set its value with -e APP_COLOR=blue, and the app turns blue.
So we can change the color just by changing the value each time we run the command.
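A sketch of that run command, using the image name from these notes:
docker run -e APP_COLOR=blue simple-webapp-color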
You can also check all of a container's environment variables using docker inspect <container_id>; you will see them under the Config section of the output.
We can also give a container our own name with --name when running it from an image.
Let's run a container named blue-app using the image kodekloud/simple-webapp, set the environment variable APP_COLOR to blue, and make the application available on port 38282 on the host. The application itself listens on port 8080 inside the container.
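A sketch of that command (flag order is flexible):
docker run -d --name blue-app -e APP_COLOR=blue -p 38282:8080 kodekloud/simple-webapp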
You can then verify the container named blue-app with docker ps.
Why does a container exit?
Here you can see that the ubuntu container was run, but it exited within 43 seconds. Why?
VMs are designed to host a full operating system, but a container exists to run a specific task or process, such as the one defined in its Dockerfile. Once that process finishes, the container exits.
For example, this is the Dockerfile for the ubuntu image.
Here, CMD defines which command runs when the container starts; for ubuntu that command is bash (a shell). bash looks for a terminal to attach to, and if it can't find one, it exits.
So earlier, when we launched the ubuntu container, it started the bash program. By default Docker does not attach a terminal to a container when it runs, so bash could not find a terminal and exited.
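As a rough sketch, the relevant instruction in the ubuntu image's Dockerfile is something like:
CMD ["bash"]
and the way to keep it alive interactively is simply to attach a terminal:
docker run -it ubuntu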
But how do we run a different command from the one specified in CMD?
We can append our own command to docker run, and it will override the command in CMD, as shown below.
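For example, overriding ubuntu's default bash command with a sleep:
docker run ubuntu sleep 5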
Now assume you want the ubuntu container to sleep for 5 seconds every time it's launched. Surely you would modify the Dockerfile, right?
Say you name this new image "ubuntu-sleeper".
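A sketch of that Dockerfile, following the course example:
FROM ubuntu

CMD sleep 5
Build it with docker build -t ubuntu-sleeper . and docker run ubuntu-sleeper will now sleep for 5 seconds.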
Now assume you want to increase the sleep time to 10, but this time from outside the Dockerfile, say from the terminal.
Instead of writing docker run ubuntu-sleeper sleep 10, we can write just docker run ubuntu-sleeper 10.
We can pass only the number of seconds because the image now uses ENTRYPOINT instead of CMD: whatever we append on the command line is passed as an argument to the ENTRYPOINT command.
So sleep now runs for 10 seconds!
But what if someone forgets to provide the sleep seconds? Then what?
To handle that, we can use ENTRYPOINT and CMD together.
If no argument is provided, the CMD value is appended to the ENTRYPOINT command as the default.
If someone does provide the seconds, that value goes to ENTRYPOINT instead, and the CMD default of 5 is ignored.
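A sketch of the combined Dockerfile, as in the course example:
FROM ubuntu

ENTRYPOINT ["sleep"]

CMD ["5"]
With this, docker run ubuntu-sleeper runs sleep 5, while docker run ubuntu-sleeper 10 runs sleep 10.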
Docker Compose
Rather than running every service our project needs with separate docker run commands, we can describe them in a YAML file and bring them up together.
Let's work on a sample voting application.
One part handles voting and another part shows the results.
The voting app is a Python-based application that takes votes and stores them in a Redis database. A worker, which is a .NET application, then processes the votes and updates a PostgreSQL database. Finally, a Node.js application shows the vote results.
Now, this is how we would do that using docker run
We can run each of the containers, giving each one a specific name with --name.
Note: 5000:80 means the app listening on port 80 inside the container is reachable on port 5000 of our laptop (host port 5000 mapped to container port 80).
But we haven't connected the containers to each other yet, right?
Let's connect the voting app to Redis with --link <name of the redis container>:<hostname the voting app is looking for>.
In the current voting app's code, get_redis() connects to a host named redis; since the Redis container is also named redis, the link is --link redis:redis.
Similarly, the result app's source code expects a connection to a Postgres database on a host named db. We connect the result app to the Postgres container the same way, <name of the postgres container>:<hostname used in the source code>, so --link db:db.
Then we connect the worker to both redis and db (in the worker's updated code you can still see the connection variables, e.g. pgsql and redisConn). A sketch of all the run commands follows below.
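A sketch of the full set of docker run commands with links (the container and image names follow the course's voting-app example and are assumptions here):
docker run -d --name=redis redis
docker run -d --name=db postgres
docker run -d --name=vote -p 5000:80 --link redis:redis voting-app
docker run -d --name=result -p 5001:80 --link db:db result-app
docker run -d --name=worker --link db:db --link redis:redis worker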
We can do the same thing with a docker-compose.yml file.
Instead of all those docker run commands, we can now describe everything in one compose file.
We followed this format:
<container name>:
  image: <image>:<tag>
  ports:
    - <host port>:<container port>
  links:
    - <name of the container it links to>
Also, if we want to build an image ourselves instead of pulling one, we can do this:
we just replace the image: line with build: <path to the directory that contains the application code and the Dockerfile with instructions to build the image>.
For example, for the voting app this can be the vote folder.
The same can be done for the other services we build ourselves; a full compose sketch follows below.
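A sketch of the complete docker-compose.yml in the version 1 style used here (service names and folder paths follow the course's voting-app example and are assumptions):
redis:
  image: redis
db:
  image: postgres
vote:
  build: ./vote
  ports:
    - 5000:80
  links:
    - redis
result:
  build: ./result
  ports:
    - 5001:80
  links:
    - db
worker:
  build: ./worker
  links:
    - redis
    - db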
Docker Compose has several file format versions; in version 2 and version 3 we see changes such as a top-level services: section, an explicit version: field, depends_on instead of links, and so on.
Docker network
Let's create two networks. We connect the first one (front-end traffic) to the voting app and the result app.
Then we connect all the other components to the back-end network.
Let's make these changes in the version 2 file (other settings like depends_on, ports, etc. are still there, but for simplicity they aren't shown here).
So, first we declare the network names under a networks: section.
Then we list them under each service that uses them.
For example, the voting app (vote) and the result app (result) are connected to both the front-end and back-end networks.
The other services are connected only to the back-end network, as the sketch below shows.
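A minimal sketch of that version 2 file, assuming the service names above (ports, depends_on, etc. omitted for simplicity, as in the notes):
version: "2"
services:
  redis:
    image: redis
    networks:
      - back-end
  db:
    image: postgres
    networks:
      - back-end
  vote:
    image: voting-app
    networks:
      - front-end
      - back-end
  result:
    image: result-app
    networks:
      - front-end
      - back-end
  worker:
    image: worker
    networks:
      - back-end
networks:
  front-end:
  back-end: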