Keet Malin Sugathadasa

An Introduction to Docker - Guide for Absolute Beginners


Today I thought of writing a little bit about Docker and diving deeper into its inner concepts. The reason is that many people know what Docker is, but cannot explain why we need it for some tasks and not for others.


Also, some of the concepts in Docker can be a bit confusing, and I intend to explain each concept simply for readers to understand. I will go as far as C-Groups, leaving container orchestration for the next blog. The contents of this blog are as follows.

  • Why do we need Docker?

  • What are Containers

  • Docker Hub

  • Containers vs Images

  • Docker in the DevOps culture

  • Creating my own docker image

  • Docker layered architecture

  • Networking in Docker

  • Storage in Docker

  • Docker Engine

  • Docker Control Groups

  • Docker Commands

 

Why do we need Docker?


The Matrix from Hell


Assume you want to develop your own application stack which runs on different run-time environments. When you start building an end-to-end application, you will surely need a web server (assume Node JS) and a database (assume Mongo DB). You might need other platforms like Redis for caching purposes. But we need to ensure that all these services are compatible with the underlying Operating System we are planning to use. Sometimes you will face situations where the underlying OS is not compatible, and you will have to change the stack or the OS as required. I have faced situations where I had to downgrade one service and upgrade another so that both were compatible with the same OS and could run together.





It's not just about the underlying kernel. It's also about the libraries that we run on top of it. There are instances where one service requires one version of a dependent library and another service requires a different version. Whenever we plan on changing components, we have to go through the process of checking compatibility with the underlying infrastructure. This is what is commonly known as the Matrix From Hell.




If you are using cloud platforms like AWS, where managed services are provided for databases and applications, you will not face this issue. With the Matrix From Hell, developers and engineers run into various issues and concerns. Whenever a new developer joins, the following concerns get raised.

  • Setting up a new environment following a large set of instructions and dependencies

  • Long set up time

  • Different Dev/ Prod/ QA environments

  • How to ship the application from one environment to another

This is why we, as developers, need something to help with these compatibility issues; something that allows us to modify and change services without affecting the other components. This is where Docker comes into action.


With Docker, we can run each component in a separate container, with its own dependencies and its own libraries. To get this up and running irrespective of the Operating System, all we have to do is install Docker on top of the OS.



 

What are Containers


Containers are completely isolated environments which can have their own processes, network interfaces and their own mounts, just like Virtual Machines. The difference is that Docker containers share the same OS kernel.


Types of Containers


Docker is just one example of container technology. Containers have existed for a long time, and a few types from that classification are given below with references. Explaining these in detail is beyond the scope of this blog, so I have attached a reference for anyone who is interested.


  • LXC - Linux Containers

This is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel. Docker originally utilized LXC. (Read more)


  • LXD

LXD is the new LXC experience. It offers a completely fresh and intuitive user experience with a single command line tool to manage your containers. (Read more)


  • LXCFS

LXCFS is a simple userspace filesystem designed to work around some current limitations of the Linux kernel. (Read more)


Docker Shares the same Kernel


Every operating system has its own kernel. We install Docker on top of this, and Docker containers utilize the underlying OS kernel. The system that hosts the containers is called the Docker Host. The kernel allows the containers to interact with the hardware, while each container maintains its own environment and software.


Any docker container is supported as long as the underlying OS kernel is compatible with the container you are trying to run. You cannot run a Windows-based docker container on a Linux Docker Host; for that, you need a Windows-based Docker Host.



One thing to understand is that Docker is not meant to act as a hypervisor and run different Operating Systems on the same kernel. The main purpose of Docker is to package, share and ship applications that can run anywhere, anytime. This is the key difference between containers and virtual machines.




 

Docker Hub (Docker Store)


Most of the applications we require and use in our day-to-day systems are containerized and readily available in a public docker registry called Docker Hub. This is a platform used by organizations to share docker images across multiple systems and developers. It can be a public docker store or a private one.


If you wish to run a container from an image that is readily available in the public Docker registry, you can simply run the following command.


>> docker run fedora


Assuming the image is not already cached locally, the output will look roughly like this (the layer IDs and digests below are illustrative).
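
Unable to find image 'fedora:latest' locally
latest: Pulling from library/fedora
<layer-id>: Pull complete
Digest: sha256:<digest>
Status: Downloaded newer image for fedora:latest

Note that since the image's default command is a shell and no interactive terminal is attached, the container exits immediately after starting. Add the -it flags to docker run to get an interactive shell inside the container.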


 

Containers vs Images


A docker image is a package or a template of a given application with its environment. A container is a running instance of that image.


Many of the systems we use day to day are already dockerized and available as images. All you have to do is use one for your purpose. If not, you can always create your own image, push it to a Docker Hub repository and run it as a container.



 

Docker in the DevOps Culture


Developers generally develop the application and hand it over to the Operations team to deploy in production. The necessary instructions are passed along by the Developers to the Operations team, and there is always friction between the two when trying to deploy a service into production.


This is where Docker brings out the DevOps culture: the Developers create an image of the service and ship it to the Ops team for deployment. Deploying then becomes a matter of running a simple command to get the server up and running. The Ops team does not have to worry about creating environments, installing dependencies and so on.



 

Creating my own Docker Image


When creating your own Docker image, you are responsible for setting up your own environment by installing the required dependencies and libraries. Given below is a step-by-step approach to building your own docker image. In this example, we will containerize a simple Python Flask server into an image.


Step 1: Create a folder with your source files


Have a folder with all the source files you require for your program to run. All of the files in this folder will be copied into the docker image as they are, into a folder we specify in the Dockerfile.
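
For illustration, assume the folder contains a single hypothetical source file, app.py, holding a minimal Flask server:

# app.py - a minimal Flask server (hypothetical example file)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Docker!"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the server is reachable from outside the container
    app.run(host="0.0.0.0", port=5000)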


Step 2: Create a Dockerfile


This file lists the instructions required to build and run your program, including the installation of the libraries and dependencies your program needs to function.


A minimal sketch of such a Dockerfile for the Flask example above (assuming the app listens on port 5000) is given below.
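
FROM python:3.8
WORKDIR /opt/source-code
# Copy all source files from the build context into the image
COPY . .
# Install the dependencies the app needs
RUN pip install flask
# Start the Flask server when the container starts
ENTRYPOINT ["python", "app.py"]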




Step 3: Build the image from the Dockerfile


You can specify any name as the tag for the image using the "-t" flag. This command will create an image locally on your system.


>> docker build -t flask-app .


Step 4: Push your image to DockerHub


This will push your image to the Docker public registry. First log in to Docker Hub using your credentials, and make sure the image is tagged with your username (for example via docker tag flask-app <username>/flask-app), since images on Docker Hub are namespaced by account.


>> docker push <username>/flask-app



 

The Docker Layered Architecture


When docker builds images, it builds them in a layered architecture. Each instruction in the Dockerfile creates a new layer in the docker image, containing just the changes from the previous layer.

Each layer only stores the changes from the previous layer, so each layer is responsible only for its own size. Run the following command to see the layers in your docker image.


>> docker history <imageName>


When the docker build command is run, each layer gets built one by one and gets cached. This layered architecture helps you restart the build process from a certain point if the build fails: only the remaining layers need to be rebuilt.


The biggest advantage is that if two Dockerfiles share the same initial instructions, the second build can directly reuse the already built, cached layers from the first.
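
As a hypothetical illustration, the two Dockerfiles below share their first two instructions, so when the second image is built those layers are taken straight from the build cache:

# Dockerfile for app1
FROM ubuntu
RUN apt-get update && apt-get install -y python3
COPY app1.py /opt/
ENTRYPOINT ["python3", "/opt/app1.py"]

# Dockerfile for app2 - the first two layers are reused from the cache
FROM ubuntu
RUN apt-get update && apt-get install -y python3
COPY app2.py /opt/
ENTRYPOINT ["python3", "/opt/app2.py"]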


 

Networking in Docker


When you install docker on your machine (the host), it creates three networks automatically. You can specify the network when running the docker run command; note that the --network flag must come before the image name.


>> docker run ubuntu

>> docker run --network=none ubuntu

>> docker run --network=host ubuntu

  • Bridge (The default network the container gets attached to)

  • None

  • Host


Bridge Network


This is a private internal network created by docker on the host. All containers on the host get attached to this network by default, and each gets an internal IP in the 172.17.x.x range.


If you wish to connect to a container externally, you will have to map a port on the container to a port available on the host.


How to create multiple internal bridge networks?


By default, docker creates only one bridge network for all containers. You can create additional internal networks of your own and assign containers to them. Use the following command to create a network.


>> docker network create --driver bridge --subnet 180.10.0.0/16 any-custom-name


You can view all available networks by running this command.


>> docker network ls


You can view the network details of a container by running the inspect command.


>> docker inspect <containerID / containerName>


Host Network


If you run the container in the host network mode, it takes out any isolation between the docker container and the docker host. If the container runs on port 5000, it can be accessed through the host's network on the same port, without any explicit port mapping.


None Network


The containers are not attached to any network. Each container is isolated on its own.


Embedded DNS in Docker


Containers can reach each other using their container names; there is no need to refer to each container by its internal IP. If I have a MySQL server running, I can connect to it from another container using the container name of the MySQL server container.


The advantage of using the container name is that the internal IP addresses may change when the host or the container restarts, whereas the name stays the same.
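
As a hypothetical example (note that name-based resolution through Docker's embedded DNS works for containers attached to the same user-defined network; my-web-app is an illustrative image name):

>> docker network create --driver bridge my-app-net
>> docker run -d --name mysql-db --network=my-app-net -e MYSQL_ROOT_PASSWORD=my_secret mysql
>> docker run -d --name webapp --network=my-app-net my-web-app

The webapp container can now reach the database simply by using "mysql-db" as the hostname in its connection string.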


 

Storage in Docker Containers


To understand this, you need to clearly understand the Docker layered architecture. When a docker image is built, it builds a set of Image Layers, which are read-only. When you run a container, the already built Image Layers are used. Docker uses Storage Drivers to maintain the layered architecture.


Docker will choose a suitable Storage Driver based on the underlying Kernel and availability.




If you wish to change these Image Layers, you will have to change the instructions in the Dockerfile and rebuild the image.


Any files created while the container is running are stored in the Container Layer, which is also called the Read-Write Layer. When the container is removed, the Container Layer is destroyed with it. If we wish to persist the data in a container, we have to create and attach volumes.


Docker Volumes


You can create a volume in Docker using the below command. The volume gets created on the Docker Host.


>> docker volume create any_preferred_name


Then you can run a container by mounting this volume to the Read-Write Layer (Container Layer) of the container.


>> docker run -v volume_name:/var/lib/mysql mysql


There are two types of Mounting in Docker.


  • Volume Mounting: This will mount a Docker volume to a container.

  • Bind Mounting: This will mount a directory on the Host to a container.
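
Both types can also be expressed with the newer --mount syntax, which spells out the mount type explicitly. For example, a bind mount (the host path and password here are illustrative):

>> docker run -e MYSQL_ROOT_PASSWORD=my_secret --mount type=bind,source=/opt/datadir,target=/var/lib/mysql mysql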


 

Docker Compose


Docker Compose is a tool used to bring up an entire application stack in one go. If I had a complete system which contains a server, Redis, MySQL, etc., I would have to run docker run for each of these images to get the entire application stack working, and I might have to do the proper linking and networking one by one as intended.


Explaining what happens in the docker-compose up command is beyond the scope of this blog. This part is just to let beginners know that such an option exists.
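
As a small taste of what this looks like, below is a minimal, hypothetical docker-compose.yml for such a stack (my-web-app is an illustrative image name). Running "docker-compose up" in the same folder brings all three services up together.

version: "3"
services:
  webapp:
    image: my-web-app            # hypothetical application image
    ports:
      - "80:5000"                # host port 80 -> container port 5000
  redis:
    image: redis
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: my_secret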


 

Docker Engine


If you are installing Docker on a machine, you are actually installing three different components.

  • Docker Daemon: a background process that manages docker objects

  • REST API: the interface that programs use to talk to the docker daemon

  • Docker CLI: the command line interface used by users




You can have your Docker CLI on one machine and access the Docker Engine of another machine via the REST API, using the -H flag.





>> docker -H=remote_docker_engine_ip:port <command>
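
For example, assuming the remote daemon listens on TCP port 2375 (the conventional unencrypted Docker port) at an illustrative address, you could start an Nginx container on the remote host like this:

>> docker -H=10.123.2.1:2375 run nginx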


 

Docker Control Groups


The underlying Docker Host and the running containers share the same resources, such as CPU and memory. By default, there is no restriction on how many resources a container can use, and a container that goes rogue can starve the host and the other containers of resources.


Docker can use C-Groups (Control Groups) to restrict the amount of Hardware resources that can be used by a container.


>> docker run --cpus=.5 ubuntu


The above command ensures that the container does not consume more than 50% of the host's CPU.


>> docker run --memory=100m ubuntu


The above command ensures that the container does not use more than 100 MB of the host's memory.


 

Most used Docker Commands


Feel free to run each of the commands below to get some hands-on experience.


Run a docker container


This will run a docker container using an image. First, docker checks whether the image is saved on the host machine; if not, it fetches the latest image from Docker Hub. For subsequent executions, the locally saved image is used.


>> docker run <imageName>
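
For example, the following will run an instance of Nginx using the official nginx image:

>> docker run nginx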


Running a different container version using Tags


If you do not wish to run the latest version of an image and want an older version instead, you can specify a TAG after a colon. If you don't specify a tag, docker assumes the latest tag (e.g. redis:latest).


>> docker run redis:4.0


List all running containers


This command will list all the running containers. Each container's ID, the auto assigned name, and other details will be visible.


>> docker ps


If you give the "-a" flag, it will show all the containers which are currently running as well as the ones previously stopped.


>> docker ps -a


Stopping a container


Provide the container ID or the auto-assigned name to stop the container, with the following command.


>> docker stop <containerID / containerName>


Remove the docker container permanently


This command removes the docker container permanently. Once it has run, the container will no longer appear even in the "docker ps -a" output.


>> docker rm <containerID / containerName>


View the list of images downloaded locally


This command will list the images saved locally.


>> docker images


Removing images saved locally


>> docker rmi <imageName>


Pull an image from DockerHub and save locally


>> docker pull <imageName>


Executing a command on a docker container


>> docker exec <containerName / containerId> <command>


Eg:

>> docker exec dummy_container cat /etc/hosts


Running a container in the Background Mode


The container will continue to run in the background, detached from your terminal.


>> docker run -d <imageName>


Running a container with a PORT Mapping


Every docker container running on a docker host gets an internal IP assigned; assume it is 172.17.0.2 and that the container exposes port 5000. From the Docker Host itself, you can open a browser, go to this IP and port, and access the running container.


But what if you wish to access the container from outside the host? Then you will have to connect to the IP address of the Docker Host; in this example, assume it is 192.168.1.5. You will also have to map the container port to an available port on the Docker Host. The command below does this, mapping port 80 of the host to port 5000 of the container.


>> docker run -p 80:5000 <dockerImage>


Mapping an External Volume to a Container


If you intend to run a MySQL DB in your container, the data will be stored within the container, and when you remove the container, the data will be lost. This is where you can map the container's data directory to a directory outside the container, on the Docker Host.


>> docker run -v /opt/datadir:/var/lib/mysql mysql



Inspect more details about a container


This returns all the details of a container in JSON format, including the port configurations and the environment variables that were set.


>> docker inspect <containerName / containerID>


View the logs of a container


>> docker logs <containerName / containerID>


Run a container by passing an Environment Variable


This runs a docker image while passing a value for an environment variable named APP_ENV_NAME.


>> docker run -e APP_ENV_NAME=value <dockerImage>
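
As a concrete case, the official MySQL image reads its root password from an environment variable (the password value here is illustrative):

>> docker run -e MYSQL_ROOT_PASSWORD=my_secret mysql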


