Unpacking Docker For Beginners🐳

Hey everyone, welcome to a new blog! In today's post we are going to learn about Docker, so without wasting any time, let's get started.

Docker Architecture

Client

Docker users interact with Docker through the Docker client. The client uses the Docker API to communicate with the Docker daemon, and notably, a single client can connect to multiple daemons. When you execute a docker command in the terminal, the client translates it into a request using the REST API and sends it to the daemon for processing.

The primary function of the Docker client is to direct the daemon to pull images from a Docker registry and run them on the Docker host. Common commands issued by the client include docker build, docker pull, and docker run.
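
You can see this client-server split directly: docker version reports the client and the daemon (server) as two separate components.

# The client and the daemon are separate programs; the output has
# a "Client" section and a "Server" (the daemon) section
$ docker version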

Docker Daemon

At the heart of Docker is the Docker daemon. It is the service that orchestrates everything related to container lifecycle management, which means the daemon is responsible for tasks like creating, running, and monitoring containers. In short, it acts as a bridge between the client and the Docker engine in a client-server architecture: commands issued by the client are interpreted by the daemon as actionable operations to be performed within the Docker environment.
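
A quick way to confirm the daemon is running and see what it manages is docker info, which the client forwards to the daemon over the API:

# Ask the daemon for host-level details: running containers,
# images, storage driver, and more
$ docker info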

Docker Images

A Docker image is essentially a packaged filesystem plus the instructions necessary to create a Docker container. Images are read-only templates that can be shared and duplicated, much like snapshots in other VM environments.
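
Since images are stacks of read-only layers, you can inspect how any image was assembled. For example, with the nginx image:

# Pull an image, then list its layers (roughly one per Dockerfile instruction)
$ docker pull nginx
$ docker history nginx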

Dockerfile

A Dockerfile is written in a simple DSL (Domain-Specific Language) and holds the instructions for creating a Docker image. It outlines the steps needed to produce that image. Order matters: the Docker daemon executes the instructions from top to bottom, so while building your application you should write the Dockerfile in the proper sequence.

REST API

It is a programming interface used to talk to the Docker daemon and give it instructions. You can use this API to build your own Docker tools.
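
For instance, on a Linux host where the daemon listens on its default Unix socket, you can call the REST API directly with curl, bypassing the CLI entirely:

# Ask the daemon for its version over the raw REST API
$ curl --unix-socket /var/run/docker.sock http://localhost/version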

Docker CLI

This is the CLI that we have used so far to perform actions. It uses the REST API to communicate with the Docker daemon. One point to remember here: the Docker CLI doesn't need to live on the same host as the daemon. It can sit on a completely separate machine and still talk to a remote Docker engine. To do that, just invoke:

$ docker -H=<remote-docker-engine>:<port-number>

# Example: run nginx on a remote Docker engine
$ docker -H=10.123.2.1:2375 run nginx
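
Equivalently, you can point the CLI at the remote engine once via the DOCKER_HOST environment variable instead of passing -H on every command:

# All subsequent docker commands now target the remote engine
$ export DOCKER_HOST=tcp://10.123.2.1:2375
$ docker run nginx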

Build Your Dockerfile

Every instruction in the Dockerfile creates a layer that the next step builds on top of when the image is built. Let's take a simple Dockerfile as an example.

# Base image: Node.js on Alpine Linux
FROM node:alpine
# Copy everything from the build context into the image
COPY . ./
# Install dependencies while building the image
RUN npm install
# Command to execute when the container starts
CMD ["npm", "start"]

This is a Dockerfile for a basic Node.js API.

  • FROM : This specifies the base image your application runs on. It is one of the mandatory instructions in every Dockerfile.

  • COPY : A simple copy command that takes two arguments: the source, in this case . (everything in the build context), and the destination inside the image, in this case ./ (the root directory).

  • RUN : A command you want to run while the image is being built. Here it is npm install, because the node_modules folder is required to run the Node.js application.

  • CMD : This is where you place the command to execute once the container is created and started. You can keep adding arguments to this array; if it's just one command, you don't necessarily have to use the array format. A build-and-run example follows this list.
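
Putting it together, here is how you might build and run this image (the image name my-node-api and port 3000 are just examples; use whatever port your app listens on):

# Build the image from the Dockerfile in the current directory
$ docker build -t my-node-api .

# Run it in the background, forwarding host port 3000 to the app
$ docker run -d -p 3000:3000 my-node-api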

Basic Docker Commands

Status Checking Commands

#Check docker version
$ docker -v

#See all the images in docker
$ docker image ls #[OR]
$ docker images

#See the running containers
$ docker container ls

#See all the containers
$ docker container ls -a

#See the running docker processes
$ docker ps

#See all the docker processes
$ docker ps -a

#Inspect a docker container
$ docker inspect <container-name>/<container-id>

#Inspect a docker image
$ docker inspect <image-name>/<image-id>

#See all the networks on host
$ docker network ls
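
docker inspect dumps a large JSON document. You can pull out a single field with the --format flag, for example a container's internal IP on the default bridge network:

# Extract just the container's internal IP from the inspect output
$ docker inspect --format '{{.NetworkSettings.IPAddress}}' <container-name>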

Execution Commands

# Pull an image from dockerhub
$ docker pull <image-name>

# Run the image (if not present on device then it will pull and run)
$ docker run <image-name>

# Run the image with specific version
$ docker run <image-name>:<version>

# Run the image with attached linux command (Command comes after image name)
$ docker run <image-name> echo hello

# Run the image in interactive mode
$ docker run -it <image-name>

# Run the image in detached mode (in background)
$ docker run -d <image-name>

# Access the application running in the container from your machine with port forwarding
$ docker run -d -p <local-port(eg: 8080)>:<service-default-port> <image-name>
eg:
# You can now access nginx at http://localhost:8080
$ docker run -d -p 8080:80 nginx

# Start Container
$ docker start <container-name>/<container-id>

# Stop Container
$ docker stop <container-name>/<container-id>

# Remove image
$ docker rmi <image-name>/<image-id> -f

# Remove container
$ docker rm <container-name>/<container-id>

# Get the logs of the running container
$ docker logs <container-id>
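
Here is a quick end-to-end run tying these commands together (the container name my-nginx is just an example):

# Run nginx in the background with a friendly name
$ docker run -d -p 8080:80 --name my-nginx nginx

# Check its logs, then stop and remove it
$ docker logs my-nginx
$ docker stop my-nginx
$ docker rm my-nginx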

Building and Deploying Commands

# Build an image from your Dockerfile (the path is the build context, e.g. .)
$ docker build -t <image-name>:<image-version> <dockerfile-path>

# Push the image to docker hub
$ docker push <hub-user>/<repo-name>:<tag>
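
Pushing requires the image to be tagged with your Docker Hub username and that you are logged in. For example (hub-user and the repo name are placeholders):

# Tag a local image for your Docker Hub repository, log in, and push
$ docker tag my-node-api <hub-user>/my-node-api:1.0
$ docker login
$ docker push <hub-user>/my-node-api:1.0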

Networking in Docker

When you install Docker, it creates three networks automatically: bridge, none, and host.

Bridge

  • It is the default network that a container gets attached to.

  • It is a private, internal network created by Docker on the host.

  • Every container gets an internal IP address, and containers can reach each other using these internal IPs.

  • To make a container accessible from outside, you must map its port to a port on the Docker host.

  • You can have more than one web container on a single Docker host listening on the same internal port, as long as each one is mapped to a different host port, as shown below.
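
For example, two nginx containers can both listen on port 80 inside their own namespaces while being exposed on different host ports:

# Two web containers, same internal port 80, different host ports
$ docker run -d -p 8080:80 nginx
$ docker run -d -p 8081:80 nginx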

None

  • In this mode the container is not attached to any network.

  • This means it has no connection to outside networks or to other containers on the same Docker host.

  • The container runs in a completely isolated network.

  • Use the following command to bind this network to your container.

$ docker run --network=none <image-name>
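
You can verify the isolation quickly (assuming the alpine image): inside a none-network container only the loopback interface exists.

# List network interfaces inside an isolated container;
# only "lo" (loopback) should appear
$ docker run --rm --network=none alpine ip addr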

Host

  • Attaching the container to the host network removes any network isolation between the Docker host and the container, so the container can be reached externally without port mapping.

  • So, if you have a web app container listening on port 5000, it can be accessed externally on the same port, without any port mapping.

  • This also means you can't have multiple containers running on the same Docker host on the same port, since ports are now shared by all containers on the host network.

  • Use the following command to add this network to your container.

$ docker run --network=host <image-name>

Example of host networking, taking the same nginx example from above:

# with port mapping http://localhost:80
$ docker run -d -p 80:80 nginx

# without port mapping, host networking http://localhost:80
$ docker run -d --network=host nginx

User-Defined Networks

When you create containers on the same Docker host, only one default bridge (e.g. 172.17.0.1) is created, and it connects all the containers. If you want a new bridge on the same host, perhaps with the IP 182.18.0.1, you have to create it through the following command.

$ docker network create \
    --driver bridge \
    --subnet 182.18.0.0/16 \
    custom-isolate-network
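
Once created, you can attach containers to the new bridge and inspect it:

# Run a container on the custom network, then inspect the network
$ docker run -d --network custom-isolate-network nginx
$ docker network inspect custom-isolate-network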

Docker Compose

Container Orchestration

With a simple docker run command you create one instance of your application, but that's just one instance. What if you wanted multiple instances? You would have to run the docker run command multiple times. Not just that: you would have to closely monitor the health of each container and spin up a new instance whenever a running one goes down. And what about the health of the Docker host itself? If the host goes down, the containers hosted on it become inaccessible too.

Orchestration is a set of tools and scripts that helps us host containers in a real production environment. A container orchestration solution generally has multiple Docker hosts that can run containers, so even if one fails, the application is still accessible through the others. The following is the command used in Docker Swarm to run 100 replicas of a service.

$ docker service create --replicas=100 nodejs
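
Once a service is created, Swarm keeps the declared number of replicas running. Assuming the service was given a name (e.g. with --name nodejs), you can list and rescale it at any time:

# List running services and change the replica count on the fly
$ docker service ls
$ docker service scale nodejs=50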

There are several orchestration solutions out there these days.

  • Docker Swarm

  • Kubernetes

  • Mesos

Well, we went pretty deep into Docker, ranging from basic commands to architecture, but that's all for today's blog.

Thank you for reading!✨

Like | Follow | Subscribe to the Newsletter.