Docker

[box]Dockerizing an application is the process of converting an application to run within a Docker container. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and deploy it as one package. [/box]

[box]Images and containers[/box]

Fundamentally, a container is nothing but a running process, with some added encapsulation features applied to it in order to keep it isolated from the host and from other containers. One of the most important aspects of container isolation is that each container interacts with its own private filesystem; this filesystem is provided by a Docker image. An image includes everything needed to run an application – the code or binary, runtimes, dependencies, and any other filesystem objects required.

In general, the development workflow looks like this:

  1. Create and test individual containers for each component of your application by first creating Docker images.
  2. Assemble your containers and supporting infrastructure into a complete application.
  3. Test, share, and deploy your complete containerized application.

Docker containers are building blocks for applications. Each container is an image with a readable/writable layer on top of a stack of read-only layers. These layers (also called intermediate images) are generated when the commands in the Dockerfile are executed during the Docker image build.

$ docker history [image-name] => show all the intermediate images

Docker cache: Each time Docker executes an instruction, it builds a new image layer. The next time, if the instruction hasn't changed, Docker simply reuses the existing layer, which makes builds much faster.
In some cases (such as apt-get update), the cached layer may be out of date. In such cases we can pass --no-cache=true to the docker build command.
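For example, to force a completely fresh build without reusing any cached layers (the image name myapp is just a placeholder):

$ docker build --no-cache=true -t myapp:latest .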

[box]Installing Docker on CentOS[/box]

$ sudo yum install -y yum-utils
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install docker-ce docker-ce-cli containerd.io
$ sudo systemctl start docker
$ sudo systemctl enable docker
$ sudo docker run hello-world
$ ls -l /var/lib/docker => inspect Docker's working directory, where images and containers are stored

$ docker ps => show all running containers

[box]Build and run your image[/box]

General rules for building containers:

  • Use root privileges to build containers, non-root users to run them
  • Choose standard base images
  • Pull a specific tag when pulling an image
  • Use ENV instructions to store useful information
  • Restrict the container to one process (no extra access like ssh)
  • Don't use / (root) as your build directory, because that makes every file on the system part of the build context handed to the docker command

Let us download the node-bulletin-board example project. This is a simple bulletin board application written in Node.js.
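One way to fetch the project, assuming you have git installed and that the sample still lives in the dockersamples organization on GitHub:

$ git clone https://github.com/dockersamples/node-bulletin-board
$ cd node-bulletin-board/bulletin-board-app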

After downloading the project, take a look at the file called Dockerfile in the bulletin board application. Dockerfiles describe how to assemble a private filesystem for a container, and can also contain some metadata describing how to run a container based on this image.

Now that you have some source code and a Dockerfile, it’s time to build your first image, and make sure the containers launched from it work as expected.
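From the directory containing the Dockerfile, a build along these lines produces the image (the tag is chosen to match the success message mentioned below):

$ docker build --tag bulletinboard:1.0 .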

Docker uses the latest tag as the default when no tag is provided. Images tagged latest are not updated automatically when a newer version of the image is pushed to the repository.

$ docker images => list all available images
You'll see Docker step through each instruction in your Dockerfile, building up your image as it goes. If successful, the build process should end with the message Successfully tagged bulletinboard:1.0.

Run the following command to start a container based on your new image (a concrete example follows the option list below):

Syntax: $ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]

-v => mount a volume into the container
-t => attach a pseudo-tty to the container
-i => make the container interactive (keep STDIN open)
-m => constrain the memory available to the container
-c => set the container's CPU shares (relative CPU priority)
--name => set a specific container name
-e => set environment variables that the process running in the container can see
-h => set the hostname within the container
-p => map container ports to host ports
-P => publish all ports exposed by the container to random high ports on the host
-d / --detach => ask Docker to run the container in the background
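Putting some of these options together, a plausible way to start the bulletin board image built above looks like this; it publishes the container's port 8080 (the port in the EXPOSE instruction of the example Dockerfile below) on host port 8000, and the container name bb is arbitrary:

$ docker run --publish 8000:8080 --detach --name bb bulletinboard:1.0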

[box]Dockerfile[/box]

Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build ($ docker build .), users can create an automated build that executes several command-line instructions in succession.

Here is the format of the Dockerfile:

# Comment
INSTRUCTION arguments

Instructions are not case-sensitive. However, the convention is to write them in UPPERCASE to distinguish them from arguments more easily.
These instructions can go in your Dockerfile:
FROM : identifies the base image (must be the first instruction)
MAINTAINER : identifies the author in the Author field of the image (deprecated in favor of LABEL maintainer=...)
RUN : executes a command while building an image
Each RUN instruction executes its command on the top writable layer of the container, then commits the container as a new image. It is recommended to chain RUN instructions in the Dockerfile to reduce the number of image layers created.
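For instance, chaining two package-manager commands into a single RUN produces one layer instead of two:

RUN apt-get update && apt-get install -y git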
CMD : sets command to execute when the container starts
EXPOSE : documents the ports the container listens on; the ports are actually published to the host at runtime with -p or -P
ENV : sets environmental variables to pass to the runtime command
ADD : copies files into the container
The ADD instruction can not only copy files but also download a file from the Internet into the container, and it automatically unpacks local compressed archives.
The rule is to use COPY for the sake of transparency, unless you're absolutely sure you need ADD.
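As a small illustration (the file names are placeholders), ADD unpacks a local archive automatically, while COPY performs a plain copy:

# a local tar archive is unpacked into the target directory
ADD app.tar.gz /opt/
# a plain copy, no unpacking
COPY config.yml /etc/app/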
ENTRYPOINT : sets the executable the container runs when it starts

VOLUME : mounts storage from the host into the container
USER : identifies the user assigned to running the container
WORKDIR : sets current working directory for RUN, CMD and ENTRYPOINT commands run in the container
ONBUILD : used when building base images, to add instructions a subsequent build would use to add code or data to the image that were not available during the initial base build
LABEL : adds metadata to an image (ex: LABEL version="1.0")

[button link="https://docs.docker.com/engine/reference/builder/" color="orange" newwindow="yes"] Read more about how to create a Dockerfile [/button]

An example Dockerfile:
FROM node:current-slim
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
EXPOSE 8080
CMD [ "npm", "start" ]
COPY . . => copy all other files to the working directory

[box]Commit changes made in a Docker container[/box]

When working with Docker images and containers, one of the basic features is committing changes to a Docker image. When you commit changes, you essentially create a new image with an additional layer that modifies the base image layer.

example:
$ docker run -it debian:jessie
# apt-get update && apt-get install -y git
# exit
$ docker commit [container_id] [repository_name:tag]

[box]Docker Compose[/box]

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

The features of Compose that make it effective are:

  • Multiple isolated environments on a single host
  • Preservation of volume data when containers are created
  • Recreation of only those containers that have changed
  • Variables, and moving a composition between environments

Using Compose is basically a three-step process:

  1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  3. Run docker-compose up and Compose starts and runs your entire app.

Installing docker-compose:
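On Linux, the Compose documentation describes downloading the binary from GitHub; a typical installation looks like this (1.25.0 is an example version — substitute a current release):

$ sudo curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
$ docker-compose --version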

A Sample compose file:
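Here is a minimal sketch of a docker-compose.yml with two assumed services — a web application built from the current directory and a Redis cache:

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: redis:alpine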

[button link="https://docs.docker.com/compose/compose-file/" color="orange" newwindow="yes"] Read more about compose-file [/button]

[box]Docker Registry[/box]

A Docker registry is a storage and distribution system for named Docker images. The same image may exist in multiple versions, identified by their tags.

A Docker registry is organized into Docker repositories, where a repository holds all the versions of a specific image. The registry allows Docker users to pull images locally, as well as push new images to the registry (given adequate access permissions where applicable).

By default, the Docker engine interacts with Docker Hub, Docker's public registry instance. However, it is possible to run the open-source Docker registry (Distribution) on-premises, as well as a commercially supported version called Docker Trusted Registry. There are other public registries available online.

Use cases for running a private registry on-premise (internal to the organization) include:

  • Distributing images inside an isolated network (not sending images over the Internet)
  • Creating faster CI/CD pipelines (pulling and pushing images from internal network), including faster deployments to on-premise environments
  • Deploying a new image over a large cluster of machines
  • Tightly controlling where images are being stored

Running a private registry system, especially when delivery to production depends on it, requires operational skills such as ensuring availability, logging and log processing, monitoring, and security. A strong understanding of HTTP and overall network communication is also important.

Some vendors provide their own extensions of the open source Docker registry. These can help alleviate some of the above operational concerns:

  • Docker Trusted Registry is Docker Inc.'s commercially supported version, providing high availability via replication, image auditing, signing and security scanning, and integration with LDAP and Active Directory.
  • Harbor is a VMware open source offering that also provides high availability via replication, image auditing, and integration with LDAP and Active Directory.
  • GitLab Container Registry is tightly integrated with GitLab CI's workflow, with minimal setup.
  • JFrog Artifactory offers strong artifact management (not only Docker images but any artifact).

Run a local registry

$ docker run -d -p 5000:5000 --restart=always -v /mnt/registry:/var/lib/registry --name registry registry:2
  1. Pull the ubuntu:16.04 image from Docker Hub.
    $ docker pull ubuntu:16.04
    
  2. Tag the image as localhost:5000/my-ubuntu. This creates an additional tag for the existing image. When the first part of the tag is a hostname and port, Docker interprets this as the location of a registry, when pushing.
    $ docker tag ubuntu:16.04 localhost:5000/my-ubuntu
    
  3. Push the image to the local registry running at localhost:5000:
    $ docker push localhost:5000/my-ubuntu
    
  4. Remove the locally-cached ubuntu:16.04 and localhost:5000/my-ubuntu images, so that you can test pulling the image from your registry. This does not remove the localhost:5000/my-ubuntu image from your registry.
    $ docker image remove ubuntu:16.04
    $ docker image remove localhost:5000/my-ubuntu
    
  5. Pull the localhost:5000/my-ubuntu image from your local registry.
    $ docker pull localhost:5000/my-ubuntu

[box]Docker container links[/box]

Container linking allows multiple containers to link with each other. It can be a better option than exposing ports externally.

$ docker run -d -p 5000:5000 --link redis dockerapp:v0.3

The main use for Docker container links is in applications built with a microservice architecture, where many independent components run in different containers. Docker creates a secure tunnel between the linked containers that doesn't need to expose any ports externally on the container. Note that links are a legacy feature; user-defined networks (see the Networking section below) are now the recommended way to connect containers.
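As a sketch of the modern equivalent, a user-defined bridge network provides the same name-based discovery without links (reusing the image name from the example above; app-net is a placeholder network name):

$ docker network create app-net
$ docker run -d --network app-net --name redis redis
$ docker run -d --network app-net -p 5000:5000 dockerapp:v0.3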


[box]Networking[/box]

One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads.

Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:

  • bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate. See bridge networks.
  • host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly. host is only available for swarm services on Docker 17.06 and higher. See use the host network.
  • overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers. See overlay networks.
  • macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack. See Macvlan networks.
  • none: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services. See disable container networking.
  • Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors. See the vendor’s documentation for installing and using a given network plugin.
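As a quick sketch, the following creates a user-defined bridge network, attaches a container to it, and inspects the result (the network and container names are placeholders):

$ docker network create --driver bridge my-bridge-net
$ docker run -d --name web --network my-bridge-net nginx
$ docker network inspect my-bridge-net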

[box]Monitor your Docker system[/box]

$ docker info => System-wide Docker information

$ docker version => Docker version information

$ docker top CONTAINER => Show the processes running in a container

$ docker diff CONTAINER => Show container file system changes

$ docker history [OPTIONS] IMAGE => Show the build history of an image

$ docker events => Get real-time events from the server
