Docker 1.13 Experimental features

Docker 1.13 was released last week. Some of the significant new features include Compose support to deploy Swarm mode services, backward compatibility between Docker client and server versions, “docker system” commands to manage the Docker host, and a restructured Docker CLI. In addition to these major features, Docker introduced a bunch of experimental features in the 1.13 release. In every release, Docker introduces a few new experimental features. These are features that are not yet ready for production use. Docker ships them in experimental mode so that it can collect feedback from users and make changes before the feature is officially released in a subsequent release. In this blog, I will cover the experimental features introduced in Docker 1.13.

Following are the regular features introduced in Docker 1.13:

  • Deploying a Docker stack on a Swarm cluster with Docker Compose.
  • Backward compatibility between the Docker CLI and the Docker daemon. This allows a newer Docker CLI to talk to older Docker daemons.
  • New CLI groupings like “docker container” and “docker image” that collect related commands under a sub-command (see the examples after this list).
  • Docker host details using “docker system”. This helps in maintaining the Docker host, cleaning up unused data, and getting container usage details.
  • Docker secret management.
  • “docker build” with a --compress option for slow connections.
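
As a quick illustration of the regrouped CLI and the “docker system” commands referenced above (these are regular 1.13 commands; output omitted):

docker container ls     # grouped equivalent of "docker ps"
docker image ls         # grouped equivalent of "docker images"
docker system df        # disk usage of images, containers and volumes
docker system prune     # clean up unused data on the Docker host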

Following are the 5 features introduced in experimental mode in Docker 1.13:

  • Experimental daemon flag to enable experimental features instead of having a separate experimental build.
  • Docker service logs command to view logs for a Docker service. This is needed in Swarm mode.
  • Option to squash image layers to the base image after successful builds.
  • Checkpoint and restore support for Containers.
  • Metrics (Prometheus) output for basic container, image, and daemon operations.

Experimental Daemon flag

Docker released experimental features prior to the 1.13 release as well. In earlier releases, users needed to download a separate experimental build to try out experimental features. To avoid this unnecessary overhead of maintaining different builds, Docker introduced an experimental flag for the Docker daemon so that users can start the daemon with or without experimental features. In the 1.13 release, the experimental flag itself is in experimental mode.

By default, the experimental flag is turned off. To see its current value, check the Server section of “docker version” output:

Server:
 Version:      1.13.0
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   49bf474
 Built:        Tue Jan 17 09:50:17 2017
 OS/Arch:      linux/amd64
 Experimental: false

To turn on experimental mode, the Docker daemon needs to be restarted with the experimental flag enabled.

Experimental flag in Ubuntu 14.04:
For Ubuntu 14.04, Docker daemon options are specified through the Upstart init system. This is how I enabled experimental mode in Ubuntu 14.04:
Edit /etc/default/docker and set:

DOCKER_OPTS="--experimental=true"

Restart the Docker daemon:

sudo service docker restart

Check that experimental mode is turned on by executing “docker version”:

Client:
 Version:      1.13.0
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   49bf474
 Built:        Tue Jan 17 09:50:17 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   49bf474
 Built:        Tue Jan 17 09:50:17 2017
 OS/Arch:      linux/amd64
 Experimental: true

Experimental flag in Ubuntu 16.04:
For Ubuntu 16.04, Docker daemon options are specified through a systemd drop-in file. This is how I enabled experimental mode in Ubuntu 16.04:
Edit docker.conf:

# cat /etc/systemd/system/docker.service.d/docker.conf 
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --experimental=true

Restart the Docker daemon:

sudo systemctl daemon-reload
sudo systemctl restart docker
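
Alternatively, experimental mode can be enabled through the daemon configuration file, which works regardless of the init system. The following is a minimal sketch, assuming no conflicting --experimental option is also passed on the dockerd command line:

# /etc/docker/daemon.json
{
  "experimental": true
}

sudo systemctl restart docker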

Experimental mode with Docker Machine:
There are instances where we need to create Docker hosts using Docker Machine. Docker Machine can be used to create Swarm clusters for development as well as to create Docker hosts with any cloud provider. To set experimental mode using docker-machine, we can use the --engine-opt option as shown below:

docker-machine create --driver virtualbox --engine-opt experimental=true test

It is very nice to have experimental mode present in the default Docker build. What I am not sure about is whether the presence of experimental features can destabilize base Docker features even when experimental mode is turned off.

Docker service logs

Container debugging starts with looking at “docker logs” output for the specific container. Docker swarm mode, along with the Docker service abstraction, was introduced in Docker 1.12. A single Docker service with its associated containers can be spread across multiple nodes. In Docker 1.12, there was no logging at the service level, which made it difficult to debug service-level problems; it was also painful to chase the container logs of a single service spread over multiple nodes. “docker service logs”, introduced in 1.13, provides service-level logging.
Following is my 2-node Docker swarm mode cluster:

$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
mpa2rgbjb0b7ijqve3cvo74w3    worker    Ready   Active        
sx035ztm94x9naml9t7pqdm8g *  manager   Ready   Active        Leader

For this example, I have used the sample voting application described here. The application is deployed from a Compose file using “docker stack deploy” as shown below.

docker stack deploy --compose-file voting_stack.yml vote
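
The full voting_stack.yml ships with the sample application. As a rough sketch (not the full file; the image and replica count match the service listing below, the published port is illustrative), a version 3 Compose file that “docker stack deploy” accepts looks like this:

version: "3"
services:
  vote:
    image: dockersamples/examplevotingapp_vote:after
    ports:
      - "5000:80"
    deploy:
      replicas: 2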

Following are the services running:

$ docker service ls
ID            NAME             MODE        REPLICAS  IMAGE
391v9t5hub74  vote_redis       replicated  2/2       redis:alpine
e5v71i26ah0y  vote_result      replicated  2/2       dockersamples/examplevotingapp_result:after
masj0y4xp90a  vote_visualizer  replicated  1/1       dockersamples/visualizer:stable
q2pip3fudbgb  vote_worker      replicated  0/1       dockersamples/examplevotingapp_worker:latest
tgt7tx6sorje  vote_db          replicated  1/1       postgres:9.4
tmhk9k6ubjz0  vote_vote        replicated  2/2       dockersamples/examplevotingapp_vote:after

Let's check the status of the service “vote_vote”:

$ docker service ps vote_vote
ID            NAME         IMAGE                                      NODE     DESIRED STATE  CURRENT STATE          ERROR  PORTS
cjvzrzc18nta  vote_vote.1  dockersamples/examplevotingapp_vote:after  worker   Running        Running 4 minutes ago         
aqw5yysav42y  vote_vote.2  dockersamples/examplevotingapp_vote:after  manager  Running        Running 4 minutes ago

In the above output, we can see that the service is composed of 2 containers: one running on the manager node and the other on the worker node.

Let's look at the service logs associated with vote_vote:

$ docker service logs vote_vote
vote_vote.1.xsn0m3al4jfz@worker    | [2017-01-21 08:17:47 +0000] [1] [INFO] Starting gunicorn 19.6.0
vote_vote.1.xsn0m3al4jfz@worker    | [2017-01-21 08:17:47 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
vote_vote.1.xsn0m3al4jfz@worker    | [2017-01-21 08:17:47 +0000] [1] [INFO] Using worker: sync
vote_vote.1.xsn0m3al4jfz@worker    | [2017-01-21 08:17:47 +0000] [9] [INFO] Booting worker with pid: 9
vote_vote.1.xsn0m3al4jfz@worker    | [2017-01-21 08:17:47 +0000] [10] [INFO] Booting worker with pid: 10
vote_vote.1.xsn0m3al4jfz@worker    | [2017-01-21 08:17:47 +0000] [12] [INFO] Booting worker with pid: 12
vote_vote.1.xsn0m3al4jfz@worker    | [2017-01-21 08:17:47 +0000] [11] [INFO] Booting worker with pid: 11
vote_vote.2.tpr51pvw9211@manager    | [2017-01-21 08:17:50 +0000] [1] [INFO] Starting gunicorn 19.6.0
vote_vote.2.tpr51pvw9211@manager    | [2017-01-21 08:17:50 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
vote_vote.2.tpr51pvw9211@manager    | [2017-01-21 08:17:50 +0000] [1] [INFO] Using worker: sync
vote_vote.2.tpr51pvw9211@manager    | [2017-01-21 08:17:50 +0000] [9] [INFO] Booting worker with pid: 9
vote_vote.2.tpr51pvw9211@manager    | [2017-01-21 08:17:50 +0000] [10] [INFO] Booting worker with pid: 10
vote_vote.2.tpr51pvw9211@manager    | [2017-01-21 08:17:50 +0000] [11] [INFO] Booting worker with pid: 11
vote_vote.2.tpr51pvw9211@manager    | [2017-01-21 08:17:50 +0000] [12] [INFO] Booting worker with pid: 12

In the above output, we can see the logs associated with both containers of the service.

Docker squash image layers

A Docker container image consists of multiple layers, which a union filesystem combines into a single image. Each instruction in the Dockerfile results in a separate image layer. Sharing image layers between different container images provides storage efficiency. In certain scenarios, though, the presence of many image layers adds unnecessary overhead, and some users prefer not to expose the layer history for security reasons. With the Docker --squash option, all new image layers are combined with the parent to reduce the size of the image. There was a discussion about squashing to the parent versus squashing to a scratch image; the current decision is to squash to the parent to allow for base-image reuse. The individual layers are still preserved in the build cache so that image builds on the build machine remain fast.

Let's take a simple container image built from busybox and illustrate how squash works.

Dockerfile:

$ cat Dockerfile 
FROM busybox
RUN echo hello > /hello
RUN echo world >> /hello
RUN touch remove_me /remove_me
ENV HELLO world
RUN rm /remove_me

Let's first build the container image with the default options, which do not enable squashing, and save the image into a tar file:

docker build -t nosquash .
docker save nosquash -o nosquashimage.tar

Let's look at the image layers:

$ docker history nosquash
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
0b3e1b58bdfa        3 weeks ago         /bin/sh -c rm /remove_me                        0 B                 
2c8090cbf777        3 weeks ago         /bin/sh -c #(nop)  ENV HELLO=world              0 B                 
5e6bc1925f7d        3 weeks ago         /bin/sh -c touch remove_me /remove_me           0 B                 
a90740ad3307        3 weeks ago         /bin/sh -c echo world >> /hello                 12 B                
5b4e51667cd1        3 weeks ago         /bin/sh -c echo hello > /hello                  6 B                 
7968321274dc        4 weeks ago         /bin/sh -c #(nop)  CMD ["sh"]                   0 B                 
<missing>           4 weeks ago         /bin/sh -c #(nop) ADD file:707e63805c0be1a...   1.11 MB

As we can see in the above output, each instruction in the Dockerfile is represented by an entry in the image history.

Following are the contents of “manifest.json”, which lists the image layers. We get manifest.json by untarring “nosquashimage.tar”.

$ cat manifest.json  | jq .
[
  {
    "Layers": [
      "59c553be1ded32f51e74244c9c54ca27050fb6b843a08b8b1edc8d7205690b7f/layer.tar",
      "8c034db2f83411ec1a40efaba29b3239845e98cd1a0d7380d7b2c71a2c9a9947/layer.tar",
      "436c6ca87d2cf07e3825b7508c4703bb8f5db8f85c60b1cef1b4677b68856021/layer.tar",
      "4c5cb32265ceca6d71152063e0de14dc915fdeddd2882f3c1171ca06327d70da/layer.tar",
      "3702c68ae35ec62ab6e2cc3bb4748a6a05511f0f6492e03361a0038af315a8b6/layer.tar"
    ],
    "RepoTags": [
      "nosquash:latest"
    ],
    "Config": "0b3e1b58bdfa5d72b218b25d01c05b718680f80a346a0d32067150bf256dc47a.json"
  }
]

We can look at the layers by inspecting the image as well.

$ docker inspect nosquash | grep -A 6 Layers
            "Layers": [
                "sha256:38ac8d0f5bb30c8b742ad97a328b77870afaec92b33faf7e121161bc78a3fec8",
                "sha256:6fad774884880a017688b2595c0f262451fd411eab78e3055bfb4f9ec2b647b2",
                "sha256:c552187b79dc6cb16254ea03a9bb1da4555d224958a8a84c390fa1271ba818d1",
                "sha256:0b33cdff4f88daba608841eb711f3aca00dd14bdce17f83ef87e7f8dc38cdc67",
                "sha256:1ef2f5783216dcaf10da9c2041002155f74044e62c6aaf150eb36e819881b776"
            ]

Let's look at the layers in the parent busybox image:

$ docker inspect busybox:latest | grep -A 5 Layers
            "Layers": [
                "sha256:38ac8d0f5bb30c8b742ad97a328b77870afaec92b33faf7e121161bc78a3fec8"
            ]

From the above output, we can see the parent busybox image has 1 layer; the remaining 4 layers are from the new image we created.

To illustrate the difference when squash is enabled, let's build the image with the --squash option:

docker build --squash -t squash .
docker save squash -o squashimage.tar

If we look at the container image history, we can see that all new layers have been merged with the parent as shown below:

$ docker history squash
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
71219000b4b3        3 weeks ago                                                         12 B                merge sha256:0b3e1b58bdfa5d72b218b25d01c05b718680f80a346a0d32067150bf256dc47a to sha256:7968321274dc6b6171697c33df7815310468e694ac5be0ec03ff053bb135e768
<missing>           3 weeks ago         /bin/sh -c rm /remove_me                        0 B                 
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV HELLO=world              0 B                 
<missing>           3 weeks ago         /bin/sh -c touch remove_me /remove_me           0 B                 
<missing>           3 weeks ago         /bin/sh -c echo world >> /hello                 0 B                 
<missing>           3 weeks ago         /bin/sh -c echo hello > /hello                  0 B                 
<missing>           4 weeks ago         /bin/sh -c #(nop)  CMD ["sh"]                   0 B                 
<missing>           4 weeks ago         /bin/sh -c #(nop) ADD file:707e63805c0be1a...   1.11 MB           

If we look at the size of the squashed image, we can see that it is smaller than the unsquashed image:

$ ls -l */*.tar
-rw------- 1 sreeni sreeni 1341952 Jan 22 22:09 nosquash/nosquashimage.tar
-rw------- 1 sreeni sreeni 1327616 Jan 22 22:09 squash/squashimage.tar

If we look at the manifest from “squashimage.tar”, we can see that there are only 2 layers present:

$ cat manifest.json |jq .
[
  {
    "Layers": [
      "4a34e9cef720c233c8b544b494dbb553536a6b5bbf4441fb62b30b5cf2bad895/layer.tar",
      "a1cf4d557ddf29e39f0d269bb93b0132cf1a258087aa73d27daa9db06207bd7a/layer.tar"
    ],
    "RepoTags": [
      "squash:latest"
    ],
    "Config": "71219000b4b3155ed6544b2b83d0b881f91d1b14372b4f61b8674d8c1b65b6aa.json"
  }
]

We can also look at the layers by inspecting the image.

$ docker inspect squash | grep -A 3 Layers
            "Layers": [
                "sha256:38ac8d0f5bb30c8b742ad97a328b77870afaec92b33faf7e121161bc78a3fec8",
                "sha256:c552187b79dc6cb16254ea03a9bb1da4555d224958a8a84c390fa1271ba818d1"
            ]

From the above example, we can see that the nosquash image has 5 layers, of which 1 comes from the parent image. The squash image has 2 layers, of which 1 comes from the parent image. The --squash option has merged the other 4 layers into 1.
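
As a quick sanity check that squashing preserves the final filesystem contents, we can run the squashed image; given the Dockerfile above, /hello should contain both lines and /remove_me should be absent:

docker run --rm squash cat /hello        # prints "hello" then "world"
docker run --rm squash ls /remove_me     # fails: the file was removed during the build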

“docker history” and “docker inspect” show different numbers of layers because history lists every Dockerfile instruction as a separate entry, while metadata-only instructions such as ENV and CMD do not produce filesystem layers of their own. To check the actual number of layers, we need to use “docker inspect” or look at “manifest.json”. In the above example, we also saw “<missing>” entries in the docker history output. The “<missing>” tag is not an issue: since Docker 1.10, image IDs and layer IDs mean different things, and image IDs are not preserved when the image is not built locally. There is a detailed blog that explains this very clearly.

Checkpoint and restore

The checkpoint and restore feature allows the runtime state of a Docker container to be persisted. This is different from Docker persistent storage using volumes: volumes are used for persisting files and databases, while checkpoint and restore persists the process state inside the container. This makes it possible to preserve container state when the host is rebooted, when a container is moved across hosts, or when a container is stopped and restarted.
The checkpoint and restore experimental feature uses a tool called CRIU (Checkpoint/Restore In Userspace). CRIU needs to be installed to try out this feature.

I used the following steps to install CRIU on Ubuntu 16.04:

apt-get update
apt-get install libnet1-dev
git clone https://github.com/xemul/criu
sudo apt-get install --no-install-recommends git build-essential libprotobuf-dev libprotobuf-c0-dev protobuf-c-compiler protobuf-compiler python-protobuf libnl-3-dev libpth-dev pkg-config libcap-dev asciidoc
apt-get install asciidoc xmlto
cd criu
make
make install

To check that CRIU is installed correctly, we can run the following:

# criu check
Warn  (criu/autofs.c:79): Failed to find pipe_ino option (old kernel?)
Looks good.

To illustrate the feature, we can start a busybox container that runs a loop printing an incrementing counter. When we checkpoint the container and restore it, we can see that it resumes from the state where it left off:

docker run --security-opt=seccomp:unconfined --name cr -d busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
docker checkpoint create cr checkpoint1
docker start --checkpoint checkpoint1 cr

We can use docker logs to confirm that the restarted container resumes from the saved state, as sketched below.
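
For example (a sketch; the exact counter values depend on how long the container ran before the checkpoint):

docker checkpoint ls cr      # list checkpoints saved for the container
docker logs --tail 5 cr      # the count should continue rather than restart from 0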

For VMs, vMotion is a very important feature that allows a VM to be moved without stopping it. It is not clear whether seamless container movement across hosts is as important, since applications built on containers are expected to spawn new containers to handle failures rather than preserve runtime state inside a container. Still, there are scenarios where checkpoint and restore functionality for containers can be useful.

Docker metrics in Prometheus format

Prometheus is an open source monitoring solution. With this experimental feature, Docker exposes metrics in Prometheus format for basic container, image, and daemon operations. Many more container metrics are expected to be exposed in the future.

There are 2 components in a typical Prometheus setup. The first is an exporter, which runs on each node and exposes metrics in Prometheus format; here the Docker daemon itself plays that role. The second is the Prometheus server, which scrapes the metrics from each node and crunches the data into meaningful content. Prometheus can also integrate with visualization tools like Grafana, which read the data collected by the Prometheus server.

I used the following systemd conf file to enable the metrics endpoint on my Ubuntu 16.04 system:

# cat /etc/systemd/system/docker.service.d/docker.conf 
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --experimental=true --metrics-addr=0.0.0.0:4999

The Docker daemon has to be restarted after this:

sudo systemctl daemon-reload
sudo systemctl restart docker

Following is a sample output from the metrics endpoint, which is exposed on port 4999 of the host machine:

# curl localhost:4999/metrics | more
# HELP engine_daemon_container_actions_seconds The number of seconds it takes to process each container action
# TYPE engine_daemon_container_actions_seconds histogram
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.005"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.01"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.025"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.05"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.1"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.25"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.5"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="1"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="2.5"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="5"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="10"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="+Inf"} 1

To get this data into Prometheus, let's start a Prometheus container with the following configuration file. In the config file below, we specify the targets that Prometheus needs to scrape. The first target is the Prometheus server itself, and the second is the Docker metrics endpoint that we exposed on the host.

# A scrape configuration scraping the Docker metrics endpoint and the
# Prometheus server itself.
scrape_configs:
  # Scrape Prometheus itself every 5 seconds.
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  # Scrape the Docker metrics endpoint every 5 seconds.
  - job_name: 'node'
    scrape_interval: 5s
    static_configs:
      - targets: ['139.59.56.66:4999']

At this point, we can start the Prometheus container using the above config file:

docker run -d -p 9090:9090 -v ~/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus -config.file=/etc/prometheus/prometheus.yml -storage.local.path=/prometheus -storage.local.memory-chunks=10000

Following picture shows the targets page served from the Prometheus endpoint on port 9090. One target is the Prometheus server itself and the other is the Docker metrics endpoint.

[Screenshot: Prometheus targets page]

Following picture shows the count of Docker daemon events, such as container create and delete, in the Prometheus console (table) view.

[Screenshot: Prometheus console view of Docker daemon event counts]

Following picture shows the same Docker daemon event counts in graph view.

[Screenshot: Prometheus graph view of Docker daemon event counts]
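
Since a Prometheus histogram also exposes derived _count and _sum series, queries like the counts shown in the pictures can be written directly against the container-actions metric from the earlier curl output. An illustrative query expression:

sum(rate(engine_daemon_container_actions_seconds_count[5m])) by (action)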

For fancier dashboards, we can use Grafana and connect it to Prometheus. In the above example, we used Prometheus to monitor a standalone Docker node. Prometheus can also be used with Docker Swarm clusters; there is a blog that covers an approach to integrate Prometheus with Docker Swarm so that all nodes in the Swarm cluster are monitored.
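
As a sketch, Grafana itself can run as a container and be pointed at the Prometheus server as a data source (default ports assumed; dashboard setup happens in the Grafana UI):

docker run -d -p 3000:3000 --name grafana grafana/grafana
# then browse to port 3000 and add http://<host>:9090 as a Prometheus data source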

At this point, the Prometheus metrics exposed by Docker are minimal. There is a plan to support metrics for all Docker subsystems in the future.
