
Compare Docker for Windows options

As part of DockerCon 2017, there was an announcement that Linux containers can run as Hyper-V containers on Windows Server. This announcement made me take a deeper look into Windows containers. Until now, I have worked mostly with Linux containers, and on Windows I have mostly used Docker Machine or Docker Toolbox. I recently tried out the other methods to deploy containers on Windows. In this blog, I will cover the different methods to run containers on Windows, the technical internals behind them and a comparison between them. I have also covered Windows Docker base images and my experiences trying the different methods to run Docker containers on Windows. The three methods that I am covering are Docker Toolbox/Docker Machine, Windows native containers and Hyper-V containers.

Docker Toolbox

Docker Toolbox runs the Docker engine on top of a boot2docker VM image running in the VirtualBox hypervisor. We can run Linux containers on top of this Docker engine. I have written a few blogs (1, 2) about Docker Toolbox before. Docker Toolbox runs on any Windows variant.
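
For reference, this is a minimal sketch of the Toolbox/Docker Machine workflow in the bash shell that Toolbox provides; the machine name "default" is just the conventional one:

docker-machine create --driver virtualbox default
eval $(docker-machine env default)   # point the Docker CLI at the new VM
docker run hello-world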

Windows Native containers

Folks familiar with Linux containers know that they use Linux kernel features like namespaces and cgroups. To containerize Windows applications, the Docker engine for Windows needs to use the corresponding Windows kernel features. Microsoft worked with Docker to make this happen, and changes were made on both the Docker and the Windows side. This mode allows Windows containers to run directly on Windows Server 2016, which has the necessary container primitives built in. Going forward, Microsoft will port this functionality to other flavors of Windows.

hyper-v containers

A Windows Hyper-V container is a Windows Server container that runs in a VM. Every Hyper-V container creates its own VM, which means there is no kernel sharing between different Hyper-V containers. This is useful for customers who need an additional level of isolation and are not comfortable with the traditional kernel sharing done by containers. The same Docker images and CLI can be used to manage Hyper-V containers; creating a Hyper-V container is just a runtime option, and there is no difference in building or managing containers between Windows Server containers and Hyper-V containers. Startup time for a Hyper-V container is higher than for a Windows native container, since a new lightweight VM gets created each time. One common question that comes up is: how is a Hyper-V container different from running a container inside a general-purpose VM on VirtualBox or the Hyper-V hypervisor? Following are some differences as I see it:

  • A Hyper-V container is very lightweight. This is because of the lightweight OS and other optimizations.
  • Hyper-V containers do not appear as VMs inside Hyper-V and cannot be managed by the regular Hyper-V tools.
  • The same Docker CLI can be used to manage Hyper-V containers. To some extent this is true with Docker Toolbox and Docker Machine as well, but with Hyper-V containers it is more integrated and becomes a single-step process.

There are two modes of Hyper-V containers.

  1. Windows Hyper-V container – Here, the Hyper-V container runs on top of a Windows kernel. Only Windows containers can be run in this mode.
  2. Linux Hyper-V container – Here, the Hyper-V container runs on top of a Linux kernel. This mode was not available earlier and was introduced as part of DockerCon 2017. Any Linux flavor can be used as the base kernel, and Docker's LinuxKit project can be used to build the Linux kernel needed for the Hyper-V container. Only Linux containers can be run in this mode.

We cannot use Docker Toolbox and Hyper-V containers at the same time: VirtualBox cannot run when “Docker for Windows” is installed, because the Hyper-V role it enables conflicts with VirtualBox.

The following picture illustrates the different Windows container modes:

[Image: windows_container_types]

The following table captures the differences between the Windows container modes:

| Feature | Toolbox | Windows native container | Hyper-V container |
| --- | --- | --- | --- |
| OS type | Any Windows flavor | Windows Server 2016 | Windows 10 Pro, Windows Server 2016 |
| Hypervisor/VM | VirtualBox hypervisor | No separate VM for the container | VM runs inside Hyper-V |
| Windows containers | Not possible | Yes | Possible in Windows Hyper-V container mode |
| Linux containers | Yes | Not possible | Possible in Linux Hyper-V container mode |
| Startup time | Highest of the three options | Lowest of the three options | Between Toolbox and Windows native containers |

Hands-on

If you are using Windows 10 Pro or Windows Server 2016, you can install Docker for Windows from here. This installs the Docker CE version and runs Docker for Windows in Hyper-V mode. We can install from either the stable or the edge channel. Docker for Windows was earlier available only for Windows 10; the edge channel added support for Windows Server 2016 just recently. Once “Docker for Windows” is installed, we can switch between Linux and Windows mode with the click of a button. As of now, Linux mode uses MobyLinuxVM; this will change later to the Linux Hyper-V container mode. In order to run Hyper-V containers, the Hyper-V role has to be enabled in Windows. If the Windows host is itself a Hyper-V virtual machine, nested virtualization needs to be enabled before installing the Hyper-V role. For more details, please refer to these two references (1, 2). As shown in the referenced example, we can start a Hyper-V container by just specifying a runtime option in Docker:

docker run -it --isolation=hyperv microsoft/nanoserver cmd

If you are using Windows Server 2016, Docker EE can be installed using the procedure here. For Windows Server 2016, I would advise using Docker EE rather than Hyper-V containers.

I have tried Docker Toolbox on Windows 7 Enterprise. Docker Toolbox can be run on any version of Windows, and its installer also installs VirtualBox if it is not already present. Docker Toolbox can be installed from here. For a Docker Toolbox hands-on example, please refer to my earlier blog here.

I tried out Windows native containers and Hyper-V containers in the Azure cloud. After I created a Windows Server 2016 VM, I used the following commands to install the Docker engine. These commands have to be executed from PowerShell in administrator mode.

# Install the Docker provider module from the PowerShell Gallery
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
# Install the Docker engine and client packages
Install-Package -Name docker -ProviderName DockerMsftProvider
# Reboot to complete the installation
Restart-Computer -Force

Following are some example Windows containers I tried:

docker run microsoft/dotnet-samples:dotnetapp-nanoserver
docker run -d --name myIIS -p 80:80 microsoft/iis
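
To verify them, docker ps and docker inspect can be used. Note that older Windows NAT implementations do not support reaching a published port via localhost from the host itself, so the container's internal IP on the default nat network comes in handy:

docker ps
docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" myIIS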

Since Azure uses a hypervisor to host its compute VMs and nested virtualization is not supported in Azure, Docker for Windows cannot be used with Windows Server 2016 in Azure. I got the following error when I started “Docker for Windows” in Linux mode:

Unable to write to the database. Exit code: 1
   at Docker.Backend.ContainerEngine.Linux.DoStart(Settings settings) in C:\gopath\src\github.com\docker\pinata\win\src\Docker.Backend\ContainerEngine\Linux.cs:line 243
   at Docker.Backend.ContainerEngine.Linux.Start(Settings settings) in C:\gopath\src\github.com\docker\pinata\win\src\Docker.Backend\ContainerEngine\Linux.cs:line 120
   at Docker.Core.Pipe.NamedPipeServer.<>c__DisplayClass8_0.b__0(Object[] parameters) in C:\gopath\src\github.com\docker\pinata\win\src\Docker.Core\pipe\NamedPipeServer.cs:line 44
   at Docker.Core.Pipe.NamedPipeServer.RunAction(String action, Object[] parameters) in C:\gopath\src\github.com\docker\pinata\win\src\Docker.Core\pipe\NamedPipeServer.cs:line 140

I was still able to use Hyper-V containers in Windows mode on Windows Server 2016 in Azure. I am still not fully clear on how this mode overcame the nested virtualization problem.

From an Azure perspective, I would like to see these changes from Microsoft:

  • Azure supporting nested virtualization.
  • Allowing Windows 10 in Azure without an MSDN subscription.

There was an announcement earlier this week at the Microsoft Build conference that Azure will support nested virtualization in selected VM sizes. This is very good news.

Windows base images

Every container has a base image that contains the needed packages and libraries. Windows containers support two base images:

  1. microsoft/windowsservercore – a full-blown Windows Server image with full .NET Framework support. The size is around 9 GB.
  2. microsoft/nanoserver – a minimal Windows Server image with .NET Core support. The size is around 600 MB.
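
As a rough sketch, a Windows Dockerfile looks just like a Linux one, only with a Windows base image; app.exe here is a hypothetical application binary:

FROM microsoft/nanoserver
COPY app.exe /app/
CMD ["C:\\app\\app.exe"]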

The following picture from here shows the compatibility between the Windows Server OS, the container type and the container base image.

[Image: baseimage]

As we can see from the picture, with Hyper-V containers we can use only the nanoserver container base image.

FAQ

Can I run Linux containers in Windows?

  • The answer depends on which Docker for Windows mode you are using. With Toolbox and Linux Hyper-V containers, Linux containers can be run on Windows. With the Windows native container mode, Linux containers cannot be run.

Which Docker for Windows mode should I use?

  • For development purposes, if there is a need to use both Windows and Linux containers, Hyper-V containers can be used. For production purposes, we should use Windows native containers; if better kernel isolation is needed for additional security, Hyper-V containers can be used. If you have a version of Windows that is neither Windows 10 nor Windows Server 2016, Docker Toolbox is the only option available.

Can we run Swarm mode and Overlay network with Windows containers?

  • Swarm mode support was added recently for Windows containers. Multiple containers across Windows hosts can talk over the overlay network. This needs a Windows Server update, as mentioned in the link here. The same link also talks about a mixed-mode Swarm cluster with Windows and Linux nodes, where a mix of Windows and Linux containers can talk to each other over the Swarm cluster. Using Swarm's constraint-based scheduling, we can place Windows containers on Windows nodes and Linux containers on Linux nodes, as shown in the sketch below.
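
A rough sketch of that placement using node labels; the node name win-node1 and the label are made up for illustration:

docker node update --label-add os=windows win-node1
docker service create --name iis --constraint "node.labels.os == windows" -p 80:80 microsoft/iis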

Is there an additional Docker EE license needed for Windows Server 2016?

  • According to the article here, it is not needed, but it is better to check as this might change. Obviously, Windows licensing has to be taken care of separately.


Kubernetes CRI and Minikube

Kubernetes CRI (Container Runtime Interface) was introduced in experimental mode in the Kubernetes 1.5 release. CRI introduces a common container runtime layer that allows the Kubernetes orchestrator to work with multiple container runtimes like Docker, rkt, runc and Hypernetes, and makes it easy to plug a new container runtime into Kubernetes. The Minikube project simplifies Kubernetes installation for development and testing purposes by running the Kubernetes master and worker components in a single VM, which makes it easy for developers and users of Kubernetes to try it out. In this blog, I will cover the basics of Minikube usage, an overview of CRI and the steps to try out CRI with Minikube.

Minikube

Kubernetes is composed of multiple components, and beginners normally get overwhelmed by the installation steps. It also helps to have a lightweight Kubernetes environment for development and testing purposes. Minikube runs all Kubernetes components in a single VM on the local laptop, with both master and worker functionality combined in that single VM.
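
For example, a basic Minikube session (assuming minikube and kubectl are already installed) looks like this:

minikube start
kubectl get nodes
minikube dashboard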

Following are some major features present in Minikube:

Continue reading Kubernetes CRI and Minikube

Docker 1.13 Experimental features

Docker 1.13 was released last week. Some of the significant new features include Compose support to deploy Swarm mode services, backward compatibility between Docker client and server versions, Docker system commands to manage the Docker host, and a restructured Docker CLI. In addition to these major features, Docker introduced a bunch of experimental features in the 1.13 release. In every release, Docker introduces a few new experimental features. These are features that are not yet ready for production purposes; Docker puts them out in experimental mode to collect feedback from its users and make modifications before the feature gets officially released in a subsequent release. In this blog, I will cover the experimental features introduced in Docker 1.13.

Following are the regular features introduced in Docker 1.13:

  • Deploying a Docker stack on a Swarm cluster with Docker Compose.
  • Docker CLI backward compatibility with the Docker daemon. This allows a newer Docker CLI to talk to older Docker daemons.
  • New Docker CLI groupings like “docker container” and “docker image” that collect related commands under a sub-command.
  • Docker system details using “docker system” – this helps in maintaining the Docker host, cleaning up and getting container usage details (see the sketch after this list).
  • Docker secret management.
  • docker build with a compress option for slow connections.
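
As a quick illustration of the new CLI groupings and system commands in 1.13:

docker system df      # disk usage by images, containers and volumes
docker system prune   # remove unused data
docker container ls   # grouped form of "docker ps"
docker image ls       # grouped form of "docker images"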

Following are the five features introduced in experimental mode in Docker 1.13:

  • An experimental daemon flag to enable experimental features instead of having a separate experimental build.
  • A “docker service logs” command to view the logs of a Docker service. This is needed in Swarm mode.
  • An option to squash image layers to the base image after successful builds (see the sketch after this list).
  • Checkpoint and restore support for containers.
  • Metrics (Prometheus) output for basic container, image, and daemon operations.
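
A hedged sketch of the squash and checkpoint features; both require the daemon's experimental flag, and the image and container names here are made up:

docker build --squash -t myimage .                 # squash layers after a successful build
docker checkpoint create mycontainer checkpoint1   # checkpoint a running container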

Experimental Daemon flag

Docker released experimental features prior to the 1.13 release as well, but users needed to download a separate experimental build to try them out. To avoid this unnecessary overhead of maintaining different builds, Docker introduced an experimental flag for the Docker daemon so that users can start the daemon with or without experimental features. As of the 1.13 release, the experimental flag itself is in experimental mode.

By default, the experimental flag is turned off. To see whether experimental features are enabled, check the Docker version output, as in the sketch below.
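
A minimal sketch, assuming the daemon is started manually rather than via a service manager:

dockerd --experimental &
docker version -f '{{.Server.Experimental}}'   # prints "true" when the flag is on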

Continue reading Docker 1.13 Experimental features

Docker in Docker and play-with-docker

For folks who want to get started with Docker, there is the initial hurdle of installing it. Even though Docker has made it extremely simple to install Docker on different OSes like Linux, Windows and Mac, the installation step still prevents some folks from getting started. With Play with Docker, that problem also goes away. Play with Docker provides a web-based interface to create multiple Docker hosts and run containers on them. This open source project was started by Docker captain Marcos Nils. Users can run regular containers, or build a Swarm cluster between the Docker hosts and create container services on the Swarm cluster. The application can also be installed on a local machine. This project got me interested in understanding the internals of the Docker hosts used within the application; it turns out they are implemented as Docker in Docker (Dind) containers. In this blog, I have tried to cover some details on Dind and Play with Docker.

Docker in Docker(Dind)

Docker in Docker (Dind) allows the Docker engine to run as a container inside Docker. This link is the official repository for Dind; when a new Docker version is released, a corresponding Dind version also gets released. This link from Jerome is an excellent reference on Docker in Docker that explains the issues with Dind, the cases where Dind can be used and the cases where it should not be used.
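
As a minimal sketch, this is how a Dind container can be started with the dind variant of the official docker image; the container name mydind is arbitrary, and --privileged is required because the inner Docker daemon needs extended privileges:

docker run --privileged -d --name mydind docker:dind
docker exec -it mydind docker version   # talk to the inner Docker engine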

Following are the two primary scenarios where Dind may be needed:

Continue reading Docker in Docker and play-with-docker

Vault – Use cases

This blog is a continuation of my previous blog on Vault. In the first blog, I covered an overview of Vault. In this blog, I will cover some Vault use cases that I tried out.

Prerequisites:

Install and start Vault

I have used the Vault 0.6 version for the examples here. Vault can run in either development or production mode. In development mode, Vault is unsealed by default and secrets are stored only in memory. In production mode, Vault needs manual unsealing and supports backends like Consul and S3.

Start Vault server:

The following command starts the Vault server in development mode. We need to note down the root token that gets printed, as it will be used later.
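
This is the gist of it; dev mode prints the unseal key and root token at startup:

vault server -dev
# in another shell, point the Vault client at the dev server
export VAULT_ADDR='http://127.0.0.1:8200'
vault status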

Continue reading Vault – Use cases

Service Discovery and Load balancing Internals in Docker 1.12

The Docker 1.12 release has revamped its support for service discovery and load balancing. Prior to the 1.12 release, support for both was pretty primitive in Docker. In this blog, I have covered the internals of service discovery and load balancing in Docker release 1.12, including DNS based load balancing, VIP based load balancing and the routing mesh.

Technology used

Docker service discovery and load balancing use the iptables and ipvs features of the Linux kernel. iptables is a packet filtering technology available in the Linux kernel; it can be used to classify, modify and make decisions based on packet content. ipvs is a transport-level load balancer available in the Linux kernel.
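
As a hedged illustration of ipvs in action, the load balancer state can be inspected from inside Docker's ingress network namespace. The namespace name ingress_sbox is what Docker 1.12 created on my hosts and may vary, and ipvsadm has to be installed separately:

sudo ls /var/run/docker/netns/
sudo nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -L -n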

Sample application

Following is the sample application used in this blog:

Continue reading Service Discovery and Load balancing Internals in Docker 1.12

Comparing Swarm, Swarmkit and Swarm Mode

One of the big features in the Docker 1.12 release is Swarm mode. Docker has had Swarm available for container orchestration since the 1.6 release, and it released Swarmkit as an open source project for orchestrating distributed systems a few weeks before the Docker 1.12 (RC) release. I had some confusion between these three projects. In this blog, I have tried to put my perspective on the similarities and differences between these three software components. I have also created a sample application and deployed it using the three approaches, which makes it easier to compare them.

Docker Swarm mode is fundamentally different from Swarm, and it is confusing that the same name is used. It would have been good if Docker had given it a different name. Another point adding to the confusion is that the native Swarm functionality will continue to be supported in the Docker 1.12 release; this is done to preserve backward compatibility. In this blog, I have used the term “Swarm” to refer to the traditional Swarm functionality, “SwarmNext” to refer to the new Swarm mode added in 1.12, and “Swarmkit” to refer to the plumbing open source orchestration project.

Swarm, SwarmNext and Swarmkit

Following table compares Swarm and SwarmNext:

Continue reading Comparing Swarm, Swarmkit and Swarm Mode