For folks who want to get started with Docker, the first hurdle is installing it. Even though Docker has made installation quite simple on Linux, Windows and Mac, the installation step still prevents some folks from getting started. With Play with Docker, that problem goes away. Play with Docker provides a web-based interface to create multiple Docker hosts and run containers on them. The project was started by Docker captain Marcos Nils and is open source. Users can run regular containers, or build a Swarm cluster across the Docker hosts and create container services on it. The application can also be installed on a local machine. The project got me interested in understanding the internals of the Docker hosts used within the application; they are implemented as Docker in Docker (Dind) containers. In this blog, I cover some details on Dind and Play with Docker.
Docker in Docker(Dind)
Docker in Docker (Dind) allows the Docker engine to run as a container inside Docker. This link is the official repository for Dind. When a new Docker version is released, a corresponding Dind version is released as well. This link from Jerome is an excellent reference on Docker in Docker; it explains the issues with Dind, the cases where Dind can be used, and the cases where it should not be used.
Following are the two primary scenarios where Dind is needed:
- Folks developing and testing Docker itself need Docker as a container for a faster turnaround time.
- The ability to create multiple Docker hosts with low overhead. "Play with Docker" falls in this scenario.
The following picture illustrates how containers running inside Dind relate to containers running on the host machine.
Dind, C1 and C2 are containers running on the host machine. Dind is a Docker container hosting its own Docker engine. C3 and C4 are containers running inside the Dind container.
The following example illustrates Dind:
I have the Docker 1.13 RC version as shown below:
$ docker version
Client:
 Version:      1.13.0-rc2
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   1f9b3ef
 Built:        Wed Nov 23 06:24:33 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0-rc2
 API version:  1.25
 Minimum API version: 1.12
 Go version:   go1.7.3
 Git commit:   1f9b3ef
 Built:        Wed Nov 23 06:24:33 2016
 OS/Arch:      linux/amd64
 Experimental: false
Let's start the Dind container. It needs to run in privileged mode since the inner Docker engine needs low-level access to the host's system resources:
docker run --privileged --name dind1 -d docker:1.8-dind
We can look at the Docker version inside the Dind container:
# docker version
Client:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 18:01:15 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 18:01:15 UTC 2015
 OS/Arch:      linux/amd64
Even though the host machine is running Docker 1.13 RC, we can test Docker 1.8.3 inside the container using the above example.
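To actually run containers on the inner engine, we can exec into the Dind container. A minimal sketch, assuming the dind1 container started above is still running (the inner container name inner1 and the alpine image tag are illustrative choices, not from the original post):

```shell
# Pull an image into the inner engine's own image cache and start a
# container there (this corresponds to C3/C4 in the picture above):
docker exec dind1 docker pull alpine:3.4
docker exec dind1 docker run -d --name inner1 alpine:3.4 sleep 300

# The host engine only sees dind1; the inner engine sees inner1:
docker exec dind1 docker ps
```

Note that the inner engine's image cache is separate from the host's, so images have to be pulled again inside Dind.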
For Continuous Integration (CI) use cases, containers need to be built from the CI system. In the case of Jenkins, Docker containers need to be built from the Jenkins master or a Jenkins slave, which themselves run as containers. For this scenario, it is not necessary to run a Docker engine inside the Jenkins container; it is enough to have the Docker client in the Jenkins container and use the Docker engine of the host machine. This can be achieved by mounting "/var/run/docker.sock" from the host machine.
The following diagram illustrates this use case:
Jenkins runs as a container. C1 and C2 are containers started from the host machine. C3 and C4 are Docker containers started from the Docker client inside Jenkins. Since the host's Docker engine is shared with Jenkins, C3 and C4 are created on the same host and share the same hierarchy as C1 and C2.
Following is an example of a Jenkins container that mounts /var/run/docker.sock from the host machine:
docker run --rm --user root --name myjenkins -v /var/run/docker.sock:/var/run/docker.sock -p 8080:8080 -p 50000:50000 jenkins
The following command shows the Docker version inside the Jenkins container:
# docker version
Client:
 Version:      1.13.0-rc4
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   88862e7
 Built:        Sat Dec 17 01:34:17 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0-rc2
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   1f9b3ef
 Built:        Wed Nov 23 06:24:33 2016
 OS/Arch:      linux/amd64
 Experimental: false
1.13.0-rc4 is the Docker client version installed inside Jenkins and 1.13.0-rc2 is the Docker server version running on the Docker host.
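This pattern is not limited to Jenkins: any container that mounts the host's docker.sock can drive the host engine. A sketch using the official docker image (the container name sockclient is an illustrative choice):

```shell
# List the *host's* containers from inside a container. Only the Docker
# client runs in this container; every command travels through the
# mounted socket to the host engine:
docker run --rm --name sockclient \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker docker ps
```

Any container started this way lands beside the other host containers, exactly like C3 and C4 in the diagram. Be aware that mounting the socket effectively grants root on the host.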
“Play with Docker”
The application is hosted in the public cloud and can be accessed as a SaaS service using the following link. It can also be run on a local machine. Following are some capabilities that I have tried:
- Run traditional non-service based containers.
- Create Swarm mode cluster and run services in the Swarm cluster.
- Exposed ports of the services can be accessed either from localhost or externally by tunneling with ngrok.
- Create bridge and overlay networks.
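The network capabilities in the list above can be exercised with commands like the following, run on a Swarm manager node (the network names are illustrative):

```shell
# A local bridge network for standalone containers:
docker network create --driver bridge mybridge

# A multi-host overlay network for Swarm services (requires Swarm mode):
docker network create --driver overlay myoverlay

# Verify that both networks were created:
docker network ls
```

The bridge network is local to one Docker host, while the overlay network spans all nodes of the Swarm cluster.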
Following is a screenshot of the application hosted in the public cloud, where I have created a 5-node Swarm cluster with 2 masters and 3 slaves.
To create a 5-node cluster, the typical approach would be to use 5 different hosts or VMs, which is a huge burden on resources. Using "Play with Docker", we create the 5-node cluster with 5 Dind containers. For non-production testing scenarios, this saves a lot of resources.
Following are some limitations in the SaaS version:
- There is a limit of 5 nodes.
- Sessions are active only for 4 hours.
- The usual Dind limitations apply here.
Let's start a simple web server with 2 replicas:
docker service create --replicas 2 --name web -p 8080:80 nginx
The following output shows the service running:
$ docker service ps web
ID            NAME   IMAGE         NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
dmflqoe67pr1  web.1  nginx:latest  node3  Running        Running 56 seconds ago
md47jcisfbeb  web.2  nginx:latest  node4  Running        Running 57 seconds ago
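Once the service is up, it can be scaled and inspected like any Swarm service. A sketch, assuming the web service created above:

```shell
# Grow the web service from 2 to 3 replicas; the Swarm scheduler places
# the new task on one of the cluster nodes:
docker service scale web=3

# Show where the tasks landed:
docker service ps web

# Show the published port mapping (8080 on every node -> 80 in the task):
docker service inspect --format '{{json .Endpoint.Ports}}' web
```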
The service can be accessed either from the Dind host using curl, or from the internet by tunneling the application with ngrok.
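For the local case, Swarm's routing mesh publishes port 8080 on every node, not just the nodes running the tasks, so a curl from any node's terminal works. A sketch:

```shell
# Fetch the nginx welcome page through the routing mesh; this succeeds
# on any node of the Swarm, even one without a web task:
curl -s http://localhost:8080 | head -5
```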
Following is an example of exposing the service to outside world using ngrok:
docker run --net host -ti jpetazzo/ngrok http 10.0.15.3:8080
This returns a URL that can be accessed from the internet to reach the nginx service we started earlier.
"Play with Docker" can also be installed on a local machine. The advantage here is that we can tweak the application to our needs. For example, we can install a custom Docker version, increase the number of Docker hosts, keep the sessions always up, etc.
Following are some internals of the application:
- The base machine needs to be running Docker 1.13.0-rc2.
- The application is written in Go and runs as a container.
- The Dind containers are not the official Docker Dind container; "franela/dind" is used.
- The Go container that runs the main application does a volume mount of "/var/run/docker.sock". This allows the Dind containers to run on the base machine.
The following picture shows the container hierarchy for this application.
The "Golang" container is in the same hierarchy as the Dind containers. The Dind containers simulate Docker hosts here. C1-C4 are user-created containers on the 2 Docker hosts.
To install "Play with Docker" on my localhost, I followed the steps below:
Installed docker 1.13.0-rc2
git clone https://github.com/franela/play-with-docker.git
installed go1.7
docker swarm init
docker pull franela/dind
cd play-with-docker
go get -v -d -t ./...
export GOPATH=~/play-with-docker
docker-compose up
My localhost is an Ubuntu 14.04 VM running inside a Windows machine.
Following is the 2-node Swarm cluster I created:
$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
p6koe4bo8rn7hz3s4y7eddqwz *  node1     Ready   Active        Leader
yuk6u9r3o6o0nblqsiqjutoa0    node2     Ready   Active
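For reference, a 2-node cluster like this is formed with swarm init/join. A sketch, where the IP address and the token are illustrative placeholders (docker swarm init prints the real join command for your cluster):

```shell
# On node1 (becomes the manager):
docker swarm init --advertise-addr 192.168.0.10

# Print the join command for workers, in case it was lost:
docker swarm join-token worker

# On node2, paste the printed command, e.g.:
docker swarm join --token SWMTKN-1-xxxx 192.168.0.10:2377

# Back on the manager, verify both nodes show up as Ready:
docker node ls
```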
Following are some problems I faced with the local installation:
- When I started docker-compose, the application crashed once in a while. I was able to work around this by restarting docker-compose.
- For Swarm mode services, I was not able to access the exposed service using the host port number. For regular containers, I was able to access the exposed host port.
I did not face the above 2 problems when I accessed the application as SaaS.
Thanks to Marcos Nils for helping me with a few issues I faced during my local installation.
Hi,
We published a docker-on-docker Swarm environment some time ago. We use it during Mentor's Week for an advanced lab. Maybe it could be interesting: http://www.hoplasoftware.com/2016/11/cluster-docker-en-swarm-mode-sobre-docker-on-docker (sorry, it is in Spanish).
I published this work on Docker Community (in english) some days later.
We haven't had any problems with published services.
Regards,
Javier R.
Hi Javier
Thanks for the link. I was able to read it after translating. Your approach is a simpler, lower-overhead way to try out a Swarm cluster if Docker is installed on your node. It's nice that it is being used in your advanced lab. In "play-with-docker", the overall app is itself hosted as SaaS, which avoids the Docker install itself.
Sreenivas
Hi,
I tried to install PWD on my local machine, but it is not working as expected. Can you please help me with the following?
1. I have the Docker host running with some public IP, and I understand the containers run in Dind mode, but how do we make this accessible in the browser?
2. For swarm mode services, I was not able to access exposed service using host port number. For regular containers, I was able to access exposed host port. – Can you brief on this?
Regards
Bharathi S
hi Bharathi
PWD is exposed at port 80 once you start it. You can access it using the IP and port 80.
Regarding the second question, that applies to Swarm mode vs. the traditional way of running containers without the service concept. Btw, I faced this issue only with the local installation and not with the SaaS case. It's possible that the issue is fixed in the latest version.
regards
Sreenivas
“This will return an URL which can be accessed from internet to access the nginx service that we started earlier.”
Clicking the 8080 link will automatically open the URL in a new tab so I don’t think you’ll need ngrok.
Hi Sreenivas,
I am completely new to this Dind concept and I am looking for a solution for my scenario. I am using OpenShift, which in turn calls Jenkins via Jenkins CI, and I am using a Dockerfile to build additional packages. Please find below an example Jenkins pipeline script that I wrote; during execution, I am getting an error. I am sending you the script as well as the error message:
Script:
#!/usr/bin/env groovy
def label = "docker"
def home = "/home/jenkins"
def workspace = "${home}/workspace/build-docker-jenkins"
def workdir = "${workspace}/src/localhost/docker-jenkins/"
def ecrRepoName = "my-jenkins"
def tag = "$ecrRepoName:latest"
podTemplate(label: label,
  containers: [
    containerTemplate(name: 'jnlp', image: 'registry.redhat.io/openshift3/jenkins-slave-base-rhel7:v3.11', args: '${computer.jnlpmac} ${computer.name}'),
    //containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true),
  ],
) {
  node(label) {
    dir(workdir) {
      stage('Checkout') {
        git branch: 'prod-agents', credentialsId: 'jenkins_gitlab_user_ssh', url: 'git@gitlab-sdop.us.corp:sdop-group/jenkins.git'
      }
      stage('Docker Build') {
        container('docker') {
          echo "Building docker image..."
          sh "docker build -t $tag -f jenkins-docker/Dockerfile ."
        }
      }
    }
  }
}
Error:
Error in provisioning; agent=KubernetesSlave name: docker-jqkpg-gwx1x, template=PodTemplate{, name='docker-jqkpg', namespace='dev', label='docker', nodeUsageMode=EXCLUSIVE, containers=[ContainerTemplate{name='jnlp', image='registry.redhat.io/openshift3/jenkins-slave-base-rhel7:v3.11', workingDir='/home/jenkins/agent', args='${computer.jnlpmac} ${computer.name}'}], annotations=[org.csanchez.jenkins.plugins.kubernetes.PodAnnotation@aab9c821]}. Container jnlp. Logs: OPENSHIFT_JENKINS_JVM_ARCH='', CONTAINER_MEMORY_IN_MB='8796093022207', using /usr/lib/jvm/java-11-openjdk-11.0.10.0.9-1.el7_9.x86_64/bin/java
Downloading https://jenkins-sdop.ocpapps.us.dev.corp//jnlpJars/remoting.jar ...
curl: (60) Peer's Certificate issuer is not recognized.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.