One of the big features in the Docker 1.12 release is Swarm mode. Docker has offered Swarm for container orchestration since the 1.6 release. Docker also released Swarmkit, an open source project for orchestrating distributed systems, a few weeks before the Docker 1.12 (RC) release. I had some confusion between these three projects. In this blog, I have tried to put my perspective on the similarities and differences between these three software components. I have also created a sample application and deployed it using each of the three approaches, which makes it easier to compare them.
Docker Swarm mode is fundamentally different from Swarm, and reusing the name Swarm is confusing. It would have been good if Docker had given it a different name. Another point adding to the confusion is that the original Swarm functionality will continue to be supported in the 1.12 release; this is done to preserve backward compatibility. In this blog, I use the term "Swarm" to refer to the traditional Swarm functionality, "SwarmNext" to refer to the new Swarm mode added in 1.12, and "Swarmkit" to refer to the plumbing open source orchestration project.
Swarm, SwarmNext and Swarmkit
The following table compares Swarm and SwarmNext:
| Swarm | SwarmNext |
| --- | --- |
| Separate from the Docker Engine; can run as a container | Integrated inside the Docker Engine |
| Needs an external KV store like Consul or etcd | No need for a separate external KV store |
| Service model not available | Service model is available. This provides features like scaling, rolling update, service discovery, load balancing and routing mesh |
| Communication not secure | Both the control and data planes are secure |
| Integrated with Machine and Compose | Not yet integrated with Machine and Compose as of release 1.12; will be integrated in upcoming releases |
The following table compares Swarmkit and SwarmNext:
| Swarmkit | SwarmNext |
| --- | --- |
| Plumbing open source project | Swarmkit is used within SwarmNext and tightly integrated with the Docker Engine |
| Swarmkit needs to be built and run separately | Docker 1.12 comes with SwarmNext integrated |
| No service discovery, load balancing or routing mesh | Service discovery, load balancing and routing mesh available |
| Uses the swarmctl CLI | Uses the regular Docker CLI |
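As a quick illustration of the CLI difference, listing services looks like this in each tool (a sketch; both commands assume a running cluster, and the socket path matches the swarmd setup used later in this post):

```shell
# SwarmNext: services are managed through the regular Docker CLI
docker service ls

# Swarmkit standalone: the swarmctl CLI talks to swarmd over its control socket
export SWARM_SOCKET=/tmp/swarm-01/swarm.sock
swarmctl service ls
```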
Swarmkit has primitives to handle orchestration features like node management, discovery, security and scheduling.
Sample application:
The following is a very simple application: a highly available voting web server that can be accessed from a client. The client's requests get load balanced between the available web servers. The application is created in a custom overlay network. We will deploy it using Swarm, SwarmNext and Swarmkit.
Pre-requisites:
- I have used docker-machine version 0.8.0-rc1 and Docker engine version 1.12.0-rc3.
- The "smakam/myubuntu" container is a regular Ubuntu image plus some additional utilities, like curl, used to illustrate load balancing.
Deployment using Swarm:
Here is a summary of the steps:
- Create a KV store. In this example, I have used Consul.
- Create Docker instances pointing to the KV store. In this example, I have created the instances using Docker Machine.
- Create an overlay network.
- Create multiple instances of the voting web server and a single instance of the client. All web servers need to share the same network alias so that requests from the client can be load balanced between them.
Create KV store:
```
docker-machine create -d virtualbox mh-keystore
eval "$(docker-machine env mh-keystore)"
docker run -d \
  -p "8500:8500" \
  -h "consul" \
  progrium/consul -server -bootstrap
```
Create 2 Docker Swarm instances pointing to KV store:
```
docker-machine create \
  -d virtualbox \
  --swarm --swarm-master \
  --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
  --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  mhs-demo0

docker-machine create -d virtualbox \
  --swarm \
  --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
  --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  mhs-demo1
```
Create overlay network:
```
eval $(docker-machine env --swarm mhs-demo0)
docker network create --driver overlay overlay1
```
Create the services:
Both instances of the voting container have the same alias "vote" so that they can be accessed as a single service.
```
docker run -d --name=vote1 --net=overlay1 --net-alias=vote instavote/vote
docker run -d --name=vote2 --net=overlay1 --net-alias=vote instavote/vote
docker run -ti --name client --net=overlay1 smakam/myubuntu:v4 bash
```
Let's connect to the voting web server from the client container:
```
root@abb7ec6c67fc:/# curl vote | grep "container ID"
Processed by container ID a9c05cd4ee15
root@abb7ec6c67fc:/# curl -i vote | grep "container ID"
Processed by container ID ce94f38fc958
```
As we can see from the above output, requests to the "vote" service get load balanced between "vote1" and "vote2", each returning a different container ID.
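The round-robin behavior can also be checked with a short loop run from inside the client container (a sketch; it assumes the "vote" alias and overlay network set up above are in place):

```shell
# Each request to the "vote" alias is resolved by Docker's embedded DNS,
# which rotates between the containers sharing the net-alias
for i in $(seq 1 4); do
  curl -s vote | grep "container ID"
done
```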
Deploying using SwarmNext:
Here is a summary of the steps:
- Create 2 Docker instances using Docker Machine with the 1.12 RC3 Docker image. Start one node as master and the other as worker.
- Create an overlay network.
- Create the voting web service with 2 replicas and the client service with 1 replica in the overlay network created above.
Create 2 Docker instances:
```
docker-machine create -d virtualbox node1
docker-machine create -d virtualbox node2
```
Setup node1 as master:
```
docker swarm init --listen-addr 192.168.99.100:2377
```
Node1 will also serve as a worker in addition to being master.
Setup node2 as worker:
```
docker swarm join 192.168.99.100:2377
```
Let's look at the running nodes:
```
$ docker node ls
ID                           HOSTNAME  MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
b7jhf7zddv2w2evze1bz44ukx *  node1     Accepted    Ready   Active        Leader
ca4jgzcnyz70ry4h5enh701fv    node2     Accepted    Ready   Active
```
Create overlay network:
```
docker network create --driver overlay overlay1
```
Create services:
```
docker service create --replicas 1 --name client --network overlay1 smakam/myubuntu:v4 ping docker.com
docker service create --name vote --network overlay1 --replicas 2 -p 8080:80 instavote/vote
```
For this example, exposing the port to the host is not strictly needed, but I have done it anyway. Port 8080 gets exposed on both "node1" and "node2" through the routing mesh feature in Docker 1.12.
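The routing mesh can be verified from outside the cluster: hitting port 8080 on either node's IP should reach a vote task, even if that particular node is not running one (a sketch, assuming the docker-machine node names used above):

```shell
# Port 8080 is published cluster-wide by the routing mesh, so both nodes answer
curl -s http://$(docker-machine ip node1):8080 | grep "container ID"
curl -s http://$(docker-machine ip node2):8080 | grep "container ID"
```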
Let's look at the running services:
```
$ docker service ls
ID            NAME    REPLICAS  IMAGE               COMMAND
2rm1svgfxzzw  client  1/1       smakam/myubuntu:v4  ping docker.com
af6lg0cq66bl  vote    2/2       instavote/vote
```
Let's connect to the voting web server from the client container:
```
# curl vote | grep "container ID"
Processed by container ID c831f88b217f
# curl vote | grep "container ID"
Processed by container ID fe4cc375291b
```
From the above output, we can see requests from the client being load balanced across the two web server containers.
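The service model also makes scaling and rolling updates one-liners. As a hedged sketch using the 1.12 CLI (the `:latest` tag here is just an illustration, not an image referenced elsewhere in this post):

```shell
# Scale the vote service from 2 to 4 replicas; the scheduler spreads
# the new tasks across node1 and node2
docker service scale vote=4

# Rolling update to a new image, one task at a time with a 10s delay
docker service update --update-parallelism 1 --update-delay 10s \
  --image instavote/vote:latest vote
```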
Deploying using Swarmkit:
Here is a summary of the steps:
- Create a 2-node cluster using Docker Machine. I was able to create a Swarm cluster and use it without a KV store, but for some reason the overlay network did not work without one, so I had to use a KV store for this example.
- Build Swarmkit and export the binaries to Swarm nodes.
- Create Swarm cluster with 2 nodes.
- Create an overlay network and create services in it.
Building Swarmkit:
Here, Swarmkit is built inside a Go container.
```
git clone https://github.com/docker/swarmkit.git
eval $(docker-machine env swarm-01)
docker run -it --name swarmkitbuilder \
  -v `pwd`/swarmkit:/go/src/github.com/docker/swarmkit \
  golang:1.6 bash
cd /go/src/github.com/docker/swarmkit
make binaries
```
Create Docker instances with KV store:
```
docker-machine create \
  -d virtualbox \
  --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  swarm-01

docker-machine create -d virtualbox \
  --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  swarm-02
```
Export Swarmkit binaries to the nodes:
```
docker-machine scp bin/swarmd swarm-01:/tmp
docker-machine scp bin/swarmctl swarm-01:/tmp
docker-machine ssh swarm-01 sudo cp /tmp/swarmd /tmp/swarmctl /usr/local/bin/
docker-machine scp bin/swarmd swarm-02:/tmp
docker-machine scp bin/swarmctl swarm-02:/tmp
docker-machine ssh swarm-02 sudo cp /tmp/swarmd /tmp/swarmctl /usr/local/bin/
```
Create Swarm cluster:
Master:
```
docker-machine ssh swarm-01
swarmd -d /tmp/swarm-01 \
  --listen-control-api /tmp/swarm-01/swarm.sock \
  --listen-remote-api 192.168.99.101:4242 \
  --hostname swarm-01 &
```
Worker (on swarm-02):
```
swarmd -d /tmp/swarm-02 \
  --hostname swarm-02 \
  --listen-remote-api 192.168.99.102:4242 \
  --join-addr 192.168.99.101:4242 &
```
Create overlay network and services:
```
swarmctl network create --driver overlay --name overlay1
swarmctl service create --name vote --network overlay1 --replicas 2 --image instavote/vote
swarmctl service create --name client --network overlay1 --image smakam/myubuntu:v4 --command ping,docker.com
```
The following command shows the 2-node cluster:
```
export SWARM_SOCKET=/tmp/swarm-01/swarm.sock
swarmctl node ls
ID                         Name      Membership  Status   Availability  Manager Status
--                         ----      ----------  ------   ------------  --------------
5uh132h0acqebetsom1z1nntm  swarm-01  ACCEPTED    READY    ACTIVE        REACHABLE *
5z8z6gq36maryzrsy0cmk7f51            ACCEPTED    UNKNOWN  ACTIVE
```
The following commands show a successful connection from the client to the voting web servers:
```
# curl 10.0.0.3 | grep "container ID"
Processed by container ID 78a3e9b06b7f
# curl 10.0.0.4 | grep "container ID"
Processed by container ID 04e02b1731a0
```
In the above output, we have connected using the container IP addresses, since service discovery and load balancing are not integrated with Swarmkit.
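Because Swarmkit itself does not provide discovery, the container IPs used above have to be looked up manually. One way to do this (a sketch; it uses the regular Docker CLI on the node where the tasks are running, since the nodes still share a KV-store-backed overlay network) is to inspect the overlay network:

```shell
# On swarm-01/swarm-02: list the containers attached to overlay1
# along with their IPv4 addresses
docker network inspect overlay1
```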
Issues:
I have raised an issue about the need for a KV store with overlay networks in Swarmkit. This looks like a bug to me, or I might be missing some option.
Summary
SwarmNext (Swarm mode) is a huge improvement over the previous Docker Swarm. Having the service object in Docker makes it easier to provide features like scaling, rolling update, service discovery, load balancing and routing mesh. This also helps Swarm catch up on some Kubernetes-like features. Docker supports both SwarmNext and Swarm in release 1.12 so that production users who have deployed Swarm won't be affected by the upgrade. SwarmNext does not have all functionalities at this point, including integration with Compose and storage plugins; these will be added soon. In the long run, I feel that Swarm will get deprecated and SwarmNext will become the only orchestration mode in Docker. Having Swarmkit as an open source project allows it to be developed independently, and anyone building an orchestration system for distributed applications can use it as a standalone module.