Docker Experimental Networking – 3

This blog is a continuation of my previous blog on Docker Experimental Networking. In this blog, I will walk through the example mentioned in this link, where experimental Docker is integrated with Compose and Swarm, along with some modifications I have made.

In this example, I will create two applications using Docker Compose:

  1. A counter container connecting to a redis container, with the two containers running on different hosts.
  2. A WordPress container connecting to a mysql container, with the two containers running on different hosts.

I have used AWS instead of DigitalOcean. The first step is to create the Consul machine and start the Consul server:

docker-machine create --driver=amazonec2 --amazonec2-access-key=xxx --amazonec2-secret-key=xxx --amazonec2-vpc-id=vpc-5f77c23a --amazonec2-region=us-west-2 --engine-install-url "https://experimental.docker.com" consul

docker $(docker-machine config consul) run -d \
    -p "8500:8500" \
    -h "consul" \
    progrium/consul -server -bootstrap

Next, create two machines, connecting both nodes to the Consul key-value store:

docker-machine --debug create \
    -d amazonec2 --amazonec2-access-key=xxx --amazonec2-secret-key=xxx --amazonec2-vpc-id=vpc-5f77c23a --amazonec2-region=us-west-2 \
    --engine-install-url="https://experimental.docker.com" \
    --engine-opt="default-network=overlay:multihost" \
    --engine-opt="kv-store=consul:$(docker-machine ip consul):8500" \
    --engine-label="com.docker.network.driver.overlay.bind_interface=eth0" \
    swarm-0

docker-machine --debug create \
    -d amazonec2 --amazonec2-access-key=xxx --amazonec2-secret-key=xxx --amazonec2-vpc-id=vpc-5f77c23a --amazonec2-region=us-west-2 \
    --engine-install-url="https://experimental.docker.com" \
    --engine-opt="default-network=overlay:multihost" \
    --engine-opt="kv-store=consul:$(docker-machine ip consul):8500" \
    --engine-label="com.docker.network.driver.overlay.bind_interface=eth0" \
    --engine-label="com.docker.network.driver.overlay.neighbor_ip=$(docker-machine ip swarm-0)" \
    swarm-1

We need to update the kernel on the AWS Ubuntu swarm-0 and swarm-1 machines, since the experimental overlay driver needs a newer kernel than the stock Ubuntu 14.04 one:

sudo apt-get install linux-generic-lts-vivid

After this, reboot both nodes.
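Before and after the reboot, it is worth checking the running kernel version. Below is a minimal sketch; the `kernel_ok` helper and the 3.16 threshold are my own assumptions (the experimental overlay driver was documented as needing a reasonably recent kernel, and the `linux-generic-lts-vivid` package brings in a 3.19 kernel):

```shell
# kernel_ok VERSION — succeeds when VERSION >= 3.16 (assumed overlay
# driver requirement); uses sort -V for version-aware comparison.
kernel_ok() {
    req="3.16"
    [ "$(printf '%s\n%s\n' "$req" "$1" | sort -V | head -n1)" = "$req" ]
}

# Check the kernel on the current machine:
if kernel_ok "$(uname -r | cut -d- -f1)"; then
    echo "kernel is new enough for the overlay driver"
else
    echo "kernel too old: install linux-generic-lts-vivid and reboot"
fi
```

Run this on both swarm-0 and swarm-1 after the reboot; the stock 3.13 kernel fails the check, the vivid LTS kernel passes it.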

Next, we start Swarm on one of the nodes and make both nodes part of the Swarm cluster:

export SWARM_TOKEN=$(docker run swarm create)

docker $(docker-machine config swarm-0) run -d \
    --restart="always" \
    --net="bridge" \
    swarm:latest join \
        --addr "$(docker-machine ip swarm-0):2376" \
        "token://$SWARM_TOKEN"

docker $(docker-machine config swarm-0) run -d \
    --restart="always" \
    --net="bridge" \
    -p "3376:3376" \
    -v "/etc/docker:/etc/docker" \
    swarm:latest manage \
        --tlsverify \
        --tlscacert="/etc/docker/ca.pem" \
        --tlscert="/etc/docker/server.pem" \
        --tlskey="/etc/docker/server-key.pem" \
        -H "tcp://0.0.0.0:3376" \
        --strategy spread \
        "token://$SWARM_TOKEN"

docker $(docker-machine config swarm-1) run -d \
    --restart="always" \
    --net="bridge" \
    swarm:latest join \
        --addr "$(docker-machine ip swarm-1):2376" \
        "token://$SWARM_TOKEN"

Let's look at the containers running on the swarm-0 and swarm-1 nodes. The swarm-0 node runs the Swarm master.
swarm-0:

ubuntu@swarm-0:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                              NAMES
7bd46484ca81        swarm:latest        "/swarm manage --tls   19 seconds ago      Up 19 seconds       2375/tcp, 0.0.0.0:3376->3376/tcp   focused_banach      
a5618c5b8447        swarm:latest        "/swarm join --addr    24 seconds ago      Up 24 seconds       2375/tcp                           insane_swartz       

swarm-1:

ubuntu@swarm-1:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
930f724670fd        swarm:latest        "/swarm join --addr    16 seconds ago      Up 14 seconds       2375/tcp            grave_aryabhata     

Since swarm-0 is the master, we can point docker at it for creating containers. The Swarm manager will take care of scheduling containers across both hosts.
Let's set the docker environment on the host machine to point to swarm-0:

export DOCKER_HOST=tcp://"$(docker-machine ip swarm-0):3376"
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/swarm-0"
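The same three exports can be generated by a small convenience helper. This is just a sketch; the `swarm_env` function is hypothetical, and it assumes the default docker-machine certificate path used above:

```shell
# swarm_env NAME IP — print the exports needed to point docker at a
# Swarm manager listening on port 3376 of machine NAME (hypothetical helper).
swarm_env() {
    name="$1"; ip="$2"
    echo "export DOCKER_HOST=tcp://${ip}:3376"
    echo "export DOCKER_TLS_VERIFY=1"
    echo "export DOCKER_CERT_PATH=\$HOME/.docker/machine/machines/${name}"
}

# Usage: eval "$(swarm_env swarm-0 "$(docker-machine ip swarm-0)")"
swarm_env swarm-0 52.27.220.94
```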

Looking at docker info, we can get more details about the cluster. Below, we can see both nodes that are part of the cluster, the spread strategy, and the number of containers present on each node.

$ docker info
Containers: 3
Images: 2
Role: primary
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 2
 swarm-0: 52.27.220.94:2376
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.018 GiB
  └ Labels: com.docker.network.driver.overlay.bind_interface=eth0, executiondriver=native-0.2, kernelversion=3.13.0-53-generic, operatingsystem=Ubuntu 14.04.2 LTS, provider=amazonec2, storagedriver=aufs
 swarm-1: 52.11.166.154:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.018 GiB
  └ Labels: com.docker.network.driver.overlay.bind_interface=eth0, com.docker.network.driver.overlay.neighbor_ip=52.27.220.94, executiondriver=native-0.2, kernelversion=3.13.0-53-generic, operatingsystem=Ubuntu 14.04.2 LTS, provider=amazonec2, storagedriver=aufs
CPUs: 2
Total Memory: 2.035 GiB
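To pull just the per-node container counts out of this output, a small awk sketch can help; the `summarize_nodes` helper is my own, and it assumes the exact `docker info` layout shown above:

```shell
# summarize_nodes — read `docker info` output on stdin and print
# "<node> <container count>" for each swarm node (assumes the layout above:
# a " swarm-N: <ip>:2376" line followed by a "└ Containers: N" line).
summarize_nodes() {
    awk '/^ swarm-/             { node=$1; sub(":$", "", node) }
         /Containers:/ && node  { print node, $NF; node="" }'
}

# Usage (against the live Swarm manager):
#   docker info | summarize_nodes
```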

I have uploaded the counter web application as the "smakam/counter" image, similar to the one mentioned in the link above. I am using the following docker-compose.yml for the counter application.

web:
  image: smakam/counter
  ports:
   - "80:5000"
redis:
  image: redis

Let's run the Compose application:

$ docker-compose up -d
Creating composetest_web_1...
Pulling redis (redis:latest)...
swarm-0: Pulling redis:latest... : downloaded
swarm-1: Pulling redis:latest... : downloaded
Creating composetest_redis_1...

The Swarm scheduler placed the redis container on swarm-0 and the counter container on swarm-1.

ubuntu@swarm-0:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                              NAMES
52065658ceb5        redis               "/entrypoint.sh redi   10 seconds ago      Up 8 seconds                                           composetest_redis_1     
3621218bee69        swarm:latest        "/swarm manage --tls   16 minutes ago      Up 16 minutes       2375/tcp, 0.0.0.0:3376->3376/tcp   grave_heisenberg        
102131ef8b14        swarm:latest        "/swarm join --addr    16 minutes ago      Up 16 minutes       2375/tcp                           compassionate_perlman   
ubuntu@swarm-1:~$ sudo docker ps 
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
4e7976e09586        smakam/counter      "python app.py"        5 seconds ago       Up 4 seconds                            composetest_web_1   
9422191c67ca        swarm:latest        "/swarm join --addr    16 minutes ago      Up 16 minutes       2375/tcp            modest_bartik              

I was able to access the web counter application at this point using the AWS public IP address from a browser.

Next, I started the following WordPress application:

wordpress:
  image: wordpress
  ports:
   - "8080:80"
  environment:
    WORDPRESS_DB_HOST: "composeword_mysql_1:3306"
    WORDPRESS_DB_PASSWORD: mysql
mysql:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: mysql 

The Swarm manager again scheduled the containers across the two Swarm hosts. The wordpress container got scheduled on swarm-1 and the mysql container on swarm-0.

ubuntu@swarm-0:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                              NAMES
830d996536d7        mysql               "/entrypoint.sh mysq   25 seconds ago      Up 24 seconds                                          composeword_mysql_1     
52065658ceb5        redis               "/entrypoint.sh redi   58 seconds ago      Up 57 seconds                                          composetest_redis_1     
3621218bee69        swarm:latest        "/swarm manage --tls   17 minutes ago      Up 17 minutes       2375/tcp, 0.0.0.0:3376->3376/tcp   grave_heisenberg        
102131ef8b14        swarm:latest        "/swarm join --addr    17 minutes ago      Up 17 minutes       2375/tcp                           compassionate_perlman   
ubuntu@swarm-1:~$ sudo docker ps 
CONTAINER ID        IMAGE               COMMAND                CREATED              STATUS              PORTS               NAMES
3078b2ab8f7e        wordpress           "/entrypoint.sh apac   31 seconds ago       Up 30 seconds                           composeword_wordpress_1   
4e7976e09586        smakam/counter      "python app.py"        About a minute ago   Up About a minute                       composetest_web_1         
9422191c67ca        swarm:latest        "/swarm join --addr    17 minutes ago       Up 17 minutes       2375/tcp            modest_bartik             

At this point, I was able to access the WordPress application using the AWS public IP address.

Now, let's look at the services and networks on both hosts:
swarm-0:

ubuntu@swarm-0:~$ sudo docker network ls
NETWORK ID          NAME                TYPE
823239bbedbb        bridge              bridge              
d27f06b5174d        multihost           overlay             
e8a6c93aaebd        none                null                
5d43482ee66f        host                host                
ubuntu@swarm-0:~$ sudo docker service ls
SERVICE ID          NAME                      NETWORK             CONTAINER
14653cce584f        composetest_redis_1       multihost           52065658ceb5
56094469541e        composeword_wordpress_1   multihost           3078b2ab8f7e
769e7260509c        composeword_mysql_1       multihost           830d996536d7
1602e1b6c38f        composetest_web_1         multihost           4e7976e09586
725f2b69c1c6        compassionate_perlman     bridge              102131ef8b14
9502762bac09        grave_heisenberg          bridge              3621218bee69
537df116e64f        composeword_wordpress_1   bridge              
40b08d3f582f        composeword_mysql_1       bridge              830d996536d7
0e8e64d93f45        composetest_redis_1       bridge              52065658ceb5

swarm-1:

ubuntu@swarm-1:~$ sudo docker network ls
NETWORK ID          NAME                TYPE
aa08afc70c99        none                null                
96e06afb22e6        host                host                
11cecec651b2        bridge              bridge              
d27f06b5174d        multihost           overlay             
ubuntu@swarm-1:~$ sudo docker service ls
SERVICE ID          NAME                      NETWORK             CONTAINER
1602e1b6c38f        composetest_web_1         multihost           4e7976e09586
14653cce584f        composetest_redis_1       multihost           52065658ceb5
56094469541e        composeword_wordpress_1   multihost           3078b2ab8f7e
769e7260509c        composeword_mysql_1       multihost           830d996536d7
aa98e2b5078d        modest_bartik             bridge              9422191c67ca
7291b7dfee8f        composetest_web_1         bridge              4e7976e09586
ad6c0d142859        composetest_redis_1       bridge              
ea12cd517972        composeword_wordpress_1   bridge              3078b2ab8f7e

As we can see, the wordpress container is present in both the bridge and multihost networks. It's in the bridge network to expose its port to the host machine, and in the overlay network to talk to the mysql container. The mysql container is present only in the overlay network.

Here, we have used the Swarm "spread" strategy for scheduling containers. I found the scheduling to be inconsistent, in the sense that sometimes both containers got scheduled on the same host.

Before Docker experimental networking, we would need to use container port linking for containers to talk to each other. Here, we have used service discovery and are able to talk directly to services across hosts using the service name itself. This is a major step forward from a Docker networking perspective.
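For comparison, a single-host version of the counter application using the old links approach might look like the sketch below. With links, both containers must run on the same host, and the web container reaches redis through an alias injected into its environment rather than through overlay service discovery:

```yaml
web:
  image: smakam/counter
  ports:
   - "80:5000"
  links:
   - redis
redis:
  image: redis
```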

3 thoughts on “Docker Experimental Networking – 3”

  1. This is a very good attempt to overcome the current limitations of docker container deployment in large setups. I have a few more suggestions.

    1. Consider launching a service based on a “cluster” of docker containers, where cluster containers can run the same container app but with different roles. To determine the role, containers need to know some key containers in the cluster.

    Example: containers could be running in master or client mode. There could be multiple containers running in master mode for HA/failover. Client-mode containers need to know which “master containers” they can use in the cluster, i.e. they need to know the IP addresses or “hostnames” of these master containers before starting. Therefore, it should be possible to assign/start some containers with “static IPs”.

    2. Ability to reserve VLANs/IP ranges for specific container types, along with a dynamic IP allocation and release management process.

    3. Automatic NAT/SNAT/masquerading/proxy configuration, if these containers need to communicate with external servers which are not part of the containerized setup.

    4. Ability to integrate with Mesos/Marathon/Chronos-like frameworks for easy deployment/management and autoscaling capabilities.

    5. Hostname/IP lookup capability for docker container IPs.

    6. One host may have multiple “private VLANs for docker containers” when a large-capacity server hosts containers of multiple different types for different users/services.

    1. Great points, Ajay.

      I have seen mention of some of the points you raise on the Docker roadmap.
      Item 1 is somewhat similar to Kubernetes labels, and I think it falls to some extent in the orchestration area. Item 4 also falls under the orchestration category, and Docker is working with Mesos.
      Items 2 and 5 seem to be on the networking roadmap; I am not sure about items 3 and 6.

      It is worth mentioning these in the Docker experimental networking feedback section.

      Sreenivas

      1. Thanks, Sreeni. Looking forward to seeing Docker Networking as a complete single-point solution; orchestration may allow integration, but with added effort.
