Docker-machine for AWS

In this blog, I will use docker-machine to deploy and manage containers in the AWS cloud. This blog is part of my Docker for AWS series and uses the sample voting application for illustration.

Docker-machine has an EC2 driver for creating a Docker node in AWS. A Docker node in this context means an AWS VM instance with Docker pre-installed. With default options, docker-machine picks a t2.micro EC2 instance with Ubuntu 15.10 and installs the latest Docker engine on it. docker-machine also takes care of configuring the appropriate certificates, which allows us to access the AWS VM securely.

If we had to do these steps manually, we would first need to create an EC2 instance with the appropriate OS, install Docker, manage SSH keys, and so on. docker-machine automates this whole process.

Following is a summary of the steps used here:

  • Create Docker nodes in AWS. For this application, I have created 2 Docker nodes.
  • Create appropriate security groups to open the ports needed for communication between nodes in the cluster as well as the ports exposed to the outside world.
  • Create a Swarm cluster by starting the Swarm manager and worker on the individual nodes.
  • Deploy the voting application.
  • Create an AWS ELB to load balance external web requests to the voting service.
  • Access the service from the internal “client” service and externally through the ELB.

Create Docker VM instances

First step is to set up AWS credentials. These can either be specified on the command line or placed in an environment file.
We can put the credentials in the “~/.aws/credentials” file in the following format:

[default]
aws_access_key_id = MY-KEY
aws_secret_access_key = MY-SECRET-KEY

The following commands can be used to create 2 Docker nodes:

docker-machine create --driver amazonec2 aws01
docker-machine create --driver amazonec2 aws02
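The EC2 driver defaults can be overridden with flags if needed. As a sketch, the region, instance type, and node name below are illustrative values, not what was used in this setup:

```shell
# Override the default region and instance type (example values only)
docker-machine create --driver amazonec2 \
    --amazonec2-region us-west-2 \
    --amazonec2-instance-type t2.small \
    aws03
```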

Create Security groups

Before we start using the instances, we need to create a security group to open up some ports that are used by Docker for control traffic.

  • TCP port 2377 is needed for Swarm mode.
  • TCP port 2376 is opened up by default for the Docker API.
  • UDP port 4789 is needed for vxlan overlay traffic to pass through.
  • We also need to open up any ports externally exposed by the service.
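As a sketch, the ports above could be opened with the AWS CLI. The security group name “docker-swarm” and the wide-open CIDR are illustrative assumptions, not values used in this setup:

```shell
# Create a security group (name is an example)
aws ec2 create-security-group --group-name docker-swarm \
    --description "Docker Swarm control and data traffic"

# TCP 2377: Swarm mode cluster management
aws ec2 authorize-security-group-ingress --group-name docker-swarm \
    --protocol tcp --port 2377 --cidr 0.0.0.0/0
# UDP 4789: vxlan overlay network traffic
aws ec2 authorize-security-group-ingress --group-name docker-swarm \
    --protocol udp --port 4789 --cidr 0.0.0.0/0
# TCP 8080: port published by the "vote" service
aws ec2 authorize-security-group-ingress --group-name docker-swarm \
    --protocol tcp --port 8080 --cidr 0.0.0.0/0
```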

Create Swarm cluster

Create the Swarm mode cluster by making the “aws01” node the manager and the “aws02” node a worker. We use “docker swarm init” on the manager node and “docker swarm join” on the worker node.
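As a sketch, the cluster can be formed with the following commands; the IP address and token are placeholders that “docker swarm init” prints out:

```shell
# On the manager node (aws01); advertise the node's private IP
sudo docker swarm init --advertise-addr <manager-ip>

# "swarm init" prints a join command with a worker token; run it on aws02:
sudo docker swarm join --token <worker-token> <manager-ip>:2377
```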

Following output shows the 2 node AWS Docker cluster running in Swarm mode.

ubuntu@aws01:~$ sudo docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
1nbr6mylj55uvzoon5oglbcu5    aws02     Ready   Active
ej5db7uzvaz9zaf8kcgjy5lfd *  aws01     Ready   Active        Leader

Following command shows the running Docker version:

$ docker --version
Docker version 1.12.1, build 23cf638

Deploy application

Following commands create the overlay network and start the “client” and “vote” services.

eval "$(docker-machine env aws01)"
sudo docker network create --driver overlay overlay1
sudo docker service create --replicas 1 --name client --network overlay1 smakam/myubuntu:v4 ping
sudo docker service create --name vote --network overlay1 --replicas 2 -p 8080:80 instavote/vote

The first command sets the environment variables to point the Docker client at the master node, on which we then create the overlay network and the services.
Following command shows the 2 AWS VMs running Docker:

$ aws ec2 --region us-east-1 describe-instances | grep -i instanceid
                    "InstanceId": "i-49ec41d0", 
                    "InstanceId": "i-6518bafc", 

Following command shows the 2 running services:

$ sudo docker service ls
ID            NAME    REPLICAS  IMAGE               COMMAND
5dk0u5gosxjh  client  1/1       smakam/myubuntu:v4  ping
awd79ol658be  vote    2/2       instavote/vote  
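If more replicas are needed, the service can be scaled and Swarm spreads the new tasks across the nodes. The replica count here is just an illustration:

```shell
# Scale the "vote" service from 2 to 3 replicas (example count)
sudo docker service scale vote=3
```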

Create ELB

Next, we will create an ELB to load balance traffic between the 2 EC2 instances. The ELB maps traffic coming in on port 80 to port 8080, where the voting application is exposed on the host nodes.
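As a sketch, the ELB could be created and the instances registered with the AWS CLI. The load balancer name “vote-lb” and the availability zone are illustrative assumptions; the instance IDs are the two Docker nodes shown earlier:

```shell
# Create a TCP load balancer mapping port 80 -> 8080 (name and AZ are examples)
aws elb create-load-balancer --load-balancer-name vote-lb \
    --listeners "Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=8080" \
    --availability-zones us-east-1a

# Register the 2 Docker nodes as backends
aws elb register-instances-with-load-balancer --load-balancer-name vote-lb \
    --instances i-49ec41d0 i-6518bafc
```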
Following ELB output shows the port mapping:

$ aws elb --region us-east-1 describe-load-balancers | grep -i -A 6 "listener"
            "ListenerDescriptions": [
                    "Listener": {
                        "InstancePort": 8080, 
                        "LoadBalancerPort": 80, 
                        "Protocol": "TCP", 
                        "InstanceProtocol": "TCP"

Access voting service through client service
Following commands show how to exec into the “client” container:

$ sudo docker ps
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS               NAMES
0903528aea9a        smakam/myubuntu:v4   "ping"   2 minutes ago       Up 2 minutes                            client.1.0j8k3xie8rpxbmdhl6wcbzc2v
ubuntu@aws01:~$ sudo docker exec -ti 0903528aea9a bash

Following commands show that access to the “vote” application from the “client” container gets load balanced:

# curl -c1 vote | grep -i "container id"
          Processed by container ID fe76b1533811
root@0903528aea9a:/# curl -c1 vote | grep -i "container id"
          Processed by container ID 2eb14b91a3c0

Access voting service through ELB

Following output shows the ELB DNS name and that requests to the ELB DNS name get load balanced between the 2 backend containers.

$ aws elb --region us-east-1 describe-load-balancers | grep DNSName
            "DNSName": "", 
sreeni@ubuntu:~$ curl | grep -i "container id"
          Processed by container ID 685512db730f
sreeni@ubuntu:~$ curl | grep -i "container id"
          Processed by container ID b0080c9e9a68


