Category Archives: Docker

Recent hybrid cloud solutions from Microsoft and VMware – two different ends of the hybrid cloud spectrum

Public clouds have grown tremendously over the last few years, and there are very few companies who do not use the public cloud at this point. Even traditional enterprises with in-house data centers have some presence in the public cloud. I was looking at the details of Amazon’s re:Invent conference and was amazed by the number of new services and enhancements announced this year. It is very difficult for private clouds to keep pace with the new features of the public cloud, and there is no doubt that public clouds will overtake private clouds in the long term. Still, private clouds have wide deployment, and there will be enough use cases to justify them for quite some time. The use cases include regulated industries, compute needed in remote locations without access to the public cloud, and specialized requirements that public clouds cannot meet. For some enterprises, a private cloud can also make more sense from a cost perspective. Having a hybrid cloud option is a safe bet for most companies as it provides the best of both worlds. I saw two recent announcements in hybrid cloud that captured my attention. One is Azure Stack, which allows running the Azure software stack in a private cloud. The other is VMware Cloud on AWS, which allows running the entire VMware stack in the AWS public cloud. I see these two services as the two ends of the hybrid cloud spectrum: in one case, public cloud infrastructure software is made to run on a private cloud (Azure Stack), and in the other, private cloud infrastructure software is made to run on a public cloud (VMware Cloud on AWS). In this blog, I have tried to capture more details on these two services.

Hybrid cloud

There are currently two predominant options to run a private cloud. One option is to use vendor-based cloud management software along with hardware from the same vendor. Cisco UCS is an example in this category, where customers use UCS servers integrated with networking and storage along with management software from Cisco. This provides a tightly integrated solution. The other option is to use OpenStack as the cloud orchestration system and use server, networking and storage hardware from any vendor. Both kinds of solutions work well in a private cloud. For enterprises running a private cloud, there are always use cases where a few services make more sense in the public cloud. A classic example is a development team using the public cloud for one of their development projects for agility reasons. Once development is complete, the operations team has a choice to deploy the application in either the private or the public cloud. There is also the use case where applications currently deployed in the private cloud need to be scaled into the public cloud for elasticity reasons. In either case, we need a solution that allows easy migration of applications, along with their policies, between the private and public cloud.

Following are some important requirements of hybrid cloud:

  • Common management software to manage public and private clouds.
  • Ability to move applications seamlessly between the clouds.
  • Secure connectivity between the clouds.

Microsoft Azure stack

Azure Stack is a hybrid cloud platform product from Microsoft that allows managing a private cloud with the same software stack that powers the Azure public cloud.

The following picture from the Azure Stack link shows the components of Azure Stack:

privatecloud1

Following are some details on the solution:

  • Azure Stack takes some of the components of the Microsoft Azure public cloud and uses them to manage a private cloud. To start with, Azure Stack will support a limited set of services compared to the Azure public cloud.
  • The cloud infrastructure layer is the hardware and basic system software for running compute, storage and networking. In the initial release, Azure Stack will be provided as a turnkey integrated solution with hardware from Dell, HP and Lenovo; it looks like more vendors will be added in the future. The reason for supporting a limited set of vendors is to achieve tight integration and simplify deployment.
  • The Azure infrastructure layer sits on top of the cloud infrastructure layer, and the Azure services layer interacts with the Azure infrastructure layer.
  • The first technical preview was released in early 2016 and the second in late 2016. The GA release is planned for the middle of 2017.
  • The entire Azure Stack currently runs on a single node. The plan is to make this distributed in the future.

Following are some of my general thoughts on Azure stack:

  • Public cloud providers typically have not focused on private clouds since that would eat into their pie. This is a good move by Microsoft to facilitate hybrid cloud and the gradual move to the public cloud.
  • The pricing and licensing model of Azure Stack is not clear. Since the plan is to have a turnkey integrated solution with a few vendors, there has to be some form of licensing agreement between multiple parties.
  • It is not clear how the OEM vendors providing the cloud infrastructure can differentiate their solutions.
  • Having a restricted cloud infrastructure vendor list makes this solution less useful for private clouds built on legacy hardware. It would be good if Azure Stack provided a common API so that any hardware vendor supporting the API could be managed by the Azure Stack cloud software; OpenStack follows this model to some extent. That would remove the restriction on the vendor list.
  • AWS and Google Cloud have not introduced private cloud management solutions so far. As mentioned earlier, there are use cases where access to the public cloud is not possible and a private cloud is a better fit. The AWS Greengrass IoT solution is the closest private-cloud-like solution from AWS that I have seen, where local IoT resources are used for compute when needed.

VMWare cloud on AWS

This solution allows the entire VMware virtualization stack (including compute, storage and networking) to run in the AWS public cloud. The solution is provided by VMware and is a joint collaboration between VMware and Amazon AWS. Enterprises using the VMware stack to manage their private cloud infrastructure can use the same software stack when moving some of their services to the AWS public cloud.

The following picture from the VMware link shows the components of this solution:

privatecloud2

Following are some details on the solution:

  • All the core components of the VMware stack, including vSphere, Virtual SAN, NSX, ESXi and vCenter, run on AWS infrastructure.
  • AWS typically uses the Xen hypervisor for virtualization, while VMware uses ESXi. In this integrated solution, ESXi runs on AWS bare metal; there is no Xen hypervisor involved.
  • vCenter is used for management both on-premise and in AWS. In one of the joint demos, VMware showed seamless VM migration between the on-premise cloud and the AWS cloud.
  • VMs deployed in the AWS public cloud can use all the AWS services like storage, databases, analytics etc. This makes the solution very attractive.
  • The service will be operated and managed by VMware. Both AWS and VMware have made changes to their stacks for this integrated solution.
  • The solution is currently in the technical preview phase, and general availability is expected in the middle of 2017.

Following are some of my general thoughts on this VMware cloud on AWS solution:

  • VMware has tried different strategies to get a foothold in the public and hybrid cloud; vCloud Hybrid Service was one of their earlier unsuccessful attempts at this. While this solution will benefit both VMware and AWS, the bigger benefit lies with AWS.
  • AWS has not sold bare metal servers till now; there are companies like Packet that provide them. There are use cases, like non-virtualized scenarios or pure container-based solutions, where bare metal servers would help. It will be interesting to see if AWS sells bare metal servers in the future. It is not clear why AWS has not provided them so far; one possible reason could be that it would take away some of its differentiators.
  • Microsoft has a private cloud enterprise solution with Hyper-V and a public cloud solution with Azure. Microsoft could provide a similar integrated solution that allows its private cloud stack to run on the Azure public cloud, but it is not clear whether Microsoft will venture into this.

Summary

Both solutions described above are good hybrid cloud solutions that ease the movement to the public cloud. Both favor the public cloud more than the private cloud: even though these solutions help private clouds in the short term, the long-term benefit lies with the public cloud. It would be good to have cloud management software that is cloud agnostic, so that multiple cloud vendors can be used and there is no vendor lock-in. Terraform and CliQr are some solutions catering to this space.


Docker in Docker and play-with-docker

For folks who want to get started with Docker, there is the initial hurdle of installing Docker. Even though Docker has made it extremely simple to install on different OSes like Linux, Windows and Mac, the installation step still prevents some folks from getting started. With Play with Docker, that problem goes away. Play with Docker provides a web-based interface to create multiple Docker hosts and run containers on them. This open source project was started by Docker captain Marcos Nils. Users can run regular containers, or build a Swarm cluster across the Docker hosts and create container services on the Swarm cluster. The application can also be installed on a local machine. The project got me interested in understanding the internals of the Docker hosts used within the application: they are implemented as Docker in Docker (Dind) containers. In this blog, I have tried to cover some details on Dind and Play with Docker.

Docker in Docker(Dind)

Docker in Docker (Dind) allows a Docker engine to run as a container inside Docker. This link is the official repository for Dind; when a new Docker version is released, a corresponding Dind version is also released. This link from Jerome is an excellent reference on Docker in Docker that explains the issues with Dind, cases where Dind can be used and cases where it should not be used.

Following are the two primary scenarios where Dind can be needed:

  1. Folks developing and testing Docker need Docker as a Container for faster turnaround time.
  2. Ability to create multiple Docker hosts with less overhead. “Play with Docker” falls in this scenario.

The following picture illustrates how containers running in Dind are related to containers running on the host machine.

dind1

Dind, C1 and C2 are containers running on the host machine. Dind is a Docker container running its own Docker engine. C3 and C4 are containers running inside the Dind container.

Following example illustrates Dind:

I have Docker 1.13RC version as shown below:

$ docker version
Client:
 Version:      1.13.0-rc2
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   1f9b3ef
 Built:        Wed Nov 23 06:24:33 2016
 OS/Arch:      linux/amd64

Server:
 Version:             1.13.0-rc2
 API version:         1.25
 Minimum API version: 1.12
 Go version:          go1.7.3
 Git commit:          1f9b3ef
 Built:               Wed Nov 23 06:24:33 2016
 OS/Arch:             linux/amd64
 Experimental:        false

Let’s start the Dind container. It needs to run in privileged mode since it mounts system files from the host:

docker run --privileged --name dind1 -d docker:1.8-dind

We can look at the Docker version inside Dind:

# docker version
Client:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 18:01:15 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 18:01:15 UTC 2015
 OS/Arch:      linux/amd64

Even though the host machine is running a Docker 1.13 RC version, we can test Docker 1.8.3 inside the container using the above example.
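Containers like C3 and C4 in the earlier picture can be started by exec-ing into the Dind container. A minimal sketch, reusing the dind1 container started above (the inner container name is illustrative):

```shell
# Talk to the inner Docker engine via docker exec and start a
# container (C3/C4 in the picture) inside the Dind container.
docker exec dind1 docker run -d --name inner1 busybox sleep 3600

# The inner container is visible only to the Dind engine...
docker exec dind1 docker ps

# ...and does not show up in the host engine's container list.
docker ps
```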

For continuous integration (CI) use cases, the CI system needs to build containers. With Jenkins, Docker containers need to be built from the Jenkins master or a Jenkins slave, which themselves run as containers. For this scenario, a full Docker engine inside the Jenkins container is not necessary; it is enough to have the Docker client in the Jenkins container and use the Docker engine of the host machine. This can be achieved by mounting “/var/run/docker.sock” from the host machine.

Following diagram illustrates this use-case:

dind2

Jenkins runs as a container. C1 and C2 are containers started from the host machine. C3 and C4 are Docker containers started from the Docker client inside Jenkins. Since the host’s Docker engine is shared with Jenkins, C3 and C4 are created on the same host and share the same hierarchy as C1 and C2.

Following is an example of a Jenkins container that mounts /var/run/docker.sock from the host machine:

docker run --rm --user root --name myjenkins -v /var/run/docker.sock:/var/run/docker.sock -p 8080:8080 -p 50000:50000 jenkins

Following command shows the Docker version inside Jenkins container:

# docker version
Client:
 Version:      1.13.0-rc4
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   88862e7
 Built:        Sat Dec 17 01:34:17 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0-rc2
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   1f9b3ef
 Built:        Wed Nov 23 06:24:33 2016
 OS/Arch:      linux/amd64
 Experimental: false

1.13 RC4 is the Docker client version installed inside the Jenkins container, and 1.13 RC2 is the Docker engine version installed on the Docker host.
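Since the Jenkins container talks to the host’s engine through the mounted socket, any container started from inside Jenkins ends up as a sibling on the host. A minimal sketch, reusing the myjenkins container started above and assuming the Docker client is installed in the Jenkins image (the container name built1 is illustrative):

```shell
# Start a container using the Docker client inside Jenkins;
# the request goes over /var/run/docker.sock to the host engine.
docker exec myjenkins docker run -d --name built1 busybox sleep 3600

# On the host, built1 shows up as a sibling of the Jenkins
# container -- not nested inside it.
docker ps --filter name=built1
```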

“Play with Docker”

The application is hosted in the public cloud and can be accessed as a SaaS service using the following link. It can also be run on a local machine. Following are some capabilities that I have tried:

  • Run traditional non-service based containers.
  • Create Swarm mode cluster and run services in the Swarm cluster.
  • Exposed ports in the services can either be accessed from localhost or can be accessed externally by tunneling with ngrok.
  • Create bridge and overlay networks.

Following is a screenshot of the application hosted in public cloud where I have created a 5 node Swarm cluster with 2 masters and 3 slaves.

dind3.PNG

To create a 5 node cluster, the typical approach would be to use 5 different hosts or VMs, which is a huge burden on resources. Using “Play with Docker”, we create the 5 node cluster with 5 Dind containers instead. For non-production testing scenarios, this saves a lot of resources.
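The same idea can be reproduced manually with Dind containers. A minimal sketch creating a 2 node Swarm cluster out of 2 Dind containers (the container names and the docker:dind image tag are illustrative; any 1.12+ Dind image should work):

```shell
# Create two "Docker hosts" as privileged Dind containers.
docker run --privileged --name node1 -d docker:dind
docker run --privileged --name node2 -d docker:dind

# Make node1 a Swarm manager and fetch the worker join token.
docker exec node1 docker swarm init
TOKEN=$(docker exec node1 docker swarm join-token -q worker)

# Join node2 to the cluster using node1's bridge IP.
NODE1_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' node1)
docker exec node2 docker swarm join --token "$TOKEN" "$NODE1_IP:2377"

# The manager should now list both nodes.
docker exec node1 docker node ls
```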

Following are some limitations in the SaaS version:

  • There is a limit of 5 nodes.
  • Sessions are active only for 4 hours.
  • The usual Dind limitations apply here.

Let’s start a simple web server with 2 replicas:

docker service create --replicas 2 --name web -p 8080:80 nginx

Following output shows the service running:

$ docker service ps web
ID            NAME   IMAGE         NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
dmflqoe67pr1  web.1  nginx:latest  node3  Running        Running 56 seconds ago
md47jcisfbeb  web.2  nginx:latest  node4  Running        Running 57 seconds ago

The service can be accessed either from the Dind host using curl, or externally by tunneling with ngrok and accessing it over the internet. Following is an example of exposing the service to the outside world using ngrok:

docker run --net host -ti jpetazzo/ngrok http 10.0.15.3:8080

This returns a URL that can be accessed from the internet to reach the nginx service we started earlier.
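Accessing the service locally from one of the nodes is the simplest check. Since the Swarm routing mesh publishes port 8080 on every node, the request can go to any node, not just the ones running the replicas:

```shell
# The nginx welcome page should come back from any node in the
# cluster, regardless of where the two replicas landed.
curl -s http://localhost:8080 | head -5
```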

“Play with Docker” can also be installed on a local machine. The advantage here is that we can tweak the application to our needs; for example, we can install a custom Docker version, increase the number of Docker hosts, keep the sessions always up, etc.

Following are some internals on the application:

  • Base machine needs to have Docker 1.13RC2 running.
  • The application is written in Go and runs as a container.
  • The Dind containers are not the official Docker Dind container; “franela/dind” is used.
  • The Go container that runs the main application does a volume mount of “/var/run/docker.sock”. This allows the Dind containers to run on the base machine.

Following picture shows the container hierarchy for this application.

dind4

The “Golang” container is in the same hierarchy as the Dind containers. The Dind containers simulate Docker hosts here. C1-C4 are user-created containers on the 2 Docker hosts.

To install “Play with Docker” in my localhost, I followed the steps below:

Installed docker 1.13.0-rc2
git clone https://github.com/franela/play-with-docker.git
installed go1.7
docker swarm init
docker pull franela/dind
cd play-with-docker
go get -v -d -t ./...
export GOPATH=~/play-with-docker
docker-compose up

My localhost is Ubuntu 14.04 VM running inside Windows machine.

Following is the 2 node Swarm cluster I created:

$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
p6koe4bo8rn7hz3s4y7eddqwz *  node1     Ready   Active        Leader
yuk6u9r3o6o0nblqsiqjutoa0    node2     Ready   Active

Following are some problems I faced with local installation:

  • When I started docker-compose, the application crashed once in a while. I was able to work around this problem by restarting docker-compose.
  • For swarm mode services, I was not able to access exposed service using host port number. For regular containers, I was able to access exposed host port.

I did not face the above 2 problems when I accessed the application as SaaS.

Thanks to Marcos Nils for helping me with few issues faced during my local installation.


Docker for AWS – Deployment options

In this blog, I will cover 5 different options to deploy Docker containers on AWS infrastructure. There are pros and cons to each option; the goal of this blog is not to suggest that some options are better than others, but to highlight the suitable option for a particular use case. I have taken a sample multi-container application and deployed it with all 5 options to illustrate this. Following are the 5 options/models discussed in this blog:

  1. Docker Machine for AWS
  2. Docker for AWS
  3. Docker cloud for AWS
  4. Docker Datacenter for AWS
  5. AWS ECS

I have a separate blog for each of the above deployment options, linked from this blog.

Sample application

Following is the sample application used in this blog:

docker_aws10

The “client” service has a single client container task, and the “vote” service has multiple vote container tasks. Both services are deployed on a multi-node cluster. The “client” service is used to access the multi-container “vote” service; the “vote” service can also be accessed through an external load balancer. The goal of the sample application is to illustrate a multi-node cluster, a multi-container application, orchestration, container networking across hosts, external load balancing, service discovery and internal load balancing.

Docker-machine for AWS

Docker-machine has an EC2 driver for creating a Docker node in AWS. A Docker node in this context means an AWS EC2 instance with Docker pre-installed. Docker-machine also sets up secure ssh access to the EC2 instance. Once the basic node setup is done, the user can use either traditional Swarm or Swarm mode for orchestration. This approach provides minimal integration with AWS, but it is very easy to start with and useful for developers who want to try out Docker containers in the AWS cloud. For more details on Docker-machine for AWS, please refer here.
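Creating such a node looks roughly like this (a minimal sketch; the region, instance type and node name are placeholders, and AWS credentials are assumed to be available in the environment or in ~/.aws/credentials):

```shell
# Create an EC2 instance with Docker pre-installed and
# secure ssh/TLS access configured by docker-machine.
docker-machine create --driver amazonec2 \
    --amazonec2-region us-east-1 \
    --amazonec2-instance-type t2.micro \
    aws-node1

# Point the local Docker client at the new remote engine.
eval $(docker-machine env aws-node1)
docker info
```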

Docker for AWS

As part of the Docker 1.12 announcement, Docker released its AWS integration as beta software. With this software, Docker is trying to simplify AWS integration by tying Docker more closely to AWS services like the load balancer, security groups, CloudWatch etc. Compared to docker-machine, this option provides much closer integration with AWS services. System containers added by Docker run in the EC2 instances and provide tight integration between user containers and AWS services; for example, one of the system containers listens for host-exposed ports and automatically adds them to the AWS ELB. Currently, there are limited options to change the configuration, which will hopefully improve when the product comes out of beta. This option is useful for developers and operations folks who are used to both Docker tools and AWS services. For more details on Docker for AWS, please refer here.

Docker Cloud for AWS

Docker cloud is a paid hosted service from Docker to manage Containers. Docker cloud can be used to manage nodes in the cloud or in local data center. By providing AWS credentials, Docker cloud can create and manage AWS EC2 instances and Docker containers will be created on these EC2 instances. Since Docker cloud was an acquisition, it does not use some of the Docker ecosystem software. In terms of integration with AWS, Docker cloud provides minimal integration at this point. Docker cloud provides a lot of value in terms of simplifying infrastructure management and deployment of complex micro-services. This option is useful for folks who want a simple hosted solution with minimal integration around AWS services. For more details on Docker cloud for AWS, please refer here.

Docker Datacenter for AWS

Docker Datacenter is Docker’s enterprise-grade CaaS (Container as a Service) solution, where they have integrated their open source software with some proprietary software and support to make a commercial product. Docker Datacenter is an application comprising Universal Control Plane (UCP), Docker Trusted Registry (DTR), the Docker engine and supporting services running as containers. Docker Datacenter for AWS means running these system services on AWS EC2 instances along with the application containers that the system services manage. Docker Datacenter is an enterprise-grade solution with multi-tenancy support, and it provides nice integration with Lightweight Directory Access Protocol (LDAP) and Role-Based Access Control (RBAC). Docker Datacenter for AWS provides a secure solution with a clear separation between private and public subnets, and it provides high availability with multiple UCP controllers and DTR replicas. This option is useful for enterprises who want a production-grade Docker deployment with tight integration around AWS services. For more details on Docker Datacenter for AWS, please refer here.

AWS ECS

AWS has the EC2 Container Service (ECS) for folks who want to deploy Docker containers on AWS infrastructure. With ECS, Amazon provides its own scheduler to manage Docker containers. ECS integrates very well with other AWS services, including the load balancer, CloudWatch, CloudFormation templates etc. The workflow is a little different for folks used to Docker tools, so this option is not suitable for those who want to stay within the Docker ecosystem. It can be very powerful once ECS integrates with all AWS services, as that could allow seamless movement between VMs and containers. The task and service definition file formats do not seem flexible. The good thing with ECS is that users are not charged for containers or for ECS itself, but only for the EC2 instances. This option seems more suitable for folks who have been using AWS for a long time and want to try out Docker containers. For more details on AWS ECS, please refer here.
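An ECS deployment revolves around a task definition that is registered with the service and then run on a cluster. A minimal sketch with the AWS CLI (the family, container name, service name and cluster name are illustrative):

```shell
# Register a task definition for a single nginx container.
aws ecs register-task-definition \
    --family vote-app \
    --container-definitions '[{
        "name": "vote",
        "image": "nginx",
        "memory": 128,
        "portMappings": [{"containerPort": 80, "hostPort": 8080}]
    }]'

# Run it as a long-running service with 2 tasks on a cluster.
aws ecs create-service --cluster default \
    --service-name vote \
    --task-definition vote-app \
    --desired-count 2
```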

Following is a brief comparison between the 5 solutions:

Docker version:

  • Docker Machine for AWS: latest Docker version (1.12.1 in my case); no flexibility to select the Docker version.
  • Docker for AWS: latest Docker version (1.12 in my case); no flexibility to select the Docker version.
  • Docker Cloud for AWS: uses 1.11; no flexibility to select the Docker version.
  • Docker Datacenter for AWS: uses 1.11; no flexibility to select the Docker version.
  • AWS ECS: uses 1.11; no flexibility to select the Docker version.

Orchestration:

  • Docker Machine for AWS: traditional Swarm using external discovery or Swarm mode can be used; needs to be set up manually.
  • Docker for AWS: Swarm mode is integrated and available automatically.
  • Docker Cloud for AWS: uses a proprietary scheduler.
  • Docker Datacenter for AWS: traditional Swarm is used; the KV store is automatically set up.
  • AWS ECS: uses the AWS proprietary scheduler; there is a plan to integrate external schedulers.

Networking:

  • Docker Machine for AWS: Docker Libnetwork.
  • Docker for AWS: Docker Libnetwork.
  • Docker Cloud for AWS: uses Weave.
  • Docker Datacenter for AWS: Docker Libnetwork.
  • AWS ECS: AWS VPC based networking.

Application definition:

  • Docker Machine for AWS: Compose and DAB.
  • Docker for AWS: Compose and DAB.
  • Docker Cloud for AWS: Stackfile.
  • Docker Datacenter for AWS: Compose.
  • AWS ECS: task and service definition files.

Integration with AWS services:

  • Docker Machine for AWS: very minimal integration.
  • Docker for AWS: good integration; VPC, ELB, security groups and IAM roles get automatically set up.
  • Docker Cloud for AWS: minimal integration.
  • Docker Datacenter for AWS: good integration; availability zones, VPC, ELB, security groups and IAM roles get automatically set up.
  • AWS ECS: very good integration; integration available with the classic or application load balancer, CloudWatch logs and autoscaling groups.

Cost (in addition to EC2 instance cost):

  • Docker Machine for AWS: free.
  • Docker for AWS: currently in beta phase; pricing not known.
  • Docker Cloud for AWS: 1 node and 1 private repository free; charges apply after that.
  • Docker Datacenter for AWS: paid service; free for a 30-day trial period.
  • AWS ECS: free.

Following are some things that I would like to see:

  • AWS ECS allowing an option to use Swarm scheduler.
  • Docker for AWS, Docker cloud and Docker Datacenter using a common networking and orchestration solution.
  • It would be good to have a common task definition format for applications, or an option to automatically convert between the formats internally. This would allow users to move easily between these options while keeping the same task definition format.


AWS ECS – Docker Container service

In this blog, I will cover AWS ECS Docker Container service. ECS is an AWS product. This blog is part of my Docker for AWS series and uses the sample voting application for illustration.

AWS has the EC2 Container Service (ECS) for folks who want to deploy Docker containers on AWS infrastructure. For the basics of AWS ECS, you can refer to my previous blog here. With ECS, Amazon provides its own scheduler to manage Docker containers. ECS integrates very well with other AWS services, including the load balancer, logging service, CloudFormation templates etc. AWS recently introduced the Application Load Balancer (ALB), which does L7 load balancing and integrates well with ECS; using ALB, we can load balance services directly across containers. With ECS, users get charged for the EC2 instances and not for the containers.

To demonstrate ECS usage, we will deploy voting service application in ECS cluster.

Continue reading AWS ECS – Docker Container service

Docker Datacenter for AWS

In this blog, I will cover Docker datacenter usage with AWS. This blog is part of my Docker for AWS series and uses the sample voting application for illustration.

Docker Datacenter is Docker’s enterprise-grade CaaS (Container as a Service) solution, where they have integrated their open source software with some proprietary software and support to make a commercial product. Docker Datacenter can be deployed on-premise or in cloud providers like AWS, and it is free for a 30-day trial period.

Docker Datacenter Architecture

Following picture shows the core components of Docker Datacenter:

Continue reading Docker Datacenter for AWS

Docker Cloud for AWS

In this blog, I will cover Docker cloud usage with AWS. This blog is part of my Docker for AWS series and uses the sample voting application for illustration.

Docker Cloud is a hosted service from Docker to manage containers. It is free to try for 1 private repository and 1 node, and is chargeable after that. Docker Cloud was originally an acquisition of Tutum. It can be used to manage infrastructure nodes in the cloud or nodes in a local data center. For the basics of Docker Cloud/Tutum, please refer to my earlier blog here. Since Docker Cloud was an acquisition, it does not use some of the Docker ecosystem software. Following are some important differences:

Continue reading Docker Cloud for AWS

Docker for AWS beta

In this blog, I will cover “Docker for AWS” beta service launched by Docker. This blog is part of my Docker for AWS series and uses the sample voting application for illustration.

As part of the Docker 1.12 announcement, Docker released its AWS integration as beta software. With this software, Docker is trying to simplify AWS integration by tying Docker more closely to AWS services like the load balancer, security groups, logs etc. Docker launched a similar integration service with Microsoft Azure as well. Docker 1.12 RC4 is available as part of this integration, so the Swarm mode feature can be used.
Following are some features that Docker has added as part of this integration with AWS:

Continue reading Docker for AWS beta