A few weeks back, I gave a presentation at the Container Conference, Bangalore, comparing the different solutions available to deploy Docker in the public cloud.
Slides are available here. I have also put the necessary steps, along with a short video for each of the options, on the GitHub page here.
Abstract of the talk:
Containers provide portability for applications across private and public clouds. Since there are many options to deploy Docker containers in the public cloud, customers can find the decision-making process confusing. I will compare Docker Machine, Docker Cloud, Docker Datacenter, Docker for AWS/Azure/Google Cloud, AWS ECS, Google Container Engine, and Azure Container Service. A sample multi-container application will be deployed using each of the options. The deployment differences, including the technical internals of each option, will be covered. At the end of the session, the user will be able to choose the right Docker deployment option for their use case.
- I have focused mainly on Docker-centric options in the comparison.
- There are a few CaaS platforms, like Tectonic and Rancher, that I have not included since I did not get a chance to try them.
- Since all the solutions are under active development, some of the gaps mentioned here will likely be addressed in future releases.
Mantl is an open-source project from Cisco that provides an integrated solution for deploying distributed microservices. Any company deploying microservices has to integrate many different components before the solution becomes production-ready. Mantl makes this easier by integrating those components and providing the glue software that ties them together. In this blog, I will cover the following:
- Distributed microservice infrastructure components and the need for Mantl
- Mantl architecture
- Mantl installation using Vagrant
- Mantl installation using the AWS public cloud
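For the Vagrant route, the install can be sketched as below. The repository location reflects where the project lived at the time I tried it; it may have moved in current releases.

```shell
# Clone the Mantl repository (hosted under CiscoCloud at the time of writing)
git clone https://github.com/CiscoCloud/mantl.git
cd mantl

# Bring up the Vagrant-based development cluster; this boots the VMs
# and runs the bundled provisioning against them
vagrant up
```
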
Following are the typical components in a container-based microservices infrastructure:
Continue reading Microservices Infrastructure using Mantl
Typical open-source demo applications come packaged as Vagrant applications that start a bunch of VMs and do automatic provisioning. I have a Windows machine with VirtualBox and VMware Player installed. Since VirtualBox does not support nested virtualization with 64-bit VMs (more details can be found in my previous blogs on VirtualBox and VMware Player), I use VMware Player to try out demo applications that need 64-bit VMs. The demo applications typically run on Linux, so running them on Windows with VirtualBox is ruled out. I was recently trying out the Mantl project for deploying distributed microservices, and I found that it was very slow to run in VMware Player with nested virtualization. I then tried to run the application in AWS and found that AWS does not support nested virtualization (more details can be found here). Next, I tried Google Cloud. Even though Google Cloud supports nested virtualization, hardware virtualization is disabled on the guest VMs, which prevents running 64-bit VMs inside Google Cloud VMs. After I ran out of these options, I stumbled upon the possibility of using bare metal cloud. I used the bare metal cloud from Packet, and it worked great for the use case mentioned above. Though this is not a typical use case, I was very happy with the performance and the possibilities this provides. In this blog, I will share the use cases for bare metal cloud and my experiences using the Packet service.
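A quick way to check whether a Linux machine actually exposes hardware virtualization (which nested 64-bit VMs require) is to look at the CPU flags; this is a generic check, not specific to any one provider:

```shell
# Count CPU virtualization flags: vmx (Intel VT-x) or svm (AMD-V).
# A result of 0 means no hardware virtualization is exposed to this
# machine, so nested 64-bit VMs will not run on it.
grep -E -c '(vmx|svm)' /proc/cpuinfo
```

On a bare metal server from Packet this check reports the flags of the physical CPU, which is why nested VM workloads run at full speed there.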
Bare metal cloud use case
Typical cloud providers like Amazon, Google, DigitalOcean, and Microsoft rent out VMs as part of their compute offerings. These VMs run on top of a hypervisor. Though the user is guaranteed a specific level of performance, these VMs share resources with other VMs running on the same host machine. With bare metal cloud, the provider rents out machines that are not shared with anyone else. Providers offer different bare metal configurations, the user can choose based on their performance needs, and pricing is based on the performance the bare metal server provides. Following are some advantages that bare metal cloud provides:
Continue reading Baremetal cloud using Packet
I have used and loved Vagrant for a long time, and I recently used Consul; I was very impressed by both of these DevOps tools. Recently, I watched some of the HashiConf videos and learned that HashiCorp has an ecosystem of tools addressing DevOps needs, and that these tools can be chained together to create a complete application delivery platform from development to production. Atlas is HashiCorp's product that combines its open-source tools into a platform; it has a commercial version as well. In this blog, I will cover a development-to-production workflow for a LAMP application stack using Atlas, Vagrant, Packer and Terraform.
Overview of Vagrant, Packer, Terraform and Atlas
Vagrant provides a repeatable VM development environment. Vagrant integrates well with major hypervisors like VirtualBox, VMware, and Hyper-V. The "Vagrantfile" describes the VM settings as well as the initial bootstrap provisioning that needs to be done on the VM. Vagrant also integrates well with provisioning tools like Chef, Puppet and Ansible to describe the provisioning. Simply by doing "vagrant up", the complete VM environment is exactly reproduced. Typical problems like "it does not work for me even though it's working on your machine" go away.
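The day-to-day Vagrant workflow looks like this; the box name below is just an illustrative example:

```shell
# Generate a Vagrantfile for an Ubuntu base box (box name is an example)
vagrant init ubuntu/trusty64

# Boot the VM and run whatever provisioning the Vagrantfile declares
vagrant up

# SSH into the running VM to work inside the environment
vagrant ssh

# Tear everything down; another "vagrant up" reproduces it exactly
vagrant destroy -f
```
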
Packer is a tool to create machine images for providers like VirtualBox, VMware, AWS, and Google Cloud. The Packer configuration is described in a JSON file, and images for multiple providers can be created in parallel. The typical workflow is for the developer to create a development environment in Vagrant; once it becomes stable, the production image is built with Packer. Since the provisioning is baked into the image, deployment of production images becomes much faster. The following link describes how Vagrant and Packer fit well together.
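Driving a Packer build is a two-step affair; the template file name here is a placeholder for whatever JSON template describes your builders and provisioners:

```shell
# Check the JSON template for syntax and configuration errors first
packer validate template.json

# Build images for every builder declared in the template
# (e.g. virtualbox-iso, amazon-ebs); builds run in parallel
packer build template.json
```
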
Continue reading Hashicorp Atlas workflow with Vagrant, Packer and Terraform
In this blog, I will cover my experience trying out the Google Container Engine service.
- A Google Cloud account is needed.
- Install the Google Cloud SDK.
Google Container Engine is not available in the normal gcloud SDK installation. To use the Container Engine service, we need to update the preview component.
$ gcloud components update preview
I followed the two examples mentioned in the Container Engine documentation.
In this example, we create a cluster with a single master and a single worker node. We create a pod running a WordPress container in the cluster and expose it to the external world. Since there is only one pod, we don't create a service.
Following are the commands used:
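A sketch of the flow is below. The cluster name and zone are placeholders, and the exact gcloud/kubectl syntax has changed across SDK versions (at the time, these commands lived under "gcloud preview container"), so treat this as illustrative rather than exact:

```shell
# Create a container cluster with a single worker node
gcloud container clusters create wordpress-cluster \
    --num-nodes 1 --zone us-central1-a

# Launch a WordPress pod from the official image
kubectl run wordpress --image=wordpress --port=80

# Expose WordPress to the external world via a load balancer
# (depending on the kubectl version, the resource created by
# "kubectl run" and therefore the expose target may differ)
kubectl expose pod wordpress --type=LoadBalancer --port=80

# Watch for the external IP to be provisioned
kubectl get services wordpress
```
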
Continue reading Kubernetes and Google container engine
In this blog, I will cover the steps to run Kubernetes on Google Compute Engine VMs. I used the steps mentioned here.
- A Google Cloud account is needed.
- Install the Google Cloud SDK.
The first step is to download and unzip the Kubernetes tar file from here.
Next, we create the cluster using the provided script.
The above script creates a cluster with 1 master and 4 minions. It also sets up all the necessary services on both the master and the minion nodes.
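The download-and-create steps can be sketched as follows; the tarball name stands in for whichever release you downloaded, and KUBERNETES_PROVIDER selects the Google Compute Engine scripts:

```shell
# Unpack the downloaded Kubernetes release tarball
tar xzf kubernetes.tar.gz
cd kubernetes

# Select the Google Compute Engine provider scripts
export KUBERNETES_PROVIDER=gce

# Bring up the cluster: creates the master and minion VMs
# and starts the supporting services on them
cluster/kube-up.sh
```
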
Let's look at the VMs created:
Continue reading Kubernetes on Google cloud
Earlier, I had written a blog on Docker orchestration. This is a pretty new area, and different solutions are being developed to address the problem. A few weeks back, I wrote a blog on the AWS EC2 Container Service. Kubernetes is a Docker orchestration engine used to manage a cluster of containers. Google initially developed Kubernetes; it is now an open-source project, and the source code is available here. Google Cloud's Container Engine uses Kubernetes to manage Docker containers. Kubernetes can be used standalone or with a cloud service like AWS EC2.
Following are the basic building blocks within Kubernetes:
- Cluster (master and minions) – The cluster of machines on which container services are launched. There is one master node, and the other nodes are called worker nodes or minions. The master node runs the etcd configuration database, the scheduler that schedules containers, the API server that external clients talk to, and the replication controller that manages the state of containers. Each minion node runs a slave agent that talks to the master node.
- Pods – A pod can be a single container or a collection of containers. Containers within a pod share the same characteristics and are brought up and torn down together. They are normally launched on the same minion. An example could be a pod containing Redis master and slave database containers. The pod configuration is defined in a JSON file.
- Service – A service is an abstraction over pods that is useful for service discovery and for exposing environment variables to other services. An example could be a database service exposing port numbers to a web service.
- Labels – Labels are used with pods and services for easier management of containers through filters. Rather than managing individual pods and services, containers can be managed at the label level. For example, we can say destroy all pods carrying the "frontend" label.
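The label idea becomes concrete with kubectl's selector flag; the file name and label key below are illustrative placeholders:

```shell
# Create a pod from its JSON definition file
kubectl create -f redis-pod.json

# List only the pods that carry the "frontend" label
kubectl get pods -l name=frontend

# Delete every pod with that label in one command,
# instead of deleting pods one by one
kubectl delete pods -l name=frontend
```
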
Continue reading Kubernetes – Overview