Category Archives: Google cloud

NEXT 100 Webinar – Top 3 reasons why you should run your enterprise workloads on GKE

I presented this webinar, “Top 3 reasons why you should run your enterprise workloads on GKE”, at the NEXT100 CIO forum earlier this week. Businesses are increasingly moving to containers and Kubernetes to simplify and speed up their application development and deployment. The slides and demo cover the top reasons why Google Kubernetes Engine (GKE) is one of the best container management platforms for enterprises to deploy their containerized workloads.

Following are the slides and recording:

Recording link



Container Conference Presentation

This week, I did a presentation at Container Conference, Bangalore. The conference was well conducted and was attended by 400+ quality attendees. I enjoyed some of the sessions and also had fun talking to attendees. The topic I presented was “Deep dive into Kubernetes Networking”. Other than covering Kubernetes networking basics, I also touched on network policy, the Istio service mesh, hybrid cloud, and best practices.



Demo code and Instructions:

Github link

Recording of the Istio section of the demo (this recording was made separately, not at the conference):

As always, feedback is welcome.

I was out of blogging action for the last 9 months as I was settling into my new job at Google and also had to take care of some personal stuff. Things are getting a little clearer now and I am hoping to resume blogging soon…


My new Journey – Cisco to Google cloud

After an amazing 10+ years at Cisco Systems, I have decided to move on. I have joined Google’s cloud division. I thought it’s a good time to reflect on my learnings at Cisco and what I am looking forward to in the next few years.

Before joining Cisco, I worked in a few startups in the US. I joined Cisco after I moved back to India. I worked in different development engineering groups in Cisco spanning carrier Ethernet, service provider, and data center products. I played different roles including software engineer, technical lead/architect, and engineering manager. Cisco is a great company and it has given me a lot of good opportunities.

I had a great set of managers at Cisco. I would like to especially call out two managers who helped me shape my career. First is Ritesh Dhoot, a charismatic leader who helped me understand the business value in everything we do. Second is Bhaskar Jayakrishnan, whom I admired for his all-round skills and the tenacity with which he takes an opportunity and runs with it. Both of them gave me a lot of freedom to plan and execute work in my own way and also encouraged me to pursue my interests.

I had a great set of colleagues and teams at Cisco. I got the opportunity to lead the UCS switch team in Bangalore and saw the team grow quickly from 5 folks to 25 over a period of 2+ years. I was very humbled by the love and affection that I received from my team, and especially by the very warm farewell. Following is the UCS Bangalore switch team (a few folks are missing) that I am very proud to have been part of:


Considering that things have been so good, folks might ask why I left Cisco. Following are some reasons:

  • I had been in the telecom/networking industry for 16+ years and I wanted a change.
  • I have been active in open source communities over the last 4 years. I am an active techno blogger, author of the “Mastering CoreOS” book, and also a Docker Captain. I found it difficult to balance my office work and my personal interests.
  • I have been in development roles all through my career and I wanted to try out roles closer to customers.
  • Cisco has been great at surviving downturns and has reinvented itself many times. However, I was not convinced about Cisco’s adoption of cloud and its changing strategies in this area.

After I decided to look out, I had a choice between startups and big companies. I did get a few opportunities in startups. Even though I had dabbled in the cloud area for 4+ years, it was not at a professional level, so I thought it would be good to work in a bigger company to understand the breadth of cloud technologies.

What better company could I ask for than Google? I have always admired Google for its super cool technology and the pace at which it innovates. In the cloud area, Google is lagging behind AWS and Azure, primarily because Google started late in the cloud domain. Google currently has a lot of focus on the cloud, and I am confident that Google will catch up to Azure and AWS soon. Following are some reasons why I am very confident about this:

  • Google’s cloud products are already used by Google’s own products. Products like YouTube, Maps, and Photos run on Google’s cloud, and the same technology is exposed to end customers through Google cloud products. Each of these products has 1+ billion users, which means Google cloud products are already proven at scale.
  • Google is a leader in open source technologies like Kubernetes, TensorFlow, and MapReduce, and these technologies are incorporated nicely into Google’s cloud products. This gives Google a head start in areas like machine learning, big data, and containers.
  • There are a lot of integration possibilities between Google’s cloud products and Google’s other products, and that can provide a lot of benefits to consumers on either side.

I started as a Partner Engineer in Google cloud’s Bangalore division. My primary responsibility is the technical enablement of Google cloud partners and creating appropriate solutions for Google cloud customers. I am hoping to understand customer issues, create solutions, and evangelize cloud and Google’s products along the way. Even though this role entails a breadth of Google cloud technologies, I will try to keep some focus on Docker, containers, Kubernetes, and GKE, considering that I am also a Docker Captain.

Even though I started only 6 weeks ago at Google, it feels like it has been a long time. I already feel that I am in the best place and that there is a lot of learning for me to do. In this short span of time, I have attended 3 conferences, given container presentations, and met a few customers and partners along the way. I have also passed the “Google cloud architect” certification.

It does feel weird to be starting on a tangential path compared to my previous background at this stage of my career. There are quite a few challenges that I need to overcome, like getting used to a pre-sales role (I have been in engineering all along), understanding the breadth and depth of cloud technologies, and finally, proving myself in this new role. I am hoping that this will work out well.

I am looking forward to writing more blogs in the Google cloud and Containers/Docker areas.


Google cloud is hiring. If you are passionate about cloud, have relevant experience, and are interested in creating and selling cloud customer solutions, please reach out to me.

Comparing Docker deployment options in public cloud

A few weeks back, I gave a presentation at Container Conference, Bangalore, comparing the different solutions available to deploy Docker in the public cloud.

Slides are available here. I have also put up the steps necessary, along with a short video for each of the options, on the github page here.

Abstract of the talk:

Containers provide portability for applications across private and public clouds. Since there are many options to deploy Docker containers in the public cloud, the decision-making process can be confusing for customers. I will compare Docker Machine, Docker Cloud, Docker Datacenter, Docker for AWS/Azure/Google cloud, AWS ECS, Google Container Engine, and Azure Container Service. A sample multi-container application will be deployed using the different options. The deployment differences, including the technical internals of each option, will be covered. At the end of the session, the user will be able to choose the right Docker deployment option for their use case.


A few notes on the comparison:

  • I have focused mainly on Docker-centric options in the comparison.
  • There are a few CaaS platforms like Tectonic and Rancher that I have not included since I did not get a chance to try them.
  • Since all the solutions are under active development, some of the gaps will get covered by the solutions in the future.
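To make the comparison concrete, here is a minimal sketch, in dry-run form, of how the same sample app could be brought up with two of the compared options: Docker Machine on Google Compute Engine versus Google Container Engine. The commands are built and printed for review rather than executed, and the project name, cluster name, and nginx image are hypothetical placeholders.

```shell
#!/bin/sh
# Dry-run sketch: build the representative commands for two deployment
# options and print them for review. Nothing is executed against a cloud
# account; "demo-project", "demo-cluster", and the nginx image are
# illustrative placeholders.

# Option 1: Docker Machine provisions a single Docker host on GCE;
# a plain "docker run" then deploys the container on that host.
OPT1_PROVISION="docker-machine create --driver google --google-project demo-project docker-host"
OPT1_DEPLOY="docker run -d -p 80:80 nginx"

# Option 2: Google Container Engine creates a managed Kubernetes cluster;
# kubectl then schedules the container onto the cluster.
OPT2_PROVISION="gcloud container clusters create demo-cluster --num-nodes 3"
OPT2_DEPLOY="kubectl run web --image=nginx --port=80"

printf '%s\n%s\n%s\n%s\n' \
  "$OPT1_PROVISION" "$OPT1_DEPLOY" "$OPT2_PROVISION" "$OPT2_DEPLOY"
```

The difference between the two provisioning commands captures the core trade-off from the talk: Docker Machine gives you raw Docker hosts that you manage yourself, while Google Container Engine hands you a managed Kubernetes control plane.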

Microservices Infrastructure using Mantl

Mantl is an open source project from Cisco that provides an integrated solution to deploy distributed microservices. Any company deploying microservices has to integrate different components before the solution becomes production ready. Mantl makes this easier by integrating the different components and providing the glue software that ties them together. In this blog, I will cover the following:

  • Distributed microservice infrastructure components and the need for Mantl
  • Mantl architecture
  • Mantl installation using Vagrant
  • Mantl installation using the AWS public cloud

Microservices infrastructure

Following are typical components in a container-based microservices infrastructure:

Continue reading Microservices Infrastructure using Mantl

Baremetal cloud using Packet

Typical open source demo applications come packaged as Vagrant applications that start a bunch of VMs and do automatic provisioning. I have a Windows machine with VirtualBox and VMware Player installed. Since VirtualBox does not support nested virtualization with 64-bit VMs (more details can be found in my previous blogs on VirtualBox and VMware Player), I use VMware Player to try out demo applications that need 64-bit VMs. The demo applications typically run on Linux, so running them on Windows with VirtualBox is ruled out. I was recently trying the Mantl project for deploying distributed microservices and found that it was very slow to run in VMware Player with nested virtualization. I tried to run the application in AWS and found that AWS does not support nested virtualization (more details can be found here). Then I tried Google cloud. Even though Google cloud supports nested virtualization, hardware virtualization is disabled on the guest VMs, and this prevents running 64-bit VMs inside Google cloud VMs. After I ran out of these options, I stumbled upon the possibility of using bare metal cloud. I used the bare metal cloud from Packet and it worked great for the use case mentioned above. Though this is not a typical use case, I was very happy with the performance and the possibilities this provides. In this blog, I will share the use cases for bare metal cloud and my experiences with the Packet service.
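A quick way to see why nested virtualization fails on most cloud VMs is to check whether the guest CPU exposes the hardware virtualization flags at all. The sketch below assumes a Linux guest; it simply greps /proc/cpuinfo for the Intel VT-x (vmx) or AMD-V (svm) flags:

```shell
#!/bin/sh
# Check whether this (Linux) machine exposes hardware virtualization
# flags. Without vmx/svm, hypervisors like VirtualBox cannot run 64-bit
# nested VMs. On a bare metal server these flags are present, which is
# what made the Packet use case work.
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
  HW_VIRT=yes
else
  HW_VIRT=no
fi
echo "hardware virtualization flags present: $HW_VIRT"
```

On most cloud-provider VMs this prints "no", while on a typical laptop or a bare metal server it prints "yes".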

Bare metal cloud use case

Typical cloud providers like Amazon, Google, DigitalOcean, and Microsoft rent out VMs as part of their compute offerings. These VMs run on top of a hypervisor. Though the user is guaranteed a specific level of performance, these VMs share resources with other VMs running on the same host machine. With bare metal cloud, the cloud provider hosts machines that the user can rent and that are not shared with anyone. Cloud providers offer different configurations for bare metal, the user can choose based on their performance needs, and the pricing is based on the performance provided by the bare metal server. Following are some advantages that bare metal cloud provides:

Continue reading Baremetal cloud using Packet

Hashicorp Atlas workflow with Vagrant, Packer and Terraform

I have used and loved Vagrant for a long time, and I was recently very impressed by Consul as well. After watching some of the HashiConf videos, I learnt that Hashicorp has an ecosystem of tools addressing DevOps needs and that these tools can be chained together to create a complete application delivery platform from development to production. Atlas is Hashicorp’s product that combines its open source tools into a platform, and it has a commercial version as well. In this blog, I will cover a development-to-production workflow for a LAMP application stack using Atlas, Vagrant, Packer, and Terraform.

Overview of Vagrant, Packer, Terraform and Atlas


Vagrant provides a repeatable VM development environment. Vagrant integrates well with major hypervisors like VirtualBox, VMware, and Hyper-V. The “Vagrantfile” describes the VM settings as well as the initial bootstrap provisioning that needs to be done on the VM. Vagrant also integrates well with provisioning tools like Chef, Puppet, and Ansible to describe the provisioning. Simply by doing “vagrant up”, the complete VM environment is exactly reproduced. Typical problems like “it does not work for me even though it’s working on your machine” go away.
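As a concrete illustration, here is a minimal Vagrantfile sketch; the box name, forwarded port, and inline shell provisioning are illustrative choices, not taken from any specific project:

```ruby
# Minimal Vagrantfile sketch -- box name and provisioning are illustrative.
Vagrant.configure("2") do |config|
  # Base image pulled from Vagrant's public box catalog
  config.vm.box = "ubuntu/trusty64"

  # Make the guest's web server reachable from the host
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Initial bootstrap provisioning, run on first "vagrant up"
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y apache2
  SHELL
end
```

With this file checked into the repository, every team member gets an identical VM by simply running “vagrant up”.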


Packer is a tool to create machine images for providers like VirtualBox, VMware, AWS, and Google cloud. The Packer configuration is described in a JSON file, and images for multiple providers can be created in parallel. The typical workflow is for the developer to create the development environment in Vagrant and, once it becomes stable, build the production image with Packer. Since the provisioning part is baked into the image, deployment of production images becomes much faster. The following link describes how Vagrant and Packer fit well together.
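For illustration, a minimal Packer template might look like the sketch below. The project ID, zone, and AMI ID are hypothetical placeholders, and the two builders show how images for multiple providers can be built in parallel from the same provisioning steps:

```json
{
  "builders": [
    {
      "type": "googlecompute",
      "project_id": "demo-project",
      "source_image_family": "ubuntu-1604-lts",
      "zone": "us-central1-a",
      "ssh_username": "ubuntu"
    },
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "lamp-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y apache2"
      ]
    }
  ]
}
```

Running “packer build” on a template like this would produce both a GCE image and an AMI with the provisioning already baked in.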

Continue reading Hashicorp Atlas workflow with Vagrant, Packer and Terraform