I did the following presentation, “Devops with Kubernetes”, at the Kubernetes Sri Lanka inaugural meetup earlier this week. Kubernetes is currently one of the most popular open source projects in the IT industry. Its abstractions, design patterns, integrations and extensions make it very elegant for Devops. The slides delve a little deeper into these topics.
I presented this webinar, “Top 3 reasons why you should run your enterprise workloads on GKE”, at the NEXT100 CIO forum earlier this week. Businesses are increasingly moving to Containers and Kubernetes to simplify and speed up their application development and deployment. The slides and demo cover the top reasons why Google Kubernetes Engine (GKE) is one of the best Container management platforms for enterprises to deploy their containerized workloads.
Following are the slides and recording:
This week, I did a presentation at Container Conference, Bangalore. The conference was well conducted and was attended by 400+ quality attendees. I enjoyed some of the sessions and also had fun talking to attendees. The topic I presented was “Deep dive into Kubernetes Networking”. Other than covering Kubernetes networking basics, I also touched on network policy, Istio service mesh, hybrid cloud and best practices.
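To give a flavor of the network policy portion of the talk, here is a minimal sketch of a Kubernetes NetworkPolicy that restricts ingress to a set of pods. The policy name and the `app=db`/`app=web` labels are illustrative, not from the actual demo:

```shell
# Apply a NetworkPolicy that allows ingress to "db" pods
# only from pods labeled app=web (names/labels are illustrative).
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-web
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
EOF

# Verify the policy was created
kubectl get networkpolicy db-allow-web
```

Note that the policy is only enforced if the cluster's network plugin supports network policy.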
Demo code and Instructions:
Recording of the Istio section of the demo (the recording was not made at the conference):
As always, feedback is welcome.
I was out of blogging action for the last 9 months as I was settling into my new job at Google and also had to take care of some personal stuff. Things are getting a little clearer now and I am hoping to resume blogging soon…
A few weeks back, I gave a presentation at Container Conference, Bangalore comparing the different solutions available to deploy Docker in the public cloud.
Abstract of the talk:
Containers provide portability for applications across private and public clouds. Since there are many options to deploy Docker Containers in the public cloud, customers get confused in the decision-making process. I will compare Docker Machine, Docker Cloud, Docker Datacenter, Docker for AWS, Azure and Google Cloud, AWS ECS, Google Container Engine and Azure Container Service. A sample multi-container application will be deployed using the different options. The deployment differences, including technical internals, for each option will be covered. At the end of the session, the user will be able to choose the right Docker deployment option for their use-case.
- I have focused mainly on Docker-centric options in the comparison.
- There are a few CaaS platforms like Tectonic and Rancher that I have not included since I did not get a chance to try them.
- Since all the solutions are under active development, some of these gaps will be covered in the future.
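As a small taste of one of the compared options, Docker Machine can provision a Docker host in a public cloud with a single command. The sketch below uses the AWS EC2 driver; the machine name is illustrative and the credentials are assumed to be in environment variables:

```shell
# Provision a Docker host on AWS EC2 using Docker Machine
# (machine name and region are placeholders for illustration)
docker-machine create --driver amazonec2 \
  --amazonec2-access-key "$AWS_ACCESS_KEY_ID" \
  --amazonec2-secret-key "$AWS_SECRET_ACCESS_KEY" \
  --amazonec2-region us-east-1 \
  aws-docker-node

# Point the local Docker client at the new remote host
eval "$(docker-machine env aws-docker-node)"
docker info
```

The other options in the comparison (Docker for AWS, ECS, GKE, ACS and so on) trade this per-host model for managed cluster orchestration.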
Kubernetes CRI (Container Runtime Interface) was introduced in experimental mode in the Kubernetes 1.5 release. CRI introduces a common Container runtime layer that allows the Kubernetes orchestrator to work with multiple Container runtimes like Docker, Rkt, Runc, Hypernetes etc. CRI makes it easy to plug a new Container runtime into Kubernetes. The Minikube project simplifies Kubernetes installation for development and testing purposes. Minikube runs the Kubernetes master and worker components in a single VM, which lets developers and users of Kubernetes easily try it out. In this blog, I will cover the basics of Minikube usage, an overview of CRI and the steps to try out CRI with Minikube.
Kubernetes software is composed of multiple components, and beginners normally get overwhelmed by the installation steps. It is also handy to have a lightweight Kubernetes environment for development and testing purposes. Minikube packs all Kubernetes components into a single VM that runs on the local laptop, with both master and worker functionality combined in that one VM.
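A typical Minikube workflow looks roughly like the following sketch (the sample image is illustrative):

```shell
# Start a single-VM Kubernetes cluster on the local machine
minikube start

# Alternatively, select a different container runtime through CRI,
# e.g. rkt (this is the experimental path covered later in this blog)
# minikube start --container-runtime=rkt --network-plugin=cni

# kubectl is automatically pointed at the Minikube cluster;
# the single node carries both master and worker roles
kubectl get nodes

# Deploy and expose a sample workload (image name is illustrative)
kubectl run hello --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello --type=NodePort

# Open the Kubernetes dashboard, and tear down when done
minikube dashboard
minikube stop
```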
Following are some major features present in Minikube:
The most popular Container orchestration solutions available in the market are Kubernetes, Swarm and Mesos. I have used Kubernetes and Swarm, but never got a chance to use Mesos or DC/OS. I had a bunch of questions about Mesos and DC/OS and never got the time to explore them. Recently, I saw the announcement about Mesosphere open sourcing DC/OS, and I found it a perfect opportunity to try out Opensource DC/OS. In this blog, I have captured the answers to the questions I had regarding Mesos and DC/OS. In the next blog, I will cover some hands-on work that I did with Opensource DC/OS.
What is the relationship between Apache Mesos, Opensource DC/OS and Enterprise DC/OS?
Apache Mesos is the open source distributed orchestrator for Container as well as non-Container workloads. Both Opensource DC/OS and Enterprise DC/OS are built on top of Apache Mesos. Opensource DC/OS adds Service discovery, the Universe package repository for different frameworks, CLI and GUI support for management, and Volume support for persistent storage. Enterprise DC/OS adds enterprise features around security, performance, networking, compliance, monitoring and multi-team support that the Opensource DC/OS project does not include. The complete list of differences between Opensource and Enterprise DC/OS is captured here.
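For example, the Universe packages mentioned above are managed through the DC/OS CLI. A rough sketch, assuming the CLI is already attached to a running cluster (the package names are just examples):

```shell
# Search the Universe package repository
dcos package search spark

# Install a framework from Universe
# (marathon-lb, the Marathon load balancer, as an example)
dcos package install marathon-lb --yes

# List the services running on the cluster
dcos service
```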
What does Mesosphere do and how is it related to Apache Mesos?
Mesosphere is the company behind products that are built on top of Apache Mesos. Many folks working at Mesosphere contribute to both Apache Mesos and Opensource DC/OS. Mesosphere currently has the following products:
- DC/OS Enterprise – Orchestration solution
- Velocity – CI/CD solution
- Infinity – Big data solution
Why is DC/OS called an OS?
Folks sometimes get confused and think that Mesos is a Container-optimized OS like CoreOS or Atomic. It is not. Similar to the way a desktop OS provides resource management within a single host, DC/OS provides resource management across an entire cluster. The Mesos master (including the first-level scheduler) and agents are perceived as kernel components, while user space components include frameworks, user space applications, DNS and load balancers. The kernel provides primitives for the frameworks.
What are Mesos frameworks and why are they needed?
Mesos itself only makes resource offers; it does not decide what to run. Frameworks such as Marathon (for long-running services) and Chronos (for scheduled jobs) accept those offers and implement the second-level scheduling that actually launches tasks on the agents.
I recently saw the Openstack self-healing demo from the CoreOS team using Tectonic (the Stackanetes project), and I felt that the boundary between Containers and VMs is blurring. In this blog, I discuss the use-case of deploying Openstack using Containers.
We typically think of Openstack as a VM orchestration tool. Openstack is composed of numerous services, and deploying Openstack as one monolithic blob is pretty complex and difficult to maintain. The demo showed how Containers simplify Openstack deployment. This is a great example of using a Microservices architecture to simplify infrastructure deployment.
The following diagram shows the Openstack deployment model using Containers, where Openstack service containers deploy user VMs. The user VMs deployed using Openstack can run Containers as well.
Following are some notes on the architecture:
- Openstack services like Nova, Heat and Horizon are containerized as Docker Containers using the Openstack Kolla project. Some Openstack services, like Nova, are composed of multiple Containers.
- Infrastructure components like Ceph, Openvswitch and Mongodb are also deployed as Containers.
- For Container deployment, Kolla natively uses Ansible. Kubernetes can also be used for orchestration.
- Containerizing the Openstack services gives all the build, ship and deploy advantages of Containers.
- Using an orchestration solution like Kubernetes gives all the resiliency and deployment advantages for the Openstack services.
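Based on the Kolla workflow described above, a deployment sketch with the Ansible path might look like this (the inventory file path is illustrative):

```shell
# Build Docker images for the Openstack services
kolla-build

# Run sanity checks, then deploy the containerized Openstack
# services with Ansible (inventory path is illustrative)
kolla-ansible prechecks -i /etc/kolla/inventory/multinode
kolla-ansible deploy -i /etc/kolla/inventory/multinode

# Verify that the service containers are running,
# e.g. the multiple containers that make up Nova
docker ps --format '{{.Names}}' | grep nova
```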
This work also shows how Containers and VMs can work closely with each other in a lot of use-cases. There are other Openstack projects, like Magnum and Kuryr, where Containers and VMs intersect. The Magnum project deals with Container orchestration using Openstack, and the Kuryr project deals with Container networking using Openstack Neutron.