Kubernetes CRI (Container Runtime Interface) was introduced in experimental mode in the Kubernetes 1.5 release. CRI introduces a common Container runtime layer that allows the Kubernetes orchestrator to work with multiple Container runtimes like Docker, Rkt, Runc, Hypernetes etc. CRI makes it easy to plug a new Container runtime into Kubernetes. The Minikube project simplifies Kubernetes installation for development and testing purposes. Minikube runs the Kubernetes master and worker components in a single VM, which makes it easy for developers and users of Kubernetes to try it out. In this blog, I will cover the basics of Minikube usage, an overview of CRI and the steps to try out CRI with Minikube.
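As a quick sketch of the basic Minikube workflow (exact flag availability depends on the Minikube version; `--container-runtime` is the flag that selects a runtime other than the default Docker):

```shell
# Start a single-VM cluster; the runtime flag selects e.g. rkt
# instead of the default docker (flag support varies by version).
minikube start --container-runtime=rkt

# The single VM appears as one node running both master and worker roles
kubectl get nodes

# SSH into the VM to inspect the runtime directly
minikube ssh

# Tear down the VM when done
minikube stop
```

This is illustrative only; the commands require a local Minikube and kubectl installation.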
Kubernetes software is composed of multiple components, and beginners normally get overwhelmed by the installation steps. It is also useful to have a lightweight Kubernetes environment for development and testing purposes. Minikube has all Kubernetes components in a single VM that runs on the local laptop; both master and worker functionality are combined in that single VM.
Following are some major features present in Minikube:
Continue reading Kubernetes CRI and Minikube
In this blog, I will cover some of the standardization efforts happening in the Containers area. I will cover some history and the current status, and also mention how the future looks. In the next blog, we will look inside ACI and OCI Container images.
Many developments in the Container area are done as open source projects. That still does not automatically mean these projects will become standards. Following are the areas where Container standardization is important:
- Container image format – Describes how an application is packaged into a Container. The application can be an executable from any programming language. As you would know, Containers package an application along with all its dependencies.
- Container runtime – Describes the environment (namespaces, cgroups etc.) necessary to run the Container and the APIs that a Container runtime should support.
- Image signing – Describes how to create Container image digests and sign them so that Container images can be trusted.
- Image discovery – Describes alternate approaches to discover Container images other than using a registry.
- Container Networking – This is a pretty complex area; it describes ways to network Containers in the same host and across hosts. There are different implementations based on the use case.
Having common Container standards would allow things like this:
Continue reading Container Standards
This is a continuation of my previous blog on macvlan and ipvlan Linux network drivers. In this blog, I will cover the usage of macvlan and ipvlan network plugins with the CoreOS Rkt Container runtime and CNI (Container Network Interface).
Rkt and CNI
Rkt is another Container runtime, similar to Docker. CNI is a Container networking standard proposed by CoreOS and a few other companies. CNI exposes standard APIs that network plugins need to implement. CNI supports plugins like ptp, bridge, macvlan, ipvlan and flannel. IPAM can be managed by a second-level plugin that the CNI plugin calls.
We can use either a multi-node CoreOS cluster or a single-node CoreOS instance for the macvlan example in this blog. I have created a three-node CoreOS cluster using Vagrant. Following is the cloud-config user-data that I used.
macvlan and ipvlan config
Following is the relevant section of Cloud-config for macvlan:
- path: "/etc/rkt/net.d/20-lannet.conf"
In the above cloud-config, we specify the properties of the macvlan plugin, which include the parent interface on which the macvlan interface will reside. We use the IPAM type “host-local” here; this means the IP address will be assigned from within the range “22.214.171.124/24” specified in the configuration. The macvlan mode defaults to “bridge”.
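For illustration, a macvlan CNI configuration of this shape can be written into Rkt's net.d directory; the parent interface (`eth0`) and the subnet below are placeholders, not the actual values from the cloud-config above:

```shell
# Illustrative macvlan CNI config for Rkt: "master" is the parent
# interface and host-local IPAM hands out addresses from "subnet".
# Interface name and subnet are placeholders; writing to /tmp here
# for demonstration -- Rkt actually reads /etc/rkt/net.d/.
mkdir -p /tmp/rkt-net-demo
cat > /tmp/rkt-net-demo/20-lannet.conf <<'EOF'
{
    "name": "lannet",
    "type": "macvlan",
    "master": "eth0",
    "ipam": {
        "type": "host-local",
        "subnet": "20.1.1.0/24"
    }
}
EOF

# Sanity-check that the file is valid JSON
python3 -m json.tool < /tmp/rkt-net-demo/20-lannet.conf > /dev/null && echo "valid JSON"
```

The `mode` key is omitted here, so the macvlan plugin falls back to its default “bridge” mode, matching the behavior described above.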
Following is the relevant section of cloud-config for ipvlan:
Continue reading Macvlan and ipvlan in CoreOS
I did a presentation on CoreOS and Service Discovery at the Opensource Meetup group last week. Following are the related slides and demo recordings.
CoreOS Overview and Current Status
CoreOS HA Demo recording:
Scripts used are available here.
Service Discovery using etcd, Consul and Kubernetes
Consul Service Discovery Demo:
Following are the commands to start Consul Container, Registrator Container and 3 Container services.
docker run -d -p 8500:8500 -p 192.168.0.1:53:8600/udp -p 8400:8400 gliderlabs/consul-server -node myconsul -bootstrap
docker run -d -v /var/run/docker.sock:/tmp/docker.sock --net=host gliderlabs/registrator -internal consul://localhost:8500
docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http1" -e "SERVICE_80_CHECK_HTTP=/" --name=nginx1 nginx
docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http2" -e "SERVICE_80_CHECK_HTTP=/" --name=nginx2 nginx
docker run -ti smakam/myubuntu:v3 bash
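Once the Containers above are running, the registrations can be checked against Consul's standard HTTP and DNS interfaces (the service name `http` comes from the SERVICE_80_NAME variable above; the address and ports match the mappings in the first docker run command):

```shell
# List the registered instances of the "http" service via the
# standard Consul v1 catalog HTTP API (port 8500 as mapped above)
curl -s http://localhost:8500/v1/catalog/service/http

# Query the same service over Consul's DNS interface; the container's
# DNS port 8600 was mapped to 192.168.0.1:53 in the first command above
# (dig comes from the dnsutils/bind-utils package)
dig @192.168.0.1 http.service.consul SRV
```

Both queries should show the two nginx Containers registered by Registrator.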
For the last six months, I have been blogging very little since I was busy writing a book on CoreOS. The book is available for pre-ordering now from the publisher website, Amazon US and Amazon India. Discounts are available till Feb 29th from the publisher and the discount code is available here. The tentative publishing date is end of Feb/early March.
Why did I write the book, other than the fact that I can make some money out of it?
I started blogging around the later part of 2013. The first open source project that I tried was Opendaylight. It was very exciting to create virtual networks using Mininet and manage them with Openflow and Opendaylight. In early 2014, I focused on cloud technologies like AWS, Google Cloud and Openstack and learnt how public and private clouds revolutionized the way we consume resources. Around August 2014, I started spending time on Devops and learnt how tools like Ansible eased the management pain not just in server infrastructure but also in networking infrastructure. From December 2014, I started spending time on Docker and CoreOS. The first Docker version I used was 1.3.1 (now it's 1.9) and the first CoreOS version I used was 500.x (now it's 933.x). Even though Microservices and Containers existed before, Docker made Containers easier to use, which in turn made Microservices popular. CoreOS pretty much invented the Cluster OS or Container-optimized OS category and provided an OS and tools that ease Container deployment. By writing this book, I felt that I could connect the dots between the SDN, Cloud and Devops technologies that I focused on earlier and Docker and CoreOS.
What are some questions the book tries to answer?
What is a Container-Optimized OS?
Why are CoreOS and Containers needed?
How to deploy microservices and distributed applications?
How to set up, maintain and update a CoreOS cluster?
What role do the key CoreOS services etcd, fleet and systemd play?
How do Docker and CoreOS manage networking and storage?
What role do standards play in Containers?
Why are there multiple Container runtimes like Docker and Rkt?
How do Kubernetes and Swarm orchestrate Container deployment?
How are Openstack, AWS ECS, Google Container Engine and Tectonic related to CoreOS and Docker?
How to debug Containers and CoreOS?
What are the production considerations when deploying microservices?
What did I learn?
That there is a lot more to learn..
Open source is very powerful and it aids innovation and collaboration
Trying hands-on is more important than reading through software manuals
One can learn a lot from conference video recordings, slides and blogs
Writing a book needs a lot more patience than writing a blog
It is good to do something different part-time from the daily office job to keep life interesting
If you read the book, or even a few chapters of it, please feel free to provide your feedback.
Thanks to CoreOS, Docker and Container community for the amazing technologies that they have developed. Big thanks to Open source community for making software easily accessible to everyone.
One of the big announcements at Dockercon 2015 was the Open Container Project (OCP). OCP is an open source project under the Linux Foundation to define a common Container format. Container format, runtime and platform mean different things; there are many Container formats and runtimes, and multiple acronyms surrounding them. In this blog, I have tried to capture my understanding of these. I have not discussed traditional Linux containers in this blog. This is how I see the relationship between Container formats, Container runtimes and Container platforms.
Continue reading Containers – Format, Runtime and Platform
This blog is part of my ongoing series on Docker containers. In my previous blog, I covered LXC. When I tried out LXC, I realized that there are lots of similarities between Docker and LXC. I also saw a recent announcement about Rkt, which is another Container runtime technology. In this blog, I have tried to answer multiple questions that I had about these technologies, based on reading through the reference materials mentioned below. This is a pretty controversial topic as folks have strong opinions about these technologies; I have tried to keep it as neutral as possible.
How is Container management different from Container technologies?
I found this diagram from Docker blog very helpful in answering the above question.
The Linux kernel has support for Container technologies like namespaces, cgroups etc. Docker, LXC and Rkt use these kernel technologies to manage the lifecycle of the Container. Container management involves Container creation, deletion and modification, the image format and the tools around it. Before version 0.9, Docker used LXC to interact with the Linux kernel. From version 0.9 onwards, Docker interacts with the Linux kernel directly using the libcontainer library that they developed.
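The kernel primitives mentioned above can be seen directly with the util-linux `unshare` tool, independently of any Container runtime; this is illustrative only:

```shell
# Create new PID and mount namespaces and run ps inside them: only the
# forked shell and ps itself are visible, similar to what a process
# inside a Container sees. Needs root (or a kernel that allows
# unprivileged user namespaces, in which case add --user).
sudo unshare --pid --fork --mount-proc ps aux
```

Docker, LXC and Rkt build on exactly these namespace and cgroup primitives; what differs between them is the management layer on top.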
How is Docker different from LXC?
Continue reading Containers – Docker, LXC and Rkt