Category Archives: CoreOS

Kubernetes CRI and Minikube

Kubernetes CRI (Container Runtime Interface) was introduced in experimental mode in the Kubernetes 1.5 release. CRI introduces a common Container runtime layer that allows the Kubernetes orchestrator to work with multiple Container runtimes like Docker, Rkt, Runc, Hypernetes etc. CRI makes it easy to plug a new Container runtime into Kubernetes. The Minikube project simplifies Kubernetes installation for development and testing purposes. Minikube runs the Kubernetes master and worker components in a single VM, which makes it easy for developers and users of Kubernetes to try out Kubernetes. In this blog, I will cover the basics of Minikube usage, an overview of CRI and steps to try out CRI with Minikube.
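As a preview of the CRI part, selecting an alternate runtime with Minikube comes down to a couple of start flags; the exact flag values depend on the Minikube version, so treat this as a sketch rather than the definitive invocation:

# Start Minikube with the rkt runtime and CNI networking
# (flags assumed from Minikube's --container-runtime / --network-plugin options)
minikube start --container-runtime=rkt --network-plugin=cni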

Minikube

Kubernetes is composed of multiple components and beginners normally get overwhelmed by the installation steps. It also helps to have a lightweight Kubernetes environment for development and testing purposes. Minikube packages all Kubernetes components in a single VM that runs on the local laptop; both master and worker functionality are combined in that single VM.
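As a minimal sketch of the day-to-day workflow (assuming Minikube and kubectl are already installed on the laptop):

minikube start          # boots the single-VM cluster
kubectl get nodes       # the lone node acts as both master and worker
minikube dashboard      # opens the Kubernetes dashboard in a browser
minikube stop           # shuts the VM down without deleting the cluster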

Following are some major features present in Minikube:

Continue reading Kubernetes CRI and Minikube

Looking inside Container Images

This blog is a continuation of my previous blog on Container standards. In this blog, we will look inside a Container image to understand the filesystem and manifest files that describe the Container. We will cover Container images in Docker, APPC and OCI formats. As mentioned in the previous blog, these Container image formats will converge into the OCI format in the long run.

I have picked two Containers for this blog: “nginx”, which is a standard webserver, and “smakam/hellocounter”, which is a Python web application.

Docker format:

To see Container content in Docker format, do the following:

docker save nginx > nginx.tar
tar -xvf nginx.tar

Following files are present:

  • manifest.json – Describes the filesystem layers and the name of the json file that has the Container properties (see the sketch after this list).
  • <id>.json – Container properties.
  • <layer directory> – Each “layerid” directory contains a json file describing the layer properties and the filesystem associated with that layer. Docker stores Container images as layers to optimize storage space by reusing layers across images.
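For illustration, the manifest.json produced by docker save typically looks like the sketch below; the config file name and layer IDs are abbreviated placeholders, and the exact digests will differ for your image:

[
  {
    "Config": "<id>.json",
    "RepoTags": ["nginx:latest"],
    "Layers": [
      "<layerid1>/layer.tar",
      "<layerid2>/layer.tar",
      "<layerid3>/layer.tar"
    ]
  }
]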

Following are some important Container properties that we can see in the JSON file:

Continue reading Looking inside Container Images

Container Standards

In this blog, I will cover some of the standardization efforts happening in the Containers area. I will cover some history, the current status and also mention what the future looks like. In the next blog, we will look inside ACI and OCI Container images.

Container Standards

A lot of the development in the Container area is done as open source projects. That still does not automatically mean that these projects will become standards. Following are the areas where Container standardization is important:

  • Container image format – Describes how an application is packaged into a Container. The application can be an executable from any programming language. As you would know, Containers package an application along with all its dependencies.
  • Container runtime – Describes the environment (namespaces, cgroups etc.) necessary to run the Container and the APIs that the Container runtime should support (see the sketch after this list).
  • Image signing – Describes how to create Container image digests and sign them so that Container images can be trusted.
  • Image discovery – Describes alternate approaches to discovering Container images other than using a registry.
  • Container Networking – This is a pretty complex area and it describes ways to network Containers on the same host and across hosts. There are different implementations based on the use-case.
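To make the runtime item concrete: the OCI runtime specification captures that environment in a config.json file. The sketch below is a heavily trimmed version of what a tool like runc spec generates; treat the exact fields as illustrative rather than the full specification:

{
  "ociVersion": "1.0.0",
  "process": {
    "user": { "uid": 0, "gid": 0 },
    "args": ["sh"],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  },
  "hostname": "demo"
}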

Having common Container standards would allow things like this:

Continue reading Container Standards

Macvlan and ipvlan in CoreOS

This is a continuation of my previous blog on macvlan and ipvlan Linux network drivers. In this blog, I will cover the usage of the macvlan and ipvlan network plugins with the CoreOS Rkt Container runtime and CNI (Container Network Interface).

Rkt and CNI

Rkt is another Container runtime similar to Docker. CNI is a Container networking standard proposed by CoreOS and a few other companies. CNI exposes standard APIs that network plugins need to implement. CNI supports plugins like ptp, bridge, macvlan, ipvlan and flannel. IPAM can be managed by a second-level plugin that the CNI plugin calls.

Pre-requisites

We can use either a multi-node CoreOS cluster or a single-node CoreOS machine for the macvlan example used in this blog. I have created a three-node CoreOS cluster using Vagrant. Following is the cloud-config user-data that I used.
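If you are starting from scratch, the coreos-vagrant repository is one way to bring up such a cluster; a rough sketch of the steps is below (the $num_instances setting in config.rb controls the node count, and the repository layout may have changed since this was written):

git clone https://github.com/coreos/coreos-vagrant
cd coreos-vagrant
cp config.rb.sample config.rb     # set $num_instances=3 for a three-node cluster
cp user-data.sample user-data     # replace with the cloud-config shown below
vagrant up
vagrant ssh core-01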

macvlan and ipvlan config

Following is the relevant section of Cloud-config for macvlan:

  - path: "/etc/rkt/net.d/20-lannet.conf"
    permissions: "0644"
    owner: "root"
    content: |
      {
        "name": "lannet",
        "type": "macvlan",
        "master": "eth0",
        "ipam": {
          "type": "host-local",
          "subnet": "20.1.1.0/24"
        }
      }

In the above cloud-config, we specify the properties of the macvlan plugin, including the parent interface over which the macvlan interface will reside. We use the IPAM type “host-local” here, which means the IP address will be assigned from within the range “20.1.1.0/24” specified in the configuration. The macvlan mode defaults to “bridge”.
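With that configuration written to /etc/rkt/net.d, a Container can be attached to the network by name; something like the following should work (the nginx image is just a convenient example, and --insecure-options=image is only needed because the Docker image is unsigned):

sudo rkt run --net=lannet --insecure-options=image docker://nginx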

Following is the relevant section of cloud-config for ipvlan:

Continue reading Macvlan and ipvlan in CoreOS

Openstack Deployment using Containers

I recently saw the Openstack self-healing demo from the CoreOS team using Tectonic (the Stackanetes project) and it made me feel that the boundary between Containers and VMs is blurring. In this blog, I discuss the use case of deploying Openstack using Containers.

We typically think of Openstack as a VM orchestration tool. Openstack is composed of numerous services, and deploying Openstack as one monolithic blob is pretty complex and difficult to maintain. The demo showed how Containers simplify Openstack deployment. This is a great example of using a Microservices architecture to simplify infrastructure deployment.

The following diagram shows the Openstack deployment model using Containers and how the Openstack service containers deploy user VMs. The user VMs deployed using Openstack can run Containers as well.

[Figure: Openstack deployment using Containers – vm_container1.PNG]

Following are some notes on the architecture:

  • Openstack services like Nova, Heat and Horizon are containerized as Docker Containers using the Openstack Kolla project. Some Openstack services like Nova are composed of multiple Containers.
  • Infrastructure components like Ceph, Openvswitch and Mongodb are also deployed as Containers.
  • For Container deployment, Kolla natively uses Ansible; Kubernetes can also be used for orchestration (a rough deployment flow is sketched after this list).
  • Packaging Openstack services as Containers gives all the build, ship and deploy advantages of Containers.
  • Using an orchestration solution like Kubernetes gives all the resiliency and deployment advantages for Openstack services.
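As a rough sketch of what the Ansible-driven path looks like with kolla-ansible (assuming the inventory file and globals.yml have already been prepared; subcommand names may differ slightly across releases):

kolla-ansible -i ./multinode bootstrap-servers   # prepare the target hosts
kolla-ansible -i ./multinode prechecks           # verify hosts meet deployment requirements
kolla-ansible -i ./multinode deploy              # pull images and start the service containers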

This work also shows how Containers and VMs can work closely with each other for a lot of use cases. There are other Openstack projects like Magnum and Kuryr where there is an intersection between Containers and VMs. The Magnum project deals with Container orchestration using Openstack and the Kuryr project deals with Container networking using Openstack Neutron.

References:

Opensource Meetup Presentation

I did a presentation on CoreOS and Service Discovery at the Opensource Meetup group last week. Following are the related slides and demo recordings.

CoreOS Overview and Current Status

Slides:

CoreOS HA Demo recording:

Scripts used are available here.

Service Discovery using etcd, Consul and Kubernetes

Slides:

Consul Service Discovery Demo:

Following are the commands to start Consul Container, Registrator Container and 3 Container services.

docker run -d -p 8500:8500 -p 192.168.0.1:53:8600/udp -p 8400:8400 gliderlabs/consul-server -node myconsul -bootstrap
docker run -d -v /var/run/docker.sock:/tmp/docker.sock --net=host gliderlabs/registrator -internal consul://localhost:8500

docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http1" -e "SERVICE_80_CHECK_HTTP=true" -e "SERVICE_80_CHECK_HTTP=/" --name=nginx1 nginx
docker run -d -p :80 -e "SERVICE_80_NAME=http" -e "SERVICE_80_ID=http2" -e "SERVICE_80_CHECK_HTTP=true" -e "SERVICE_80_CHECK_HTTP=/" --name=nginx2 nginx
docker run -ti smakam/myubuntu:v3 bash
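To check that Registrator actually registered the two nginx instances with Consul, the service can be queried over Consul's HTTP API or its DNS interface (the DNS query below assumes the 192.168.0.1:53 mapping used when starting the Consul container above):

curl http://localhost:8500/v1/catalog/service/http   # lists both http1 and http2 instances
dig @192.168.0.1 http.service.consul                 # resolves to the registered service addresses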

Baremetal cloud using Packet

Typical Opensource demo applications come packaged as a Vagrant application which starts a bunch of VMs and does automatic provisioning. I have a Windows machine with Virtualbox and VMWare player installed. Since Virtualbox does not support nested virtualization with 64-bit VMs (more details can be found in my previous blogs on Virtualbox and VMWare player), I use VMWare player to try out demo applications that need 64-bit VMs. The demo applications typically run on Linux, so running them on Windows with Virtualbox is ruled out. I was recently trying out the Mantl project for deploying distributed microservices and found that it was very slow to run in VMWare player with nested virtualization. I tried to run the application in AWS and found that AWS does not support nested virtualization (more details can be found here). Then I tried Google cloud. Even though Google cloud supports nested virtualization, hardware virtualization is disabled on the guest VMs, and this prevents running 64-bit VMs inside Google cloud VMs.

After I ran out of these options, I stumbled upon the possibility of using baremetal cloud. I used baremetal cloud from Packet and it worked great for the use case mentioned above. Though this is not a typical use case, I was very happy with the performance and the possibilities that it provides. In this blog, I will share the use cases for baremetal cloud and my experiences with using the Packet service.

Bare metal cloud Use case

Typical cloud providers like Amazon, Google, Digitalocean and Microsoft rent out VMs as part of their compute offering. These VMs run on top of a hypervisor. Though the user is guaranteed a specific level of performance, these VMs share resources with other VMs running on the same host machine. With bare metal cloud, the cloud provider hosts machines that the user can rent and that are not shared with anyone. Cloud providers offer different bare metal configurations, the user can choose based on their performance needs, and the pricing is based on the performance provided by the bare metal server. Following are some advantages that bare metal cloud provides:

Continue reading Baremetal cloud using Packet