Container Standards

In this blog, I will cover some of the standardization efforts happening in the Containers area. I will cover some history and current status, and also mention what the future looks like. In the next blog, we will look inside ACI and OCI Container images.

Container Standards

A lot of development in the Container area happens as open source projects. That still does not automatically mean that these projects will become standards. Following are the areas where Container standardization is important:

  • Container image format – Describes how an application is packaged into a Container. The application can be an executable from any programming language. As you would know, Containers package an application along with all its dependencies.
  • Container runtime – Describes the environment (namespaces, cgroups, etc.) necessary to run the Container and the APIs that the Container runtime should support.
  • Image signing – Describes how to create Container image digests and sign them so that Container images can be trusted.
  • Image discovery – Describes alternate approaches to discover Container images other than using a registry.
  • Container networking – This is a pretty complex area, and it describes ways to network Containers in the same host and across hosts. There are different implementations based on the use case.

Having common Container standards would enable the following:

  • Container images can easily be moved across different registries (Docker Hub, Quay, AWS registry).
  • Interoperable tools for building and verifying Container images can be widely developed.
  • Container runtimes can interoperate. For example, Docker can run an Rkt Container image and Rkt can run a Docker Container image.
  • The Container runtime sets up Container networking, and having a common standard would allow different networking vendors to easily integrate their networking solutions with Containers and to interoperate with each other.

Container image standards

APPC

The APPC specification, started in December 2014, is primarily driven by CoreOS along with a few other community members like Google and Red Hat. Rkt, Kurma, and Jetpack are examples of Container runtimes implementing APPC. APPC defines Container image format, runtime, signing, and discovery.

APPC defines the ACI (Application Container Image) format for Container images. APPC supports tools like “acbuild” to build a Container image, “actool” to validate a Container image, and “docker2aci” to convert a Docker image to an ACI image.

ACI images have a rootfs that holds the Container filesystem and a manifest file describing the Container image. The manifest file has the following items (an illustrative manifest sketch follows the list):

  • Container name, appc version, OS and architecture.
  • App details – exec command, user/group, working directory, event handlers, environment, isolators, mountpoints, exposed ports
  • Dependencies – dependent application images
  • Annotations – metadata
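
A minimal, illustrative ACI image manifest might look like the sketch below. The image name, exec path, labels, and annotation values are hypothetical placeholders used only to show the shape of the manifest:

{
  "acKind": "ImageManifest",
  "acVersion": "0.8.4",
  "name": "example.com/hello",
  "labels": [
    { "name": "version", "value": "1.0.0" },
    { "name": "os", "value": "linux" },
    { "name": "arch", "value": "amd64" }
  ],
  "app": {
    "exec": [ "/usr/bin/hello" ],
    "user": "0",
    "group": "0",
    "workingDirectory": "/",
    "environment": [
      { "name": "LOG_LEVEL", "value": "info" }
    ],
    "ports": [
      { "name": "www", "protocol": "tcp", "port": 8080 }
    ]
  },
  "dependencies": [
    { "imageName": "example.com/base" }
  ],
  "annotations": [
    { "name": "authors", "value": "Example Author" }
  ]
}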

The App Container Executor (ACE) defines how to run a Container image, and it covers the following (a sketch of a Pod manifest follows the list):
  • UUID setup: This is a unique ID for the Pod that contains multiple containers. The UUID is registered with the metadata service, which allows containers to find each other.
  • Filesystem setup: A filesystem is created in its own namespace.
  • Volume setup: These are files or directories to be mounted into the container.
  • Networking: This specifies Container networking to the host and to other Containers.
  • Isolators: These control the CPU and memory limits for the Container.
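
As a rough sketch of what the ACE consumes, here is an illustrative Pod manifest tying together an app, a volume mount, and a memory isolator. The names, image ID, paths, and limits are placeholders, and the exact schema may differ across appc versions:

{
  "acVersion": "0.8.4",
  "acKind": "PodManifest",
  "apps": [
    {
      "name": "hello",
      "image": {
        "name": "example.com/hello",
        "id": "sha512-..."
      },
      "mounts": [
        { "volume": "data", "path": "/var/lib/hello" }
      ]
    }
  ],
  "volumes": [
    { "name": "data", "kind": "host", "source": "/srv/hello-data" }
  ],
  "isolators": [
    {
      "name": "resource/memory",
      "value": { "request": "128M", "limit": "256M" }
    }
  ]
}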

APPC supports simple image discovery by specifying the complete URL over HTTP or by using an HTML meta tag.

OCI

OCI is the Open Container Initiative, an open source project started in April 2015 by Docker; it has members from all major companies, including Docker and CoreOS. Runc is an implementation of the OCI runtime specification. OCI started off with a Container runtime standard, but it is now also developing an image format standard.

The OCI image format has the following sections (an illustrative manifest sketch follows the list):

  • Config: This specifies exec command, user/group, working directory, event handlers, environment, isolators, mountpoints, exposed ports
  • Manifest: This has version, type, digest, layers, annotations
  • File list: This has the list of files in the Container
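
As a rough illustration, the OCI image manifest is a small JSON document that points to the config and the layers by digest. The digests, sizes, and annotation key below are placeholders; the media type strings are the ones defined by the OCI image format specification:

{
  "schemaVersion": 2,
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:...",
    "size": 1470
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:...",
      "size": 675598
    }
  ],
  "annotations": {
    "com.example.key": "value"
  }
}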

The OCI runtime specification covers container lifecycle operations (query state, create, start, kill, delete) and setting up the environment to run the Container, including namespaces, cgroups, volumes, and networking.
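
A trimmed-down sketch of an OCI runtime config.json is shown below to illustrate how the process, root filesystem, namespaces, and resource limits are declared. The command, paths, hostname, and memory limit are placeholder values:

{
  "ociVersion": "1.0.0",
  "process": {
    "terminal": false,
    "user": { "uid": 0, "gid": 0 },
    "args": [ "/bin/sh" ],
    "env": [ "PATH=/usr/sbin:/usr/bin:/sbin:/bin" ],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  },
  "hostname": "hello",
  "mounts": [
    { "destination": "/proc", "type": "proc", "source": "proc" }
  ],
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "network" },
      { "type": "mount" }
    ],
    "resources": {
      "memory": { "limit": 268435456 }
    }
  }
}

A runtime such as Runc reads a config.json like this from the Container bundle directory when creating and starting the Container.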

Container image standards current status

Initially, APPC was developed by CoreOS and a few other community members. Then OCI was developed by Docker, CoreOS, and the majority of the Container community. It initially looked like there was a lot of overlap between the two. The initial focus of OCI was on the Container runtime, so it looked like APPC would continue to define other Container aspects like image format and discovery. Based on the recent OCI image format announcement, OCI is adding a common image format combining the Docker and APPC image formats. In the future, it looks like APPC and OCI will converge, and OCI will cover all Container image related standards including image format, runtime, discovery, digest, and signing. This is certainly a very good thing for the Container industry.

Container networking standards

CNI

CNI is the Container Network Interface, an open source project developed by CoreOS and Google along with a few other community members to provide a networking facility for Containers as a pluggable and extensible mechanism. CoreOS’s Container runtime (Rkt) and Kubernetes use CNI to establish Container networking.

The following picture shows the relationship between the Container runtime, CNI, and Network plugins:

[Figure: Container runtime, CNI, and network plugins]

Following are some details on CNI:

  • The CNI interface calls the API of the CNI plugin to set up Container
    networking.
  • The CNI plugin is responsible for creating the network interface to the
    container.
  • The CNI plugin calls the IPAM plugin to set up the IP address for the
    container.
  • The CNI plugin needs to implement an API for container network creation
    and deletion.
  • The plugin type and parameters are specified as a JSON file that the
    Container runtime reads and sets up.
  • Available CNI plugins are bridge, macvlan, ipvlan, and ptp. Available IPAM plugins are host-local and DHCP. CNI plugins and IPAM plugins can be used in any combination.
  • External CNI plugins such as Flannel and Weave are also supported. External
    plugins reuse the bridge plugin to set up the final networking.

Following are sample CNI configuration files that are used by CNI to set up Container networking:

bridge CNI config with host-local IPAM:

This sets up a bridge network with host-local IPAM.

{
  "name": "mynet",
  "type": "bridge",
  "bridge": "mynet0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
} 

flannel CNI config:

This sets up a Flannel network. IP address management is done by Flannel itself.

{
  "name": "flannelnet",
  "type": "flannel"
}

ipvlan CNI config with DHCP IPAM:

This sets up an ipvlan network with IP addresses allotted by DHCP.

{
  "name": "lan",
  "type": "ipvlan",
  "master": "eth0",
  "ipam": {
    "type": "dhcp"
  }
}

Libnetwork

Libnetwork is an open source project started by Docker and a few other community members to deliver a robust Container Network Model (CNM). Following are some of its objectives:

  • Keep networking as a library separate from the Container runtime.
  • Provide Container connectivity in the same host as well as across hosts.
  • Networking implementation will be done as a plugin implemented
    by drivers. The plugin mechanism is provided to add new third-party
    drivers easily.
  • Control IP address assignment for the Containers using local IPAM drivers
    and plugins.
  • Supported local drivers are bridge, overlay, macvlan, and ipvlan. Supported remote drivers include Weave, Calico, etc.

The following picture shows the relationship between the Container runtime, Libnetwork, and Network plugins:

[Figure: Container runtime, Libnetwork, and network plugins]

There are three primary components in Libnetwork:

  • Sandbox: All networking functionality is encapsulated in a sandbox. This can be implemented using a network namespace or a similar mechanism.
  • Endpoint: This attaches a sandbox to a network.
  • Network: Multiple endpoints in the same network can talk to each other.

Container networking standards current status

Both Libnetwork and CNI have a similar objective of creating an extensible Container networking model. It looks like Libnetwork and CNI will both exist for different use cases. Since the internal designs of the two approaches are different, it will be difficult for them to converge. Docker uses Libnetwork and CoreOS uses CNI. Kubernetes has decided to use CNI, and they have explained their reasons in this blog. Networking plugins like Weave and Calico work with both CNI and Libnetwork, while some plugins like Flannel work only with the CNI standard.

Summary

Some folks feel that standards stifle innovation and differentiation. I feel that as long as standards leave enough scope for implementation differences and different use cases, they actually help in reusing common technologies, which in turn enables faster innovation. Standards need to be supported by running implementations around the same time, which encourages faster adoption. It's a great thing that Container standards are taking shape with working implementations.
