This link captures the slides on Docker release 1.11 that I presented at the Docker Meetup, Bangalore on June 4, 2016.
Docker macvlan and ipvlan network plugins
This is a continuation of my previous blog on macvlan and ipvlan Linux network drivers. Docker has added support for macvlan and ipvlan drivers, and they are currently in experimental mode as of Docker release 1.11.
Example used in this blog
In this blog, we will use the Docker macvlan and ipvlan network plugins for Container communication across hosts. To illustrate macvlan and ipvlan concepts and usage, I have created the following example.
Following are details of the setup:
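The full setup details are not reproduced in this excerpt. As a rough sketch of the workflow with the experimental macvlan driver in Docker 1.11, the following commands create a macvlan network on top of a host interface and attach a Container to it; the subnet, gateway, parent interface and Container names here are illustrative assumptions, not the exact values from this setup.

# Create a macvlan network on top of the host interface eth1 (parent interface is an assumption)
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth1 macvlan_net

# Start a Container attached to the macvlan network
docker run -itd --name c1 --net=macvlan_net busybox sh

Repeating the same network creation on a second host with the same subnet and parent interface lets Containers on both hosts reach each other directly over the underlay.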
Macvlan and ipvlan in CoreOS
This is a continuation of my previous blog on macvlan and ipvlan Linux network drivers. In this blog, I will cover usage of the macvlan and ipvlan network plugins with the CoreOS Rkt Container runtime and CNI (Container Network Interface).
Rkt and CNI
Rkt is another Container runtime similar to Docker. CNI is a Container networking standard proposed by CoreOS and a few other companies. CNI exposes standard APIs that network plugins need to implement. CNI supports plugins like ptp, bridge, macvlan, ipvlan and flannel. IPAM can be managed by a second-level plugin that the CNI plugin calls.
Pre-requisites
We can use either a multi-node CoreOS cluster or a single-node CoreOS instance for the macvlan example used in this blog. I have created a three-node CoreOS cluster using Vagrant. Following is the cloud-config user-data that I used.
macvlan and ipvlan config
Following is the relevant section of the cloud-config for macvlan:
- path: "/etc/rkt/net.d/20-lannet.conf" permissions: "0644" owner: "root" content: | { "name": "lannet", "type": "macvlan", "master": "eth0", "ipam": { "type": "host-local", "subnet": "20.1.1.0/24" } }
In the above cloud-config, we specify the properties of the macvlan plugin, including the parent interface over which the macvlan interface will reside. We use the IPAM type "host-local" here, which means IP addresses will be assigned from the range "20.1.1.0/24" specified in the configuration. The macvlan type defaults to "bridge".
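To exercise this network, a pod can be started with rkt and attached to the "lannet" network by name. The following is a minimal sketch; the busybox image and shell are illustrative choices, not taken from the original setup.

# Start an interactive busybox pod attached to the macvlan network defined above
sudo rkt run --net=lannet --insecure-options=image --interactive docker://busybox --exec /bin/sh

Inside the pod, the interface should carry an address from the 20.1.1.0/24 range handed out by the host-local IPAM plugin.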
Following is the relevant section of cloud-config for ipvlan:
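The ipvlan configuration itself is not reproduced in this excerpt. The following is a sketch of what it would look like, mirroring the macvlan entry above but using the CNI ipvlan plugin; the network name, subnet and L2 mode are assumptions.

- path: "/etc/rkt/net.d/20-ipvlannet.conf"
  permissions: "0644"
  owner: "root"
  content: |
    {
      "name": "ipvlannet",
      "type": "ipvlan",
      "master": "eth0",
      "mode": "l2",
      "ipam": {
        "type": "host-local",
        "subnet": "30.1.1.0/24"
      }
    }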
Macvlan and IPvlan basics
Macvlan and ipvlan are Linux network drivers that expose underlay or host interfaces directly to VMs or Containers running in the host. In this blog, I will cover the basics of macvlan and ipvlan, compare them to Linux bridge and sub-interfaces, and show how to create these interfaces on a Linux system. In the next set of blogs, I will cover how macvlan and ipvlan interfaces are used in Docker and CoreOS.
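As a preview of the creation steps covered later, here is a minimal sketch using iproute2; the parent interface eth0, the sub-interface names and the address are assumptions.

# Create a macvlan sub-interface on eth0 in the default bridge mode
sudo ip link add link eth0 name macvlan1 type macvlan mode bridge

# Create an ipvlan sub-interface on eth0 in L2 mode
sudo ip link add link eth0 name ipvlan1 type ipvlan mode l2

# Assign an address and bring the macvlan interface up
sudo ip addr add 192.168.1.10/24 dev macvlan1
sudo ip link set macvlan1 up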
VM and Container networking
When running a baremetal server, host networking can be straightforward: a few Ethernet interfaces and a default gateway provide external connectivity. When we run multiple VMs in a host, we need to provide connectivity between VMs within the host and across hosts. On average, the number of VMs in a single host does not exceed 15-20, whereas the number of Containers in a single host can easily exceed 100, so a more sophisticated mechanism is needed to interconnect Containers. Broadly, there are two approaches for Containers or VMs to communicate with each other. In the underlay network approach, VMs or Containers are directly exposed to the host network; bridge, macvlan and ipvlan network drivers are examples of this approach. In the overlay network approach, there is an additional level of encapsulation, like VXLAN or NVGRE, between the Container/VM network and the underlay network.
Linux Bridge
Linux bridge acts like a regular hardware switch with MAC learning and also supports protocols like STP for loop prevention. In the Linux bridge implementation, VMs or Containers connect to the bridge, and the bridge connects to the outside world. For external connectivity, we would need to use NAT. The following picture shows two Containers connected to a Linux bridge, with the ethx interface providing external connectivity.
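As a rough illustration of this model, the following sketch creates a bridge and attaches one end of a veth pair to it with iproute2; the interface names are assumptions, and moving the other veth end into a Container's network namespace is omitted for brevity.

# Create a Linux bridge and bring it up
sudo ip link add name br0 type bridge
sudo ip link set br0 up

# Create a veth pair; one end would be moved into the Container's network namespace
sudo ip link add veth0 type veth peer name veth1

# Attach the host-side end to the bridge and bring it up
sudo ip link set veth0 master br0
sudo ip link set veth0 up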