Macvlan and ipvlan in CoreOS

This is a continuation of my previous blog on the macvlan and ipvlan Linux network drivers. In this blog, I will cover the usage of the macvlan and ipvlan network plugins with the CoreOS Rkt Container runtime and CNI (Container Network Interface).

Rkt and CNI

Rkt is another Container runtime, similar to Docker. CNI is a Container networking standard proposed by CoreOS and a few other companies. CNI exposes standard APIs that network plugins need to implement, and it supports plugins like ptp, bridge, macvlan, ipvlan and flannel. IPAM can be managed by a second-level plugin that the CNI plugin calls.
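To make the interface concrete, a minimal CNI network config file names the network, selects the plugin with “type”, and delegates address management to an IPAM plugin. The values below are hypothetical; Rkt reads such files from /etc/rkt/net.d:

{
  "name": "mynet",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}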

Pre-requisites

We can use either a multi-node CoreOS cluster or a single CoreOS node for the macvlan example in this blog. I have created a three-node CoreOS cluster using Vagrant, as sketched below; the relevant cloud-config user-data that I used follows in the next section.
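Assuming the standard coreos-vagrant repository, the cluster size is set in config.rb; the following is a hypothetical sketch of the steps, not the exact commands I ran:

$ git clone https://github.com/coreos/coreos-vagrant
$ cd coreos-vagrant
$ echo '$num_instances = 3' >> config.rb
$ vagrant up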

macvlan and ipvlan config

Following is the relevant section of the cloud-config (under the write_files key) for macvlan:

- path: "/etc/rkt/net.d/20-lannet.conf"
    permissions: "0644"
    owner: "root"
    content: |
      {
        "name": "lannet",
        "type": "macvlan",
        "master": "eth0",
        "ipam": {
          "type": "host-local",
          "subnet": "20.1.1.0/24"
        }
      }

In the above cloud-config, we specify the properties of the macvlan plugin, including the parent interface (“master”) over which the macvlan interface will reside. We use the IPAM type “host-local” here, which means IP addresses will be assigned from within the range “20.1.1.0/24” specified in the configuration. Since no mode is specified, the macvlan type defaults to “bridge”.
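The mode can also be pinned explicitly with a “mode” field rather than relying on the default; a minimal sketch (the other valid CNI macvlan modes are “private”, “vepa” and “passthru”):

{
  "name": "lannet",
  "type": "macvlan",
  "master": "eth0",
  "mode": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "20.1.1.0/24"
  }
}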

Following is the relevant section of cloud-config for ipvlan:

 - path: "/etc/rkt/net.d/30-ipnet.conf"
    permissions: "0644"
    owner: "root"
    content: |
      {
        "name": "ipnet",
        "type": "ipvlan",
        "master": "eth1",
        "ipam": {
          "type": "host-local",
          "subnet": "30.1.1.0/24"
        }
      }

Since no mode is specified here either, the ipvlan type defaults to “l2”. We have used the IP address range “30.1.1.0/24” for ipvlan.
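As with macvlan, the mode can be set explicitly. For example, switching this network to ipvlan “l3” mode, where packets are routed between sub-interfaces instead of switched, would just mean adding a “mode” field; a sketch:

{
  "name": "ipnet",
  "type": "ipvlan",
  "master": "eth1",
  "mode": "l3",
  "ipam": {
    "type": "host-local",
    "subnet": "30.1.1.0/24"
  }
}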

Hands-on Example

In this example, we will create a three-node CoreOS cluster, create two Rkt Containers with the macvlan and ipvlan network plugins, and illustrate connectivity between the two Containers.

CoreOS node details:

The following output shows the CoreOS and Rkt versions used here:

$ cat /etc/os-release
NAME=CoreOS
ID=coreos
VERSION=1053.2.0
$ rkt version
rkt Version: 1.6.0
appc Version: 0.8.1

The following output shows the three-node CoreOS cluster:

$ etcdctl member list
2a7b42f3fbe15ff9: name=c878f3c8bdd24be7bc6b0ad277a19b1e peerURLs=http://172.17.8.101:2380 clientURLs=http://172.17.8.101:2379 isLeader=false
4d126ada92aea640: name=83465381dc5f4556a63c050254e58b0c peerURLs=http://172.17.8.102:2380 clientURLs=http://172.17.8.102:2379 isLeader=true
e683a87a42c93335: name=afdf9ce5bcff41dab714d9365c1fe6a7 peerURLs=http://172.17.8.103:2380 clientURLs=http://172.17.8.103:2379 isLeader=false

The following command creates two Rkt Containers on one of the CoreOS nodes, with each Container part of the macvlan, ipvlan and flannel networks. I have specified multiple networks to illustrate the fact that one Container can be part of multiple networks.

sudo rkt run --interactive --net=lannet,ipnet,flannelnet --hostname=macvlan1 --insecure-options=image docker://busybox
sudo rkt run --interactive --net=lannet,ipnet,flannelnet --hostname=macvlan2 --insecure-options=image docker://busybox

flannelnet will use “10-flannelnet.conf”, lannet will use “20-lannet.conf”, and ipnet will use “30-ipnet.conf”; rkt matches each --net name against the “name” field inside the config files in /etc/rkt/net.d.
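Listing /etc/rkt/net.d on the node should therefore show something like the following (a sketch based on the three file names above):

$ ls /etc/rkt/net.d/
10-flannelnet.conf  20-lannet.conf  30-ipnet.conf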

The following output shows a Container being created with multiple networks:

$ sudo rkt run --interactive --net=lannet,ipnet,flannelnet --hostname=macvlan1 --insecure-options=image docker://busybox
image: using image from file /usr/lib64/rkt/stage1-images/stage1-coreos.aci
image: using image from local store for url docker://busybox
networking: loading networks from /etc/rkt/net.d
networking: loading network flannelnet with type flannel
networking: loading network lannet with type macvlan
networking: loading network ipnet with type ipvlan
networking: loading network default-restricted with type ptp

The following truncated ifconfig output shows the interfaces inside the Container: eth0 belongs to flannel, eth1 to macvlan, eth2 to ipvlan, and eth3 to ptp. “ptp” is the default network type in CNI, seen above as the “default-restricted” network.

# ifconfig
eth0      Link encap:Ethernet  HWaddr C6:20:D7:F5:18:43
          inet addr:10.1.60.4  Bcast:0.0.0.0  Mask:255.255.255.0

eth1      Link encap:Ethernet  HWaddr EA:71:0C:BA:68:5F
          inet addr:20.1.1.4  Bcast:0.0.0.0  Mask:255.255.255.0

eth2      Link encap:Ethernet  HWaddr 08:00:27:A2:BF:F9
          inet addr:30.1.1.4  Bcast:0.0.0.0  Mask:255.255.255.0

eth3      Link encap:Ethernet  HWaddr E6:AC:E9:72:57:23
          inet addr:172.16.28.4  Bcast:0.0.0.0  Mask:255.255.255.0

The following output shows inter-Container reachability over all four networks:

# ping -c1 10.1.60.5
PING 10.1.60.5 (10.1.60.5): 56 data bytes
64 bytes from 10.1.60.5: seq=0 ttl=64 time=0.148 ms

--- 10.1.60.5 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.148/0.148/0.148 ms
/ # ping -c1 20.1.1.5
PING 20.1.1.5 (20.1.1.5): 56 data bytes
64 bytes from 20.1.1.5: seq=0 ttl=64 time=0.277 ms

--- 20.1.1.5 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.277/0.277/0.277 ms
/ # ping -c1 30.1.1.5
PING 30.1.1.5 (30.1.1.5): 56 data bytes
64 bytes from 30.1.1.5: seq=0 ttl=64 time=1.104 ms

--- 30.1.1.5 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.104/1.104/1.104 ms
/ # ping -c1 172.16.28.5
PING 172.16.28.5 (172.16.28.5): 56 data bytes
64 bytes from 172.16.28.5: seq=0 ttl=63 time=0.229 ms

--- 172.16.28.5 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.229/0.229/0.229 ms

I had issues getting Container connectivity working across hosts with the macvlan and ipvlan drivers.
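One likely culprit in a Vagrant/VirtualBox setup, though this is an assumption I have not verified, is that macvlan presents extra MAC addresses on the parent NIC, and VirtualBox drops frames for unknown MACs unless the NIC’s promiscuous mode is set to allow-all. A hypothetical fix, run with the VM powered off (the VM name here is made up; check yours with “VBoxManage list vms”, and adjust the NIC index to match the parent interface):

$ VBoxManage modifyvm "coreos-01" --nicpromisc2 allow-all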
