Docker macvlan and ipvlan network plugins

This is a continuation of my previous blog on macvlan and ipvlan Linux network drivers. Docker has added support for the macvlan and ipvlan drivers, and it is currently in experimental mode as of Docker release 1.11.

Example used in this blog

In this blog, we will use the Docker macvlan and ipvlan network plugins for Container communication across hosts. To illustrate macvlan and ipvlan concepts and usage, I have created the following example.

[Figure: Two Docker hosts, each running Containers in the vlan70 and vlan80 macvlan/ipvlan networks, connected over the underlay network]

Following are details of the setup:

  • First, we need to create two Docker hosts with experimental Docker installed. The experimental Docker has support for macvlan and ipvlan. To create the experimental boot2docker image, please use the procedure here.
  • I have used a VirtualBox-based environment. The macvlan network is created on top of a host-only network adapter in VirtualBox. Promiscuous mode needs to be enabled on the VirtualBox adapter; this allows Container communication across hosts. (A docker-machine sketch for creating such hosts follows this list.)
  • There are four Containers in each host. Two Containers are in the vlan70 network and the other two Containers are in the vlan80 network.
  • We will use both the macvlan and ipvlan drivers and illustrate Container network connectivity in the same host and across hosts.
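For reference, here is a minimal docker-machine sketch for creating such a host with the host-only adapter in promiscuous mode. The boot2docker image URL, host-only CIDR and NIC type below are illustrative assumptions; substitute the values from your own environment.

# create a boot2docker VM whose host-only NIC allows promiscuous mode (needed for macvlan/ipvlan across hosts)
docker-machine create --driver virtualbox \
    --virtualbox-boot2docker-url file://$HOME/boot2docker-experimental.iso \
    --virtualbox-hostonly-cidr "192.168.99.1/24" \
    --virtualbox-hostonly-nicpromisc "allow-all" \
    --virtualbox-hostonly-nictype "Am79C973" \
    host1

Repeat the same command with "host2" to create the second Docker host.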

The following output shows the experimental Docker version running:

$ docker --version
Docker version 1.11.0-dev, build 6c2f438, experimental

Macvlan

In this section, we will illustrate macvlan-based connectivity using macvlan bridge mode.

On host 1, create the macvlan sub-interfaces and Containers:

docker network create -d macvlan \
    --subnet=192.168.0.0/16 \
    --ip-range=192.168.2.0/24 \
    -o macvlan_mode=bridge \
    -o parent=eth2.70 macvlan70
docker run --net=macvlan70 -it --name macvlan70_1 --rm alpine /bin/sh
docker run --net=macvlan70 -it --name macvlan70_2 --rm alpine /bin/sh

docker network create -d macvlan \
    --subnet=192.169.0.0/16 \
    --ip-range=192.169.2.0/24 \
    -o macvlan_mode=bridge \
    -o parent=eth2.80 macvlan80
docker run --net=macvlan80 -it --name macvlan80_1 --rm alpine /bin/sh
docker run --net=macvlan80 -it --name macvlan80_2 --rm alpine /bin/sh

Containers in host 1 will get IP addresses in the 192.168.2.0/24 and 192.169.2.0/24 ranges based on the options mentioned above.
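To confirm the subnet and IP range that the IPAM driver will hand out, the network can be inspected. The output below is trimmed, and the exact JSON layout can vary by Docker version:

docker network inspect macvlan70
...
"IPAM": {
    "Driver": "default",
    "Config": [
        {
            "Subnet": "192.168.0.0/16",
            "IPRange": "192.168.2.0/24"
        }
    ]
},
...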

On host 2, create the macvlan sub-interfaces and Containers:

docker network create -d macvlan \
    --subnet=192.168.0.0/16 \
    --ip-range=192.168.3.0/24 \
    -o macvlan_mode=bridge \
    -o parent=eth2.70 macvlan70
docker run --net=macvlan70 -it --name macvlan70_3 --rm alpine /bin/sh
docker run --net=macvlan70 -it --name macvlan70_4 --rm alpine /bin/sh

docker network create -d macvlan \
    --subnet=192.169.0.0/16 \
    --ip-range=192.169.3.0/24 \
    -o macvlan_mode=bridge \
    -o parent=eth2.80 macvlan80
docker run --net=macvlan80 -it --name macvlan80_3 --rm alpine /bin/sh
docker run --net=macvlan80 -it --name macvlan80_4 --rm alpine /bin/sh

Containers in host 2 will get IP addresses in the 192.168.3.0/24 and 192.169.3.0/24 ranges based on the options mentioned above.

Let’s look at the Docker networks created on host 1; we can see the macvlan networks “macvlan70” and “macvlan80” as shown below.

$ docker network ls
NETWORK ID          NAME                DRIVER
e5f5f6add03d        bridge              bridge
a1b89ce4bd84        host                host
90b7d5ba61b9        macvlan70           macvlan
bedeca9839e1        macvlan80           macvlan

Let’s check connectivity on IP subnet 192.168.0.0/16 (vlan70) between Containers in the same host and across hosts:

Here, we are inside the macvlan70_1 Container on host 1:
# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:02:01
          inet addr:192.168.2.1  Bcast:0.0.0.0  Mask:255.255.0.0
# ping -c1 192.168.2.2
PING 192.168.2.2 (192.168.2.2): 56 data bytes
64 bytes from 192.168.2.2: seq=0 ttl=64 time=0.137 ms

--- 192.168.2.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.137/0.137/0.137 ms
/ # ping -c1 192.168.3.1
PING 192.168.3.1 (192.168.3.1): 56 data bytes
64 bytes from 192.168.3.1: seq=0 ttl=64 time=2.596 ms

--- 192.168.3.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 2.596/2.596/2.596 ms
/ # ping -c1 192.168.3.2
PING 192.168.3.2 (192.168.3.2): 56 data bytes
64 bytes from 192.168.3.2: seq=0 ttl=64 time=1.400 ms

--- 192.168.3.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.400/1.400/1.400 ms

Connectivity is also successful on IP subnet 192.169.0.0/16 (vlan80) between Containers in the same host and across hosts.
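As a quick check of the vlan80 side (assuming the IPAM driver assigns addresses in the same pattern as for vlan70, i.e. macvlan80_1 gets 192.169.2.1, macvlan80_2 gets 192.169.2.2 and macvlan80_3 on host 2 gets 192.169.3.1), the same pings can be run from inside macvlan80_1:

# ping -c1 192.169.2.2
# ping -c1 192.169.3.1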

Connecting a macvlan Container to the host

By default, Containers in a macvlan network cannot directly talk to the host, and this is intentional. To allow communication between the host and the Containers, a macvlan interface needs to be created on the host. Also, Containers can expose tcp/udp ports on the macvlan network, and these can be accessed directly from the underlay network.

Let’s use an example to illustrate host-to-Container connectivity in a macvlan network.

Create a macvlan network on the host sub-interface:

docker network create -d macvlan \
    --subnet=192.168.0.0/16 \
    --ip-range=192.168.2.0/24 \
    -o macvlan_mode=bridge \
    -o parent=eth2.70 macvlan70

Create a Container on that macvlan network:

docker run -d --net=macvlan70 --name nginx nginx

Find the IP address of the Container:

docker inspect nginx | grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "192.168.2.1",

At this point, we cannot ping the Container IP “192.168.2.1” from the host machine.
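A quick check from the host illustrates the isolation; the output will look roughly like this:

$ ping -c1 192.168.2.1
PING 192.168.2.1 (192.168.2.1): 56 data bytes

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss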

Now, let’s create a macvlan interface on the host with address “192.168.2.10” in the same network.

sudo ip link add mymacvlan70 link eth2.70 type macvlan mode bridge
sudo ip addr add 192.168.2.10/24 dev mymacvlan70
sudo ifconfig mymacvlan70 up

Now, we should be able to ping the Container IP as well as access the “nginx” Container from the host machine.

$ ping -c1 192.168.2.1
PING 192.168.2.1 (192.168.2.1): 56 data bytes
64 bytes from 192.168.2.1: seq=0 ttl=64 time=0.112 ms

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.112/0.112/0.112 ms
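Since the nginx Container is reachable over the macvlan network, its web port can also be accessed directly, both from the host (via the mymacvlan70 interface) and from other machines on the underlay network. For example, using busybox wget on the boot2docker host (the exact HTML depends on the nginx version):

$ wget -qO- http://192.168.2.1 | head -n 4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>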

ipvlan

In this section, we will illustrate ipvlan-based connectivity using ipvlan l2 mode.

On host 1, create the ipvlan sub-interfaces and Containers:

docker network create -d ipvlan \
    --subnet=192.168.0.0/16 \
    --ip-range=192.168.2.0/24 \
    -o ipvlan_mode=l2 \
    -o parent=eth2.70 ipvlan70
docker run --net=ipvlan70 -it --name ipvlan70_1 --rm alpine /bin/sh
docker run --net=ipvlan70 -it --name ipvlan70_2 --rm alpine /bin/sh

docker network create -d ipvlan \
    --subnet=192.169.0.0/16 \
    --ip-range=192.169.2.0/24 \
    -o ipvlan_mode=l2 \
    -o parent=eth2.80 ipvlan80
docker run --net=ipvlan80 -it --name ipvlan80_1 --rm alpine /bin/sh
docker run --net=ipvlan80 -it --name ipvlan80_2 --rm alpine /bin/sh

Inside Container ipvlan70_1, the interface carries the MAC address of the parent interface, which is characteristic of ipvlan:

eth0      Link encap:Ethernet  HWaddr 08:00:27:FA:9D:0C
          inet addr:192.168.2.1  Bcast:0.0.0.0  Mask:255.255.0.0

On host 2, create the ipvlan sub-interfaces and Containers:

docker network create -d ipvlan \
    --subnet=192.168.0.0/16 \
    --ip-range=192.168.3.0/24 \
    -o ipvlan_mode=l2 \
    -o parent=eth2.70 ipvlan70
docker run --net=ipvlan70 -it --name ipvlan70_3 --rm alpine /bin/sh
docker run --net=ipvlan70 -it --name ipvlan70_4 --rm alpine /bin/sh

docker network create -d ipvlan \
    --subnet=192.169.0.0/16 \
    --ip-range=192.169.3.0/24 \
    -o ipvlan_mode=l2 \
    -o parent=eth2.80 ipvlan80
docker run --net=ipvlan80 -it --name ipvlan80_3 --rm alpine /bin/sh
docker run --net=ipvlan80 -it --name ipvlan80_4 --rm alpine /bin/sh

Let’s look at the networks created on host 1; we can see the ipvlan networks “ipvlan70” and “ipvlan80” as shown below.

$ docker network ls
NETWORK ID          NAME                DRIVER
e5f5f6add03d        bridge              bridge
a1b89ce4bd84        host                host
1a1262e008d3        ipvlan70            ipvlan
080b230b892e        ipvlan80            ipvlan

Connectivity is successful on IP subnet 192.168.0.0/16 (vlan70) and IP subnet 192.169.0.0/16 (vlan80) between Containers in the same host and across hosts.
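To double-check the ipvlan behavior noted earlier (all Containers share the parent interface’s MAC address, 08:00:27:FA:9D:0C in this setup, and only the IP addresses differ), compare the MAC inside any two Containers on the same ipvlan network:

/ # ifconfig eth0 | grep HWaddr
eth0      Link encap:Ethernet  HWaddr 08:00:27:FA:9D:0C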

ipvlan l3 mode

There are some issues in getting ipvlan l3 mode to work across hosts. The following example shows setting up ipvlan l3 mode between Containers on a single host:

docker network create -d ipvlan \
    --subnet=192.168.2.0/24 \
    --subnet=192.169.2.0/24 \
    -o ipvlan_mode=l3 \
    -o parent=eth2.80 ipvlan
docker run --net=ipvlan --ip=192.168.2.10 -it --name ipvlan_1 --rm alpine /bin/sh
docker run --net=ipvlan --ip=192.169.2.10 -it --name ipvlan_2 --rm alpine /bin/sh
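In l3 mode, the ipvlan driver routes between the subnets attached to the same parent interface (there is no broadcast or ARP between the Containers), so the two Containers above should be able to reach each other across subnets. A quick check from ipvlan_1, with output roughly like the following:

/ # ping -c1 192.169.2.10
PING 192.169.2.10 (192.169.2.10): 56 data bytes
64 bytes from 192.169.2.10: seq=0 ttl=64 time=0.100 ms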

For a quick reference on the Docker macvlan driver, you can refer to my video here:

30 thoughts on “Docker macvlan and ipvlan network plugins”

  1. Nice post, thank you for sharing!

    I’ve played with the macvlan driver recently and it looks really interesting. It looks like the macvlan driver doesn’t allow you to expose TCP/UDP ports out of the container, so the container could be accessible from the host. Is that true, or have I missed something?

    1. Hi Vladimir
      Containers can expose tcp/udp ports using the macvlan network, and they can be accessed directly from the underlay network. The host machine cannot directly access Containers in the macvlan network. To allow the host machine to have access, the host needs a macvlan interface on the same parent interface.

      Example:
      Create macvlan interface:
      docker network create -d macvlan \
      --subnet=192.168.0.0/16 \
      --ip-range=192.168.2.0/24 \
      -o macvlan_mode=bridge \
      -o parent=eth2.70 macvlan70

      Create container in that interface:
      docker run -d --net=macvlan70 --name nginx nginx

      Find ip address of Container:
      docker inspect nginx | grep IPAddress
      "SecondaryIPAddresses": null,
      "IPAddress": "",
      "IPAddress": "192.168.2.1",

      At this point, I cannot ping “192.168.2.1” from host machine.

      Let’s create a macvlan interface on the host in the same network:
      sudo ip link add mymacvlan70 link eth2.70 type macvlan mode bridge
      sudo ip addr add 192.168.2.10/24 dev mymacvlan70
      sudo ifconfig mymacvlan70 up

      Now, we should be able to ping the Container IP as well as access nginx from host machine.
      $ ping -c1 192.168.2.1
      PING 192.168.2.1 (192.168.2.1): 56 data bytes
      64 bytes from 192.168.2.1: seq=0 ttl=64 time=0.112 ms

      --- 192.168.2.1 ping statistics ---
      1 packets transmitted, 1 packets received, 0% packet loss
      round-trip min/avg/max = 0.112/0.112/0.112 ms

      Regards
      Sreenivas

    2. I also doubt the same, because I’m able to ping the IP from both the host and other devices on the same network. But when sending the logstash output to that IP on a port, no data is being received by the container.

    1. Hi Suraj
      As we discussed, I have posted the answer here if it helps anyone:
      With macvlan, there is no coordination of IP addresses like in the overlay network, unless DHCP is used. If you look at the example above in the blog, I have used the --ip-range option while creating the network on both hosts so that the Containers on different hosts get IP addresses in different ranges that belong to the same subnet (192.168.2.x and 192.168.3.x in the 192.168.0.0/16 subnet). After this is done, Containers can talk to each other. The other options are using static IP addresses or using a DHCP server that is available in the underlay network.

      Regards
      Sreenivas

  2. Hi Sreenivas,
    Excellent blog. I am using Hyper-V to run an Ubuntu 16.04 server on a Windows 10 box. I also have a physical machine with Ubuntu 16.04. Using these two Ubuntu instances (one physical and one virtual), I have created containers in different address ranges as described by you in the above blog. I have one issue that I am trying to resolve: I am not able to ping my containers running on the virtual Ubuntu from anywhere in the network. I am able to ping the containers running on the physical Ubuntu box from the Windows box as well as the virtual Ubuntu machine.

    I am using the latest docker (1.12).

    Any insights will be appreciated.

    Thanks,
    GK

    1. hi GK
      This is your stack on the virtual machine:
      Windows -> Hyper-V -> Ubuntu -> containers.

      To access containers in this case from outside, Hyper-V networking needs to be set up appropriately.

      This would be simpler:
      If you have multiple Ubuntu OSes on top of Hyper-V, you can set up VM-based networking for the containers across the 2 different Ubuntu virtual hosts so they can talk to each other.

      regards
      Sreenivas

      1. Hi Sreenivas,
        I tried to use two Ubuntu VMs on top of Hyper-V. I still have the same problem. I can ping VM1 from VM2 and vice versa, but I cannot ping the containers running on one VM from the other. I am also not able to ping containers running on one VM from inside a container running on the other VM.

        I am suspecting HyperV and I am not able configure it the right way.

        Thanks,
        GK

      2. hi GK
        Did you put the 2 hosts into the same Docker cluster? The 2 hosts need to be in a swarm cluster and the 2 containers need to be in the same overlay network for the containers to talk to each other. Have you made such a setup?

        Regards
        Sreenivas

      3. Hi Sreenivas
        I am not using Swarm cluster. My set up is very simple. I may eventually end up using Swarm when we go into production but for now here is what I am doing and I believe this should work – I am creating a docker macvlan network on each Ubuntu VM and running several docker containers. I am using different IP address ranges on different docker hosts (i.e. Ubuntu VMs).

        I believe the problem I ran into is an issue with HyperV and has nothing to do with docker or macvlan driver. I say this because I tried the same architecture on Ubuntu Virtualbox VMs and everything works as expected. There is one difference I noticed between my set up on HyperV and VirtualBox – that is I was able to enable promiscuous mode on the virtual NIC on the VMs in VirtualBox settings. There is no way to do that in HyperV settings. I did try to enable promisc mode via command line inside the UbuntuVM on HyperV but that did not make any difference. I read somewhere that HyperV will not work in this mode.

        We were thinking of using Windows Server as the host operating system and creating a bunch of Ubuntu VMs in Hyper-V. But now it looks like, because of this limitation, we will not be using Windows. I am leaning towards using Ubuntu and VirtualBox.

        We need to have macvlan because our client software is an SNMP Manager and relies on containers having different IPs and MacAddresses for them to be discovered as different devices. We are using this to simulate several thousand snmp agents on the network.

        I am able to scale really easily, quickly and reliably with Docker.

        Thanks,
        GK

      4. hi GK
        ok, now I understand it a little better.
        Yes, promiscuous mode setting is required.
        In your application, does the snmp manager run as a container? I understand that you want different IP and MAC addresses for the end devices, which you can achieve through containers without macvlan. I assume that your need to use macvlan is because the snmp manager does not run as a container and you want your end devices to be in the same network with different IP and MAC addresses?

        Regards
        Sreenivas

  3. Hi Sreenivas,
    The snmp manager is a Windows application. As of now it cannot run in a container. Yes, we need the manager and containers all on the same network. For our application to discover and recognize each container as a separate device (we are simulating 1000s), each container needs to have a separate IP and MAC address. As an aside, because we are generating so much traffic, I want to be on an electrically separated private network.

    I thought you had to have macvlan (I am not talking about docker and the docker macvlan driver) to be able to have multiple virtual interfaces each with different IP and MacAddress.
    Prior to using Docker I was doing all this in a python script (e.g. generating a fake MacAddress and creating an interface of type macvlan and assigning it an IP address) and then running my simulator on that network interface. That all worked but a pain to maintain and scale. I was using a dhcp server to dole out IP addresses for each of the macvlan interfaces I created manually. With docker and docker’s macvlan network driver all that is handled for me.

    Thanks,
    GK

  4. Hi Sreenivas, I installed docker on my linux server
    docker --version
    Docker version 1.13.1, build 092cba3
    I have one interface eth1
    eth1 Link encap:Ethernet HWaddr 08:00:27:0c:9e:b9
    inet addr:10.0.1.17 Bcast:10.0.1.255 Mask:255.255.255.0

    I’ve created a docker network with the command: docker network create -d macvlan --subnet=10.0.1.0/24 --gateway=10.0.1.1 -o parent=eth1 pub_net
    without errors.
    I’ve created a docker container with the command: docker run --net=pub_net --ip=10.0.1.107 -itd alpine /bin/sh
    I attached to this container: docker attach 296d5991baa7
    And ping doesn’t work:
    / # ping 10.0.1.1
    PING 10.0.1.1 (10.0.1.1): 56 data bytes
    ^C
    --- 10.0.1.1 ping statistics ---
    5 packets transmitted, 0 packets received, 100% packet loss
    Can you help me?

    1. Hi Yaroslav
      Are you able to ping between 2 containers in the same macvlan network? I assume that works.

      I had a Linux-in-VirtualBox environment and I had a similar issue where I was able to ping between containers but not able to ping the gateway. I solved it by using the adapter type “pcnet3” and enabling the promiscuous mode setting. With macvlan bridge mode, you will not be able to ping the host IP from the container; I just wanted to mention that.

      Regards
      Sreenivas

  5. Hi Sreenivas
    Thanks so much, it works))))) I checked the ping between 2 containers and the ping worked.
    Yes, I too have a Linux-in-VirtualBox environment. I used the adapter type “pcnet3” and promiscuous mode and it all works. Thanks

  6. I am trying to get access to a server in the host network.
    However, the only connection I get is to the IP of the host macvlan interface.
    What am I missing here?
    Do I need extra packet forwarding to get packets from the macvlan interface to the network interface of the host?

    1. I found the remark 🙂
      “With macvlan bridge mode, you will not be able to ping the host IP from the container.”
      So I need another bridge to make this work.

  7. I’m not having any luck with these steps. I did:

    sudo ip link add mac0 link eno2 type macvlan mode bridge
    sudo ip addr add 192.168.2.10/24 dev mac0
    sudo ifconfig mac0 up

    But the host still cannot ping the container. I am able to ping 192.168.2.10 from a separate host so that IP is working. I’m thinking there must be a kernel setting or something like that in my way.

  8. Hi Sreenivas,
    Great tutorial, however, as you commented: “I had a Linux in Virtualbox environment and I had similar issue where I was able to ping between containers but not able to ping gateway. I solved it by using adapter type “pcnet3” and using promiscuous mode setting.”
    -> This is *essential* info, and should not be just “hidden” in a comment (I did not read through all of the comments), *please update the tutorial with this essential information* (you cannot expect your readers to read through all the comments), thank you. I wasted a *lot* of time because of this missing information. Finally I found it here: https://github.com/jpetazzo/pipework This pipework script is also very great to make managing things like this easier.

  9. Not part of the tutorial, but also very useful: how to create the boot2docker VMs (host1, host2) in the first place… Something like this should help:

    docker-machine create --driver virtualbox \
    --virtualbox-hostonly-cidr "192.168.2.1/24" \
    --virtualbox-hostonly-nicpromisc "allow-all" \
    --virtualbox-hostonly-nictype "Am79C973" \
    host1

  10. Hello Sreenivas,
    I have created a macvlan network and a second macvlan interface on the host, and run the nginx container:
    docker run -d --net=my_net --name docker-nginx1 -p :80:80 nginx

    Now, if I browse the host IP address I cannot see the nginx webpage.
    If I browse the container IP address I succeed in seeing the nginx web page.

    I can ping the container IP from the host.

  11. Note that you must have the bridge interfaces in the UP state on the docker host.
    i.e. ifconfig eth1 up on the docker host and then try pinging from inside your container.

  12. Hi Sreenivas,
    I have a new requirement where I have to provide the MAC address to my containers instead of relying on Docker (macvlan driver) to generate one. I am using the latest docker-ce version and I am using a command like the one below to run docker:
    docker run --mac-address="02:42:c0:a8:84:22" --net=my_macvlan_network …
    I am on Ubuntu 16.04 LTS.

    However, docker seems to ignore the --mac-address option and is still assigning its own MAC address. Any suggestions on the issue?

    Thanks,
    GK

  13. Hi Sreenivas,
    I am able to ping the container, which is using a macvlan network, from the host machine. But from the container I am not able to ping the host.

    Thanks,
    Dipti

  14. Thanks for the article, it is information I actually need right now!

    I did notice something though. Maybe I don’t completely understand your example’s use case, but it struck me that 192.169.x.x/16 is not in a private IP address range (RFC 1918 sets aside 192.168.x.x/16 as private). Your network 1 is fine though. Maybe you’d be better off with 192.168.0.0/17 as network 1 and 192.168.128.0/17 as network 2? One wouldn’t want to commit a “party foul” by assigning yourself public IPs, although the likelihood of having a practical issue is pretty small, especially if this setup is entirely for use within a single server!
