Docker Networking – Socketplane

This blog is part of my ongoing series on Docker containers. In my previous blogs 1 and 2, I covered the basics of Docker networking and some of the limitations of the current solutions. In the next few blogs, I will cover Docker networking solutions from Socketplane, Weaveworks and CoreOS. Socketplane is developing a Docker networking solution that links containers across multiple hosts, and they recently released a technology preview of their initial implementation. In this blog, I will cover my thoughts from trying out that initial solution.

Following are some internals of the implementation as I understand them:

  • Open vSwitch is used as the bridge to which container interfaces are connected.
  • Multicast DNS is used to discover other cluster members. As of now, multicast support is a requirement in the underlay network.
  • Consul is used as the service discovery mechanism where the key/value data for the whole cluster is stored. When the socketplane agent is started, it prompts you to choose whether the node is a primary or secondary cluster member (see the startup sketch after this list).
  • VXLAN is used as the tunneling mechanism to encapsulate container traffic between hosts.
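
For reference, a rough sketch of the agent startup on a node in the preview I tried (the exact command and prompts may have changed in newer builds, so treat this as illustrative rather than authoritative):

$ sudo socketplane agent start

On the first node, the startup asks whether this is the first node in the cluster, which makes it the Consul primary; on the remaining nodes, answering “no” joins them as secondary cluster members.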

Following is a picture of the data path between 2 containers on 2 different hosts.

[Image: docker_net3 – data path between 2 containers on 2 different hosts]

To try out the demo, there are 2 options as explained here:

  1. Use the Vagrant script, which creates the VMs that act as the hosts the containers run on. The Vagrant script also installs and configures socketplane on the VMs (see the sketch after this list).
  2. Install socketplane manually on the hosts where the containers will run.
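
For Option 1, the steps boil down to cloning the socketplane repository and bringing up the Vagrant environment; the commands below are the standard git and Vagrant ones, with the repository URL taken from the Socketplane github page:

$ git clone https://github.com/socketplane/socketplane
$ cd socketplane
$ vagrant up

This brings up the VMs defined in the Vagrantfile with socketplane already installed on them.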

I was able to try out the demo with both options, and it worked great.

Option 1:

I initially tried running the Vagrant script on an Ubuntu 14.04 VM installed in Virtualbox. I had issues starting the VMs from Vagrant because of the nested virtualization issue explained here, so instead I ran Vagrant directly on my Windows machine. Following is my environment:

Windows 7 with Vagrant 1.7.2, Virtualbox 4.3.20 and git.

There were 2 issues I faced, which I also discussed with the Socketplane team.

  1. Because git checks out the files on Windows with CRLF line endings and the files then get copied to the Linux VMs, the scripts end up with extra “^M” characters at the end of each line. As a workaround, I ran “dos2unix” on the files in the socketplane/scripts directory (see the sketch after this list). There are 2 options to solve this, as explained here.
  2. There is a bug where the latest socketplane image does not get pulled. The workaround is to use the procedure explained here. By now, this issue has most likely been fixed and the workaround should no longer be needed.
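
For reference, the line-ending fix I used was along these lines, run from the directory where the repository was cloned (dos2unix needs to be available on the machine where the scripts will execute):

$ dos2unix socketplane/scripts/*

Alternatively, telling git not to convert line endings on checkout avoids the problem in the first place:

$ git config --global core.autocrlf input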

Option 2:

I used 2 hosts running Ubuntu 14.04 on Virtualbox, where I installed Socketplane. The Consul client can be installed from here.
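
From what I remember of the install instructions in the technology preview, installing socketplane manually on each host was a one-liner along the lines below; the URL and the BOOTSTRAP flag are from memory, so please follow the Socketplane github page for the authoritative commands. On the first host:

$ curl -sSL http://get.socketplane.io/ | sudo BOOTSTRAP=true sh

On the remaining hosts:

$ curl -sSL http://get.socketplane.io/ | sudo sh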

In the example below, I have 2 Ubuntu hosts running socketplane, and I will dump the relevant outputs to illustrate the data plane. The 2 hosts have connectivity to each other over host-only adapters.

Following is the Open vSwitch output on host 1 before any containers are started. The default network is created with VLAN tag 1.

$ sudo ovs-vsctl show
91dc682f-5496-45ec-9113-7be0f5ecce56
    Manager "ptcp:6640"
        is_connected: true
    Bridge "docker0-ovs"
        Port default
            tag: 1
            Interface default
                type: internal
        Port "docker0-ovs"
            Interface "docker0-ovs"
                type: internal
    ovs_version: "2.3.0"

Let's create 1 container on host 1 and another container on host 2. In the example below, the containers are created in the default network. Socketplane also provides an option to specify the network on which the container should be created, which enables multi-tenant container networks (see the sketch below).
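
As an aside, creating a container on a non-default network would look roughly like the following; the network name “web” and the subnet are placeholders of my own, so check the Socketplane CLI help for the exact syntax:

$ sudo socketplane network create web 10.2.0.0/16
$ sudo socketplane run -n web -itd busybox

Back to the default network case: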

$ sudo socketplane run -itd busybox

Following is the Open vSwitch output on host 1 now:

$ sudo ovs-vsctl show
91dc682f-5496-45ec-9113-7be0f5ecce56
    Manager "ptcp:6640"
        is_connected: true
    Bridge "docker0-ovs"
        Port default
            tag: 1
            Interface default
                type: internal
        Port "ovsb71afa3"
            tag: 1
            Interface "ovsb71afa3"
                type: internal
        Port "docker0-ovs"
            Interface "docker0-ovs"
                type: internal
        Port "vxlan-192.168.56.102"
            Interface "vxlan-192.168.56.102"
                type: vxlan
                options: {remote_ip="192.168.56.102"}
    ovs_version: "2.3.0"

Following is the “ifconfig” output in container 1 on host 1:

# ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ovsb71afa3 Link encap:Ethernet  HWaddr 02:42:0A:01:00:01  
          inet addr:10.1.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING  MTU:1440  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:738 (738.0 B)

“ovsb71afa3” is the container port connected to Open vSwitch on host 1, and “vxlan-192.168.56.102” is the VXLAN tunnel port from host 1 to host 2.
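
To dig a little deeper into how packets from the container port get switched onto the tunnel, the OVS flow table on the host can be dumped with the standard ovs-ofctl tool (output not shown here):

$ sudo ovs-ofctl dump-flows docker0-ovs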

Following is the Open vSwitch output on host 2:

$ sudo ovs-vsctl show
91dc682f-5496-45ec-9113-7be0f5ecce56
    Manager "ptcp:6640"
        is_connected: true
    Bridge "docker0-ovs"
        Port "docker0-ovs"
            Interface "docker0-ovs"
                type: internal
        Port "vxlan-192.168.56.101"
            Interface "vxlan-192.168.56.101"
                type: vxlan
                options: {remote_ip="192.168.56.101"}
        Port "ovs5ab0f5d"
            tag: 1
            Interface "ovs5ab0f5d"
                type: internal
    ovs_version: "2.3.0"

Following is the “ifconfig” output in container 2 on host 2:

# ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ovs5ab0f5d Link encap:Ethernet  HWaddr 02:42:0A:01:00:02  
          inet addr:10.1.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING  MTU:1440  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:738 (738.0 B)

“ovs5ab0f5d” is the container port connected to Open vSwitch on host 2, and “vxlan-192.168.56.101” is the VXLAN tunnel port from host 2 back to host 1.

At this point, the 2 containers are able to ping each other over the VXLAN tunnel.
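
A quick check from container 1 (the container IPs are the ones shown in the ifconfig outputs above):

# ping -c 3 10.1.0.2

The ping from 10.1.0.1 to 10.1.0.2 goes out of the container port on host 1, across the VXLAN tunnel, and in through the container port on host 2.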

Thanks to the Socketplane team for the phenomenal work. The github page also has 2 live demos from Socketplane.

4 thoughts on “Docker Networking – Socketplane”

  1. Hi Sreenivas,

    Nice article. Keep it up. I have one quick question for you. Have you heard of / seen any API capabilities from Socketplane? I am looking at integrating docker containers with OVS, but the containers will be managed via Shipyard. I intend to add functionality into Shipyard, provided tools like pipework / socketplane have API capabilities.

    Wondering if you came across any tools that integrate with OVS and also provide APIs to do the same.

    Thanks and Regards,

    ~ Vijay
    srinivasan_vijay@yahoo.com

    1. hi Vijay
      Pipework is more of a script; there is no API as far as I know (https://sreeninet.wordpress.com/2015/01/01/docker-networking-part2/).
      With Socketplane, I haven't seen an API. Considering that they have wrapped container creation with the Socketplane CLI, I assume there is a socketplane API underneath which calls the Docker API. It will be good to check this with the Socketplane team.
      Based on the Docker networking proposals, it looks like networking will become a plugin in the long run, so the native Docker APIs should be good enough and the plugin underneath would manage networking.
      I was not familiar with Shipyard. Took a quick look now; it seems to do Docker cluster management. In its current form, do the containers within the cluster not talk to each other across hosts? Wondering how networking is currently implemented in Shipyard. Will try to play more with Shipyard sometime soon…

      Regards
      Sreenivas

    1. hi Prakash
      If you look at the proposal (https://github.com/docker/docker/issues/8951), I think they want to do away with port forwarding. There are 2 scenarios where you would want port forwarding: first, connecting to containers from the outside world, and second, containers talking to each other. In the first case, they plan to do a 1:1 NAT or expose the container IP. In the second case, the networking part would be hidden, with containers declaring what sort of connectivity they want.
      It will be good to check this with the Socketplane team. You can post the query at https://github.com/socketplane/socketplane.

      Sreenivas
