Docker Networking – Part 2

This blog is part of my Docker series and is a continuation of my previous blog on Docker networking. In this blog, I will cover current Docker networking limitations, give an overview of Pipework, and walk through a sample application where I have linked containers on 2 different hosts.

There are 4 networking options that Docker provides when creating containers:

  1. --net=bridge. This is the default option, where the container connects to the Linux "docker0" bridge created by Docker.
  2. --net=host. With this option, no new network namespace is created for the container; the container shares the network namespace of the host machine.
  3. --net=container:(name or id). With this option, the new container shares the network namespace of the specified container. (Example: "sudo docker run -ti --name=ubuntu2 --net=container:ubuntu1 ubuntu:14.04 /bin/bash". Here, the ubuntu2 container shares the network namespace of the ubuntu1 container.)
  4. --net=none. With this option, the container gets its own network namespace, but Docker does not configure any interfaces in it; only the loopback interface is present. This option is useful when we want to set up the container's networking ourselves.

Native Docker networking currently has the following limitations:

  • Cannot create more than 1 interface per container.
  • Multi-host container networking is difficult to set up.
  • The IP addressing scheme for containers is not flexible.

There is a lot of ongoing work in Docker networking; some of the projects are Weave, Flannel, Docknet and Pipework. There is also a native multi-host Docker networking proposal under discussion. In this blog, I will cover Pipework.

Pipework overview:

Pipework is a script developed by Jérôme Petazzoni to network Docker containers in complex environments. As mentioned by Jérôme himself, the script is a temporary solution until a more permanent solution gets developed natively in Docker. Pipework provides the following features for containers:

  • Create any number of interfaces with arbitrary IP addresses.
  • Allows use of an Open vSwitch (OVS) bridge instead of the Linux bridge.
  • Allows isolation of containers using VLANs.
  • Allows configuration of IP address, MAC address, netmask and gateway.

Pipework can be installed from the jpetazzo/pipework GitHub repository.
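To make the features above concrete, here are a few illustrative invocations. This is a hedged sketch: the container name "mycontainer" and all addresses are made up, and the "show" helper only prints each command rather than running it (drop "show" and add sudo on a real host). The syntax follows the pattern bridge, optional -i interface name, guest, ip/mask with optional @gateway, then optional MAC and @vlan.

```shell
#!/bin/sh
# Hedged cheat-sheet for the Pipework features listed above.
# "mycontainer" and all addresses are made-up examples; "show" only
# prints the commands so this block is safe to read as a sketch.
show() { echo "$*"; }

# Extra interface (eth2) in an existing container, with an arbitrary IP:
show pipework ovsbr0 -i eth2 mycontainer 11.1.1.5/24
# OVS bridge with VLAN isolation (interface tagged with VLAN 10):
show pipework ovsbr0 mycontainer 11.1.1.6/24 @10
# Explicit default gateway (after the @) and MAC address:
show pipework ovsbr0 mycontainer 11.1.1.7/24@11.1.1.254 aa:bb:cc:dd:ee:ff
```

Note that VLAN tagging requires an OVS bridge; the plain Linux bridge does not support the @vlan argument.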

Sample application with containers on 2 different hosts:

We will create an application like the one below. The goal is for the web containers to talk to the db containers on the other host while providing isolation between the containers. This is established primarily through the Pipework script.

[Figure: docker_net1 – two-host container network topology]

  • Host 1 has 2 instances of the web container. The web container has the postgres client application already installed.
  • Host 2 has 2 instances of the db container.
  • Web container 1 is in VLAN 10 and web container 2 is in VLAN 20. db container 1 is in VLAN 10 and db container 2 is in VLAN 20. This allows web container 1 to talk only to db container 1, and web container 2 to talk only to db container 2.
  • The 2 hosts have a GRE tunnel between them for host-to-host connectivity.

Pre-requisites:

  • I have 2 VMs running Ubuntu 14.04 on VirtualBox as the 2 hosts, with Docker 1.4.1 installed on both.
  • The 2 hosts have a host-only network interface that provides connectivity between them.
  • Both hosts have the following container images installed – smakam/apachedocker and training/postgres. Both are available on Docker Hub; the first is the web container image and the second is the postgres container image.
  • Pipework is installed on both hosts.
  • Open vSwitch is installed on both hosts. In my case, I have Open vSwitch 2.3.0.
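For reference, the prerequisite installs can be sketched as below. This is a hedged sketch: the Ubuntu package name, GitHub URL and install path are assumptions for a typical setup, and the "show" helper only prints the commands (remove it to actually run them).

```shell
#!/bin/sh
# Hedged sketch of the prerequisite installs on each Ubuntu 14.04 host.
# "show" only prints the commands; remove it to actually execute them.
show() { echo "$*"; }

show sudo apt-get install -y openvswitch-switch      # Open vSwitch
show git clone https://github.com/jpetazzo/pipework.git "$HOME/pipework"
show sudo docker pull smakam/apachedocker            # web image (host 1)
show sudo docker pull training/postgres              # db image (host 2)
```

Cloning into ~/pipework matches the ~/pipework/pipework path used in the commands later in this post.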

Execute the following on host 1. This creates the 2 containers with no networking option and sets up a GRE tunnel. The GRE tunnel's remote endpoint is the host-only interface IP address of the other host.

sudo docker run -d --net=none smakam/apachedocker
sudo docker run -d --net=none smakam/apachedocker
sudo ovs-vsctl add-port ovsbr0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.56.105

Note down the container IDs and execute the following. This uses the Pipework script to create an interface from each container to the OVS bridge, with the specified IP address and VLAN.

sudo ~/pipework/pipework ovsbr0 <cid> 11.1.1.1/24 @10
sudo ~/pipework/pipework ovsbr0 <cid> 11.1.1.2/24 @20
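The host 1 steps above can also be wrapped in one script. This is a hedged sketch with two assumptions beyond the text: first, `docker run -d` prints the new container ID on stdout, so the IDs can be captured instead of noted down by hand; second, `ovs-vsctl add-port` needs the bridge to exist (Pipework creates it on first use, but here the GRE port is added before Pipework runs), so the sketch creates it explicitly with `--may-exist`, which is a no-op if it already exists. With DRY_RUN=1 (the default here) the privileged commands are only printed; set DRY_RUN to empty on a real host to execute them.

```shell
#!/bin/sh
# Hedged sketch: the host 1 bring-up as one script. With DRY_RUN=1 the
# privileged commands are only printed; clear DRY_RUN on a real host.
: "${DRY_RUN:=1}"
run() { if [ -n "$DRY_RUN" ]; then echo "+ sudo $*"; else sudo "$@"; fi; }

REMOTE_IP=192.168.56.105          # host 2's host-only address
IMAGE=smakam/apachedocker

# `docker run -d` prints the container ID, so capture it directly.
if [ -n "$DRY_RUN" ]; then
  cid1=web1; cid2=web2            # placeholder IDs for the dry run
else
  cid1=$(sudo docker run -d --net=none "$IMAGE")
  cid2=$(sudo docker run -d --net=none "$IMAGE")
fi

run ovs-vsctl --may-exist add-br ovsbr0    # ensure the bridge exists
run ovs-vsctl add-port ovsbr0 gre0 -- set interface gre0 \
  type=gre options:remote_ip="$REMOTE_IP"
run ~/pipework/pipework ovsbr0 "$cid1" 11.1.1.1/24 @10
run ~/pipework/pipework ovsbr0 "$cid2" 11.1.1.2/24 @20
```

The same script works for host 2 by swapping REMOTE_IP to 192.168.56.102, IMAGE to training/postgres, and the IP addresses to 11.1.1.3 and 11.1.1.4.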

Execute the following on host 2. This will create the 2 containers and the GRE tunnel.

sudo docker run -d --net=none training/postgres
sudo docker run -d --net=none training/postgres
sudo ovs-vsctl add-port ovsbr0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.56.102

Note down the container IDs and execute the following.

sudo ~/pipework/pipework ovsbr0 <cid> 11.1.1.3/24 @10
sudo ~/pipework/pipework ovsbr0 <cid> 11.1.1.4/24 @20

Following is the OVS bridge output on host 1. It shows the 2 veth interfaces corresponding to the 2 containers, as well as the GRE tunnel interface.

$ sudo ovs-vsctl show
91dc682f-5496-45ec-9113-7be0f5ecce56
    Bridge "ovsbr0"
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="192.168.56.105"}
        Port "veth1pl2852"
            tag: 10
            Interface "veth1pl2852"
        Port "veth1pl2747"
            tag: 20
            Interface "veth1pl2747"
    ovs_version: "2.3.0"

Following is the ifconfig output in container 1 of host 1:

# ifconfig
eth1      Link encap:Ethernet  HWaddr a6:8d:d9:3d:b2:e2  
          inet addr:11.1.1.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::a48d:d9ff:fe3d:b2e2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:67 errors:0 dropped:0 overruns:0 frame:0
          TX packets:51 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:13666 (13.6 KB)  TX bytes:5796 (5.7 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:112 (112.0 B)  TX bytes:112 (112.0 B)

Following is the ifconfig output in container 2 of host 1:

# ifconfig
eth1      Link encap:Ethernet  HWaddr 8a:86:c5:c0:7b:a3  
          inet addr:11.1.1.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::8886:c5ff:fec0:7ba3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:69 errors:0 dropped:0 overruns:0 frame:0
          TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:13798 (13.7 KB)  TX bytes:5778 (5.7 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Container 1 on host 2 has IP address 11.1.1.3, and container 2 on host 2 has IP address 11.1.1.4.

Test IP connectivity:

Verify that container 1 on host 1 (web container) is able to ping only container 1 on host 2 (db container). Similarly, verify that container 2 on host 1 (web container) is able to ping only container 2 on host 2 (db container).
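This check can be scripted from inside web container 1, assuming ping is available in the container image. The expected results follow the VLAN assignments above: the same-VLAN db container should answer and the cross-VLAN one should not.

```shell
#!/bin/sh
# Hedged sketch: run inside web container 1 (11.1.1.1, VLAN 10).
# Only the same-VLAN db container should answer; the cross-VLAN ping
# should time out because of the VLAN isolation.
check() {  # usage: check <ip> <expected: up|down>
  if ping -c 2 -W 1 "$1" >/dev/null 2>&1; then got=up; else got=down; fi
  echo "$1 expected=$2 got=$got"
}
check 11.1.1.3 up     # db container 1, same VLAN (10)
check 11.1.1.4 down   # db container 2, VLAN 20 - must be unreachable
```

The same script in web container 2 would use the reversed expectations: 11.1.1.4 up and 11.1.1.3 down.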

Test application connectivity:

For this, I executed the following commands in container 1 on host 1. They create a simple table, insert a row, and run a query on the postgres database.

psql -h 11.1.1.3 -p 5432  -U docker -c "CREATE TABLE projects ( title TEXT NOT NULL, description TEXT NOT NULL)"
psql -h 11.1.1.3 -p 5432  -U docker -c "INSERT into projects VALUES ('first', 'sample')"
psql -h 11.1.1.3 -p 5432  -U docker -c "SELECT * from projects"

Following is the final output:

# psql -h 11.1.1.3 -p 5432  -U docker -c "SELECT * from projects"
 title | description 
-------+-------------
 first | sample
(1 row)

I did a similar test from container 2 on host 1, this time against the database at 11.1.1.4.


4 thoughts on “Docker Networking – Part 2”

    1. hi
      Following is 1 way/example to do it using Pipework:
      sudo docker run -ti --net=none --name=ubuntu1 ubuntu /bin/bash
      sudo ~/pipework/pipework ovsbr0 ubuntu1 11.1.1.1/24 @10
      sudo ~/pipework/pipework ovsbr0 -i eth2 ubuntu1 12.1.1.1/24 @20

      As you might already know, Pipework is more of a hack to work around Docker networking issues. Native networking solutions are being developed in Docker by Socketplane and Weave, and they should be available officially soon.

      Sreenivas
