Network Virtualization – Openstack+ODL integration

In my previous blog, I covered the basics of Network Virtualization and gave an overview of the commercial solutions available today. The most popular open source Network Virtualization solution available today is Opendaylight (ODL). With the Hydrogen release, ODL can be downloaded in three flavors: Base edition, Virtualization edition and Service Provider edition. The Virtualization edition adds components like OVSDB, OpenDove and VTN, as well as Openstack integration; OVSDB, OpenDove and VTN are different approaches to providing multi-tenant networks. The OVSDB team has put together a nice set of hands-on tutorials for folks wanting to try out Openstack+ODL. I recently tried this out and was very impressed. In this blog, I will cover my experiences in trying out the Openstack+ODL integration hands-on.

ODL Virtualization edition:

The following picture is a block diagram of the components of the ODL Virtualization edition, with the Virtualization-specific blocks highlighted in yellow.


The following picture is a block diagram of the blocks involved in the Openstack+ODL integration.


Following are some points, as I understood them, about the Openstack+ODL integration:

  • Openstack’s ML2 plugin in the Neutron module interacts with ODL’s OVSDB Neutron application, which in turn programs Open vswitch using OVSDB for configuration and Openflow for the data path.
  • Either Openflow 1.0 or Openflow 1.3 can be used to configure Open vswitch. Openflow 1.3 supports a multi-table pipeline, which reduces the number of tunnels needed between Open vswitches.
  • Network Virtualization is achieved with an overlay network that uses VXLAN or GRE as the tunneling protocol between Open vswitches; VXLAN and GRE can also be used at the same time.
  • In the current implementation, the tunnels are created proactively and the MAC entries are populated statically by the controller into the Open vswitches. The only optimization present is that the tunnels get created on a host only when at least one VM is instantiated there (see the inspection sketch after this list).
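As a quick way to see these points in practice, here is a hedged sketch of how the tunnels and controller-pushed flows can be inspected on one of the hosts. It assumes the integration bridge is named br-int, as in the tutorial VM, and that ODL was started with the Openflow 1.3 option; adjust the bridge name and Openflow version for your setup.

    # Show the bridges and ports that ODL created via OVSDB; the VXLAN/GRE
    # tunnel ports list the peer host under options:remote_ip.
    sudo ovs-vsctl show

    # Confirm which OVSDB manager and Openflow controller the switch uses.
    sudo ovs-vsctl get-manager
    sudo ovs-vsctl get-controller br-int

    # Dump the flows pushed by the controller; the statically populated MAC
    # entries show up as dl_dst matches that forward either to a local VM
    # port or out of a tunnel port.
    sudo ovs-ofctl -O OpenFlow13 dump-flows br-int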

Trying out Openstack + ODL integration:

I used the following references:

  • This video has the recording of the Openstack+ODL session presented at the ODL summit 2014.
  • Brent’s blog covers the steps in detail and also includes a recording of an internal demo.

Summary of steps:

  1. An Openstack+ODL integrated VM is available here. I used the Fedora 20 VM. This VM has the Openstack controller, the ODL controller and devstack pre-integrated. Download the OVA file from the link.
  2. Import the OVA file using VirtualBox (any similar tool, such as VMware Fusion, can also be used).
  3. Clone the VM so that there are 2 instances of the same VM. One VM will host the Openstack controller, the ODL controller and one compute instance; the other VM will be used purely as a compute node. When cloning the VM, make sure to use the reinitialize MAC option so that the new VM gets a different set of MAC addresses.
  4. In the VirtualBox network settings for each VM, use 2 host-only adapters. The NAT networking option seems to cause issues.
  5. In VM1, changed the “” script to start ODL with the Openflow 1.3 option.
  6. In VM1, copied “local.conf.control” to the “devstack” directory and changed the IP address to reflect VM1’s eth1 IP address (see the local.conf sketch after this list).
  7. In VM2, changed the hostname to fedora-odl-2.
  8. In VM2, copied “local.conf.compute” to the “devstack” directory and changed the IP addresses to reflect VM2’s eth1 IP address as well as VM1’s controller IP address.
  9. Stacked Openstack in both VM1 and VM2.
  10. In VM2 (the compute instance), for some reason, stacking did not work until I restarted openvswitch; this problem happened consistently. The workaround, as explained in the references above, was to restart openvswitch manually using “sudo systemctl restart openvswitch”.
  11. Using the Horizon Openstack interface, instantiated 2 VMs in a private network. One of the instances gets started on VM1 and the other gets started on VM2.
  12. After the VM instantiation, saw that the overlay tunnels were created on both hosts using “sudo ovs-ofctl show”.
  13. From Horizon, logged into the VM consoles and was able to ping between the VMs. This proves that the 2 VMs are able to talk across the overlay network.
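For steps 6 and 8, the changes boil down to pointing devstack at the right local interface and at the ODL controller. The snippet below is only a hedged sketch of the kind of values I edited; the actual local.conf.control and local.conf.compute files ship with the VM, the variable names shown are the usual devstack/ODL ML2 ones, and the IP addresses are placeholders for the host-only adapter addresses.

    # --- VM1 (Openstack controller + ODL + compute): devstack/local.conf ---
    HOST_IP=192.168.56.101                         # placeholder: VM1 eth1 address
    SERVICE_HOST=$HOST_IP
    Q_PLUGIN=ml2
    Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight    # ODL acts as the ML2 mechanism driver
    ODL_MGR_IP=192.168.56.101                      # placeholder: ODL controller (VM1)

    # --- VM2 (compute only): devstack/local.conf ---
    HOST_IP=192.168.56.102                         # placeholder: VM2 eth1 address
    SERVICE_HOST=192.168.56.101                    # placeholder: points back to VM1
    ODL_MGR_IP=192.168.56.101

With these in place, step 9 is just running ./stack.sh from the devstack directory on each VM.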

I have just provided a very high-level description of the steps; the references above capture them in a lot more detail. After getting the basic ping between the VMs working, I also tried creating multiple private networks, instantiating VMs in those networks, and seeing how the data path gets mapped in Open vswitch using Openflow 1.3 dump commands.
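For anyone who wants to repeat that last experiment, the flow is roughly the one sketched below. The network, subnet, image and flavor names are illustrative placeholders (devstack's defaults will differ slightly), and the dump commands assume the Openflow 1.3 option and a br-int integration bridge as in the tutorial VM.

    # Create a second private network and a subnet on it.
    neutron net-create private2
    neutron subnet-create private2 10.0.2.0/24 --name private2-subnet

    # Boot a VM attached to the new network (image/flavor are placeholders).
    NET_ID=$(neutron net-list | awk '/private2/ {print $2}')
    nova boot --flavor m1.tiny --image cirros --nic net-id=$NET_ID vm3

    # On each host, see how the data path is mapped across the Openflow 1.3
    # tables and which tunnel/VM ports the flows point at.
    sudo ovs-ofctl -O OpenFlow13 dump-flows br-int
    sudo ovs-ofctl -O OpenFlow13 dump-ports-desc br-int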

Thanks to the ODL OVSDB team for doing an amazing job on this overall project, as well as for creating a VM that makes it so much easier for folks like me to try this out. Trying this hands-on gives a good feel for the overall concepts. Also, it’s great to see all of this working on a single host :) (I have 12 GB of RAM in my host machine, and I saw that RAM usage was close to 10 GB with the above 2 VM instances.)


6 thoughts on “Network Virtualization – Openstack+ODL integration”

  1. Automatic creation of overlay networks can be done by the existing Neutron device drivers in the ML2 plugin, right? What is the need for SDN in this architecture?

  2. hi Ram
    I assume your question is what the need for Opendaylight is here, since Openstack with the ML2 plugin can program networks. Following are the reasons that I can think of:
    Opendaylight abstracts the network well and has good knowledge of the underlay networks, in addition to separating the control and data planes. Please refer to the following demo from the Opendaylight project that shows how underlay visibility can help in making intelligent decisions.
    Opendaylight’s SAL layer has the different agent plugins that make it easy to talk to heterogeneous devices. This makes the ML2 layer very light; it does not need any device-level knowledge, since that part is offloaded to Opendaylight.

    I think both models (Openstack without Opendaylight and Openstack with Opendaylight) can co-exist. It is also possible that any other SDN controller could be used instead of Opendaylight.


    1. So the OpenDaylight or Ryu plugin on Neutron is enough if we have Openflow physical devices in the datacenter. If the network devices support Netconf, which is also supported in ODL, there is no need for a driver for that network device in Neutron. So we can make use of the ODL SDN controller to create virtual networks in a data center with Openflow switches, and we don’t need drivers for those switches in Neutron.

    1. But we should configure bridges before creating tenant networks. We should also configure trunks between the physical servers and switches, and between the switches themselves. The physical devices do not need to support VXLAN or GRE tunneling either, since all communication between hops is done using flow table entries.

      Firewalling and load balancing can also be moved to the SDN controller; the SDN controller just needs to provide the abstractions.

      What about virtual router support? Anyway, everything is controlled by flow entries, so there is no need for physical routers to create virtual routers between virtual subnets.
