In my previous blog, I covered the basics of Network Virtualization and gave an overview of the commercial solutions available today. The most popular open-source Network Virtualization solution available today is OpenDaylight (ODL). With the Hydrogen release, ODL can be downloaded in 3 flavors: Base edition, Network Virtualization edition and Service Provider edition. The Network Virtualization edition adds components like OVSDB, OpenDove and VTN, as well as OpenStack integration. OVSDB, OpenDove and VTN are different approaches to providing multi-tenant networks. The OVSDB team has put together a nice set of hands-on tutorials for folks wanting to try out OpenStack+ODL. I recently tried this out and I was very impressed. In this blog, I will cover my experiences with the hands-on OpenStack+ODL integration.
ODL Virtualization edition:
The following picture is a block diagram of the components of the ODL Virtualization edition, with the virtualization-specific blocks highlighted in yellow.
The following picture is a block diagram of the blocks involved in the OpenStack+ODL integration.
Following are some points, as I understood them, regarding the OpenStack+ODL integration:
- OpenStack’s ML2 plugin in the Neutron module interacts with the OVSDB Neutron application, which in turn programs Open vSwitch, using OVSDB for configuration and OpenFlow for the data path.
- Either OpenFlow 1.0 or OpenFlow 1.3 can be used to configure Open vSwitch. OpenFlow 1.3 supports a multi-table pipeline, which optimizes the number of tunnels needed between Open vSwitches.
- Network Virtualization is achieved using an overlay network, with VXLAN or GRE as the tunneling protocol between Open vSwitches. VXLAN and GRE can be used at the same time.
- In the current implementation, the tunnels are created proactively and the MAC entries are populated statically by the controller into the Open vSwitches. The only optimization present is that a tunnel gets created on a host only once there is at least one VM instantiated there.
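As a rough sketch of the wiring described above (assuming ODL's default ports, 6640 for OVSDB and 6633 for OpenFlow, and a hypothetical controller address of 192.168.56.10 on a host-only network), pointing an Open vSwitch at the controller looks something like this. In the integrated VM, devstack does this for you; the commands are shown only to illustrate the two channels involved:

```shell
# OVSDB channel: the ODL controller manages switch configuration over
# port 6640. 192.168.56.10 is a hypothetical address for the controller VM.
sudo ovs-vsctl set-manager tcp:192.168.56.10:6640

# OpenFlow channel: ODL programs the data path on the integration bridge
# over port 6633. (ODL normally creates br-int and sets this itself.)
sudo ovs-vsctl set-controller br-int tcp:192.168.56.10:6633

# Let the bridge speak OpenFlow 1.3, matching the ODL OpenFlow 1.3 option,
# so the multi-table pipeline can be used.
sudo ovs-vsctl set bridge br-int protocols=OpenFlow13
```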
Trying out OpenStack+ODL integration:
I used the following references:
- This video is a recording of the OpenStack+ODL session presented at the ODL Summit 2014.
- Brent’s blog covers the steps in detail and also includes a recording of an internal demo.
Summary of steps:
- An OpenStack+ODL integrated VM is available here. I used the Fedora 20 VM, which has the OpenStack controller, the ODL controller and devstack all integrated. Download the OVA file from the link.
- Install the OVA file using VirtualBox (or any similar tool, such as VMware Fusion).
- Clone the VM so that there are 2 instances of the same VM. VM1 will run the OpenStack controller, the ODL controller and 1 compute instance; VM2 will be used purely as a compute node. When cloning the VM, make sure to use the reinitialize-MAC option so that the new VM gets a different set of MAC addresses.
- In the VirtualBox network settings for each VM, use 2 host-only adapters. The NAT networking option seems to cause issues.
- In VM1, I changed the “RUN.sh” script to start ODL with the OpenFlow 1.3 option.
- In VM1, I copied “local.conf.control” to the “devstack” directory and changed the IP address to reflect VM1’s eth1 IP address.
- In VM2, I changed the hostname to fedora-odl-2.
- In VM2, I copied “local.conf.compute” to the “devstack” directory and changed the IP addresses to reflect VM2’s eth1 IP address as well as VM1’s controller IP address.
- Stacked OpenStack in both VM1 and VM2.
- In VM2 (the compute node), for some reason, stacking did not work until I restarted Open vSwitch. This problem happened consistently. The workaround, as explained in the references above, is to restart Open vSwitch manually with “sudo systemctl restart openvswitch”.
- Using the Horizon OpenStack interface, I instantiated 2 VMs in the private network. 1 of the instances gets started in VM1 and the other gets started in VM2.
- After the VM instantiation, I saw that the overlay tunnels were created on both hosts using “sudo ovs-ofctl show”.
- From Horizon, I logged into the VM consoles and was able to ping between the VMs. This proves that the 2 VMs are able to talk across the overlay network.
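For reference, these are the kinds of commands useful for inspecting the overlay from each host (the tools are standard OVS utilities; the bridge name br-int is what the integration uses, but port names may differ on your setup):

```shell
# List bridges and ports; VXLAN/GRE tunnel ports show up here along with
# their remote_ip options pointing at the peer host.
sudo ovs-vsctl show

# Show the OpenFlow view of the integration bridge; the tunnels appear as
# OpenFlow ports. Pass -O OpenFlow13 since the bridge speaks OpenFlow 1.3.
sudo ovs-ofctl -O OpenFlow13 show br-int

# Dump the multi-table OpenFlow 1.3 pipeline that the controller programmed,
# including the statically populated MAC entries.
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int
```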
I have only provided a very high-level description of the steps; the references above capture them in much more detail. After getting the basic ping between VMs working, I also tried creating multiple private networks, instantiated VMs in those networks, and saw how the data path gets mapped in Open vSwitch using OpenFlow 1.3 dump commands.
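As a hint for that exercise (a sketch only; the exact flow contents depend on the ODL version), each tenant network is isolated by its own tunnel key, which you can pick out of the 1.3 flow dump:

```shell
# Each private network gets its own tunnel ID (VXLAN VNI or GRE key), so
# filtering the flow dump on tun_id shows one set of entries per tenant
# network, with MAC matches steering traffic to the right tunnel port.
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int | grep tun_id
```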
Thanks to the ODL OVSDB team for doing an amazing job on this project, and for creating a VM that makes it so much easier for folks like me to try this out. Trying this hands-on gives a good feel for the overall concepts. Also, it’s great to see all of this working on a single host :) (My host machine has 12 GB of RAM, and I saw RAM usage close to 10 GB with the above 2 VM instances.)