Openflow history and some hands-on

In this blog, I will cover the evolution of Openflow (from 1.0 to 1.4), along with the hands-on experiments I tried with Openflow.

Openflow is a protocol through which a controller can program a switch's data plane, with the switch exposing flow tables as defined in the Openflow specification. Flow tables provide an abstraction, so the switch does not need to expose its internals.

Openflow 1.0(Dec 2009):


  • The controller talks to the Openflow switch using the Openflow protocol over a TLS secure channel.
  • A flow table consists of match fields to match packet headers, counters to count flow hits, and actions that define what happens to matching packets (sketched below).
  • There are 3 message types: controller-to-switch messages, asynchronous messages from the switch to the controller, and symmetric messages that either side can send.
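
To make the flow table abstraction concrete, here is a minimal Python sketch of an entry with match fields, counters and actions. The field names and action strings are my own illustration, not the actual Openflow wire format.

from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict            # e.g. {"in_port": 1, "eth_type": 0x800}
    actions: list          # e.g. ["output:2"]
    packet_count: int = 0
    byte_count: int = 0

def lookup(table, packet):
    # Return the actions of the first entry whose match fields all
    # agree with the packet headers, updating the hit counters.
    for entry in table:
        if all(packet.get(k) == v for k, v in entry.match.items()):
            entry.packet_count += 1
            entry.byte_count += packet.get("len", 0)
            return entry.actions
    return None  # miss: an Openflow 1.0 switch sends the packet to the controller

table = [FlowEntry({"in_port": 1, "eth_type": 0x800}, ["output:2"])]
print(lookup(table, {"in_port": 1, "eth_type": 0x800, "len": 98}))  # ['output:2']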

Openflow 1.1(Feb 2011):


Important additions to Openflow 1.1:

  • Most hardware switches have different tables for handling different functionality such as L2 processing, L3 processing and QoS, with a pipeline connecting the tables. Openflow 1.1 introduces multi-table support and a way to link the results of the individual tables.
  • To support functionality like multicast and multipath, a grouping feature is introduced, where groups can be created with a set of actions and packets can be directed to groups.
  • Support added for VLAN and MPLS.
  • Virtual port support (e.g. LAG) added.

Multi table support:
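
A minimal Python sketch of how linked tables behave, assuming each entry can accumulate actions and hand the packet to a later table via a goto instruction (the table and entry structures are illustrative only):

def run_pipeline(tables, packet):
    actions, table_id = [], 0
    while table_id is not None:
        entry = next((e for e in tables[table_id]
                      if all(packet.get(k) == v for k, v in e["match"].items())), None)
        if entry is None:
            break  # table miss
        actions += entry.get("apply", [])
        table_id = entry.get("goto")  # None ends the pipeline
    return actions

tables = {
    0: [{"match": {"in_port": 1}, "apply": ["meter:1"], "goto": 1}],        # ingress
    1: [{"match": {"eth_type": 0x800}, "apply": ["output:2"], "goto": None}],  # forwarding
}
print(run_pipeline(tables, {"in_port": 1, "eth_type": 0x800}))  # ['meter:1', 'output:2']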


Openflow 1.2(Dec 2011):

Important additions in Openflow 1.2:

  • Flexible match criteria with bitmasking and user-defined match fields (see the sketch after this list).
  • Flexible packet rewrite support.
  • IPv6 support.
  • Multiple controller support.
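
A minimal Python sketch of bitmasked matching: an entry matches if the masked packet field equals the masked match value. Addresses are converted to plain integers here for brevity; this is the general idea, not the OXM encoding itself.

import ipaddress

def masked_match(pkt_value: int, value: int, mask: int) -> bool:
    return (pkt_value & mask) == (value & mask)

net  = int(ipaddress.ip_address("10.0.1.0"))
mask = int(ipaddress.ip_address("255.255.255.0"))
dst  = int(ipaddress.ip_address("10.0.1.42"))
print(masked_match(dst, net, mask))  # True: 10.0.1.42 is in 10.0.1.0/24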

Openflow 1.3(June 2012):

Important additions in Openflow 1.3:

  • Flexible framework to express switch capabilities.
  • Flexible table-miss support. The table-miss entry is the default entry used when none of the entries in a table match (see the sketch after this list).
  • IPv6 extension header handling.
  • Per-flow meters.
  • Per-connection event filtering to optimize event delivery between multiple controllers and a switch.
  • PBB support.
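
A minimal Python sketch of table-miss handling: entries are checked in priority order, and a wildcard entry with priority 0 catches everything nothing else matched. The entry structure is my own illustration.

def lookup(entries, packet):
    for entry in sorted(entries, key=lambda e: -e["prio"]):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]
    return None  # no table-miss entry installed: the packet is dropped

entries = [
    {"prio": 32768, "match": {"in_port": 1}, "actions": ["output:2"]},
    {"prio": 0,     "match": {},             "actions": ["controller"]},  # table-miss
]
print(lookup(entries, {"in_port": 7}))  # ['controller']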

Openflow 1.4(Oct 2013):

Important additions in Openflow 1.4:

  • Makes the protocol extensible by using the TLV format in all locations (see the TLV sketch after this list).
  • Additional reasons for packets reaching the controller.
  • Optical port property setting.
  • Improved controller<->switch interaction, adding per-flow monitoring and communication of controller role changes to the switch.
  • Graceful handling of the flow-table-full scenario.
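
For context, here is a minimal Python sketch of the generic type-length-value (TLV) pattern: a receiver can skip unknown types, which is what makes the format extensible. This shows the general idea, not the exact Openflow 1.4 wire format.

import struct

def tlv_encode(t: int, value: bytes) -> bytes:
    return struct.pack("!HH", t, len(value)) + value

def tlv_decode(buf: bytes):
    # Yield (type, value) pairs; unknown types can simply be skipped.
    off = 0
    while off < len(buf):
        t, length = struct.unpack_from("!HH", buf, off)
        yield t, buf[off + 4: off + 4 + length]
        off += 4 + length

msg = tlv_encode(1, b"\x0a\x00\x01\x01") + tlv_encode(2, b"port-42")
print(list(tlv_decode(msg)))  # [(1, b'\n\x00\x01\x01'), (2, b'port-42')]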

Trying out Openflow:

The easiest way to experiment with Openflow is using Mininet and dpctl. In an earlier post, I talked about Mininet; it allows us to create a virtual network whose datapath can be programmed using Openflow and whose configuration is managed using ovsdb. dpctl is a command line tool for configuring and monitoring Openflow datapaths.

Installing Mininet:

git clone git://github.com/mininet/mininet
mininet/util/install.sh -n3fxwv

This will install Mininet with Openflow 1.3 support, the NOX Openflow controller and Open vSwitch.

Create the following sample topology:

sudo mn --topo single,2 --mac --switch user --pre ~/mininet/pre/

Contents of the --pre script:

py "Configuring network"
h1 ifconfig h1-eth0
h2 ifconfig h2-eth0

Topology would look like this:

h1 (h1-eth0) ---- [port 1] switch [port 2] ---- (h2-eth0) h2

I created the hosts in different subnets to illustrate some of the Openflow functionality in the switch/router.

Create the following route and ARP entries in the hosts to allow them to send traffic across subnets.

route add -net netmask gw h1-eth0
arp -s 00:00:00:00:00:03

route add -net netmask gw h2-eth0
arp -s 00:00:00:00:00:04

With no Openflow entries added in the switch, pinging from host1 to host2 in Mininet will not work, because the switch does not know how to forward the packets.

Add the following Openflow entries using dpctl:

sudo dpctl tcp: flow-mod table=0,cmd=add in_port=1,eth_type=0x800,ip_dst= apply:set_field=eth_src:0:0:0:0:0:4,set_field=eth_dst:0:0:0:0:0:2,output=2
sudo dpctl tcp: flow-mod table=0,cmd=add in_port=2,eth_type=0x800,ip_dst= apply:set_field=eth_src:0:0:0:0:0:3,set_field=eth_dst:0:0:0:0:0:1,output=1

The first flow matches IP packets destined to h2's IP address, rewrites the source and destination MAC address fields and sets the output port. The second flow does the same in the opposite direction, matching on h1's IP address. With the above 2 flows added, host1 should be able to ping host2 successfully.
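
As a sanity check of what these two entries do, here is a minimal Python sketch of the rewrite-and-forward behavior: a routed hop rewrites the MAC addresses and picks an output port, leaving the IP payload untouched. The values mirror the dpctl entries above.

def route(packet):
    if packet["in_port"] == 1:   # h1 -> h2
        packet.update(eth_src="00:00:00:00:00:04", eth_dst="00:00:00:00:00:02")
        return 2                 # output port
    if packet["in_port"] == 2:   # h2 -> h1
        packet.update(eth_src="00:00:00:00:00:03", eth_dst="00:00:00:00:00:01")
        return 1
    return None

pkt = {"in_port": 1, "eth_src": "00:00:00:00:00:01", "eth_dst": "00:00:00:00:00:03"}
print(route(pkt), pkt["eth_dst"])  # 2 00:00:00:00:00:02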

If the hosts are in the same subnet, the following flows are good enough:

sudo dpctl  tcp: flow-mod table=0,cmd=add in_port=1 apply:output=2
sudo dpctl  tcp: flow-mod table=0,cmd=add in_port=2 apply:output=1

In the above scenario, the switch is just doing bridging, so it is enough to set the correct output port.

To see flow statistics, run:

sudo dpctl  tcp: stats-flow table=0

The above command displays the table 0 statistics. The output would look like this:

stat_req{type="flow", flags="0x0", table="0", oport="any", ogrp="any", cookie="0x0", mask="0x0", match=oxm{all match}}

stat_repl{type="flow", flags="0x0", stats=[{table="0", match="oxm{in_port="1", eth_type="0x800", ipv4_dst="", ipv4_dst_mask=""}", dur_s="277", dur_ns="826000", prio="32768", idle_to="0", hard_to="0", cookie="0x0", pkt_cnt="1", byte_cnt="98", insts=[apply{acts=[set_field{field:eth_src="00:00:00:00:00:04"}, set_field{field:eth_dst="00:00:00:00:00:02"}, out{port="2"}]}]}, {table="0", match="oxm{in_port="2", eth_type="0x800", ipv4_dst="", ipv4_dst_mask=""}", dur_s="312", dur_ns="568000", prio="32768", idle_to="0", hard_to="0", cookie="0x0", pkt_cnt="1", byte_cnt="98", insts=[apply{acts=[set_field{field:eth_src="00:00:00:00:00:03"}, set_field{field:eth_dst="00:00:00:00:00:01"}, out{port="1"}]}]}]}

The table 0 statistics show the packet and byte counters as well as the age of each flow.

I also tried some of the more complex Openflow functionality like groups and multiple tables. I had some issues getting these to work; I have put the commands below along with the issues that I saw. Hopefully, someone can correct my mistakes…

Using groups:

I wanted to use the group functionality to have different L3 routes point to the same nexthop entry. I used something like this:

sudo dpctl  tcp: group-mod cmd=add,group=1,type=all apply:set_field=eth_dst:0:0:0:0:0:2,output=2
sudo dpctl  tcp: flow-mod table=0,cmd=add eth_type=0x800,ip_dst= apply:group=1
sudo dpctl  tcp: group-mod cmd=add,group=2,type=all apply:set_field=eth_dst:0:0:0:0:0:1,output=1
sudo dpctl  tcp: flow-mod table=0,cmd=add eth_type=0x800,ip_dst= apply:group=2

Group 1 here would correspond to nexthop1 and group 2 to nexthop2. Unfortunately, the ping did not work. I observed the following issues (a sketch of the intended behavior follows the list):

  • Group statistics never incremented (I used stats-group to see the group stats). The ping did not work because, it appears, packets were never directed to the group.
  • When I added 2 entries to a single table pointing to different groups, the second entry seemed to overwrite the first… I observed this when I dumped the flow table.
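
For reference, this is the behavior I expected from the entries above, as a minimal Python sketch: a flow points at a group id, and a group of type all runs the actions of each of its buckets on a copy of the packet. The structures are my own illustration.

groups = {
    1: {"type": "all", "buckets": [[("set_eth_dst", "00:00:00:00:00:02"), ("output", 2)]]},
    2: {"type": "all", "buckets": [[("set_eth_dst", "00:00:00:00:00:01"), ("output", 1)]]},
}

def apply_group(group_id, packet):
    # Execute every bucket of the group; type "all" clones the packet per bucket.
    for bucket in groups[group_id]["buckets"]:
        pkt = dict(packet)
        for action, arg in bucket:
            if action == "set_eth_dst":
                pkt["eth_dst"] = arg
            elif action == "output":
                print("send", pkt["eth_dst"], "out port", arg)

apply_group(1, {"eth_dst": "00:00:00:00:00:03"})  # send 00:00:00:00:00:02 out port 2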

Using multiple tables:

Openflow 1.1 introduced multi-table support, which is closer to how hardware is normally implemented. In a typical packet-processing ASIC, there is an ingress stage, a forwarding stage and an egress stage. I tried to model this with 3 linked tables in Openflow.

I tried the following script:

sudo dpctl tcp: meter-mod cmd=add,flags=1,meter=1 drop:rate=10000
sudo dpctl  tcp: flow-mod table=0,cmd=add in_port=1 meter:1 goto:1
sudo dpctl tcp: flow-mod table=0,cmd=add,prio=0 goto:1
sudo dpctl  tcp: flow-mod table=1,cmd=add in_port=1,eth_type=0x800,ip_dst= apply:set_field=eth_src:0:0:0:0:0:4,set_field=eth_dst:0:0:0:0:0:2,output=2 goto:2
sudo dpctl  tcp: flow-mod table=1,cmd=add in_port=2,eth_type=0x800,ip_dst= apply:set_field=eth_src:0:0:0:0:0:3,set_field=eth_dst:0:0:0:0:0:1,output=1
sudo dpctl  tcp: flow-mod table=2,cmd=add eth_type=0x800,eth_dst=0:0:0:0:0:2 apply:set_field=ip_dscp:5

  • Table 0 is used for the ingress stage (h1->h2), table 1 for the forwarding stage, table 2 for the egress stage.
  • The first flow entry in table 0 attaches meter 1; the second is a default table-miss entry that points to table 1 (a sketch of what a meter does follows this list).
  • The first entry in table 1 handles the forwarding stage for h1->h2, the second entry in table 1 handles h2->h1.
  • Table 2 is used for the egress stage, where the DSCP is overwritten based on the L2 destination.
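
For context, an Openflow meter is conceptually a token bucket whose band (here, drop) applies to packets above the configured rate. A minimal Python sketch, assuming the rate is in kb/s as in the default Openflow units; this is not the actual switch implementation.

import time

class Meter:
    def __init__(self, rate_kbps):
        self.rate = rate_kbps * 1000 / 8   # bytes per second
        self.tokens = self.rate            # allow a burst of one second
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True      # under rate: pass the packet on
        return False         # over rate: the drop band applies

meter = Meter(rate_kbps=10000)
print(meter.allow(1500))  # True while under 10 Mbit/s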

These were the issues I faced:

  • I was not able to add a table-miss entry. The Openflow 1.3 spec states: “The flow entry that wildcards all fields and has priority equal 0 is called the table-miss entry. The table-miss flow entry specifies how to process packets unmatched by other flow entries in the table”. When I add a table-miss entry in table 0 to forward packets to table 1, I get an error on addition.
  • For packet flow from host1->host2, the actions specified in table 2 were not taking effect. For example, as part of table 1's actions, I modified the MAC addresses and specified the output port; as part of table 2's actions, I modified the DSCP. I did not see the DSCP modification when I captured the packet in Wireshark. I tried a few other modifications as well and saw similar behavior, where the actions of both tables together did not take effect.

I also found the documentation for dpctl very minimal. The options are not explained clearly, and it would have been good if the man page included more practical examples. For some cases, I referred to the source code for more details; the dpctl header file in particular is useful.

