Group based policy in Opendaylight

This is a continuation of my previous blog on Group based policy (GBP). In this blog, I will cover the GBP features in the Opendaylight Helium release, the use cases that are published in the Opendaylight wiki, as well as a different use case that I tried out.

Group based policy in Opendaylight:

Following diagram is from Opendaylight GBP wiki:

[Diagram: GBP architecture in Opendaylight]

  • Openstack here is the orchestration layer, and it communicates the policy to ODL through the Neutron APIs.
  • The policy is expressed in a high-level data language and is translated and programmed into hardware through renderers.
  • In the ODL Helium release, the only available renderer is Openflow, and it uses an ovsdb overlay design.
  • An Opflex renderer is currently being developed, and it is also based on the ovsdb overlay. When Opflex is available, there will be an Opflex agent in the openvswitch that will eventually do the low-level translation and programming.

Following are some key terms as far as policy is concerned:

  • End point – can be any device, such as a VM, an interface, etc.
  • End point group (EPG) – a collection of end points that share the same policy.
  • Contract – Contracts are between EPGs, and they define how EPGs should communicate with each other.
  • Clause, Subject – Specify the details of the Contract.
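The way these objects relate can be sketched in a few lines of Python. This is a simplified, hypothetical model for illustration only; the field names do not match the actual ODL YANG model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contract:
    name: str
    subjects: List[str]          # e.g. ["allow-icmp-subject"]

@dataclass
class EndPointGroup:
    name: str
    provides: List[Contract] = field(default_factory=list)
    consumes: List[Contract] = field(default_factory=list)

@dataclass
class EndPoint:
    mac: str
    epg: EndPointGroup           # every end point belongs to an EPG

# Two EPGs linked by one contract: "webserver" provides it, "client1" consumes it.
pingall_web = Contract('pingall+web', ['allow-http-subject', 'allow-icmp-subject'])
webserver = EndPointGroup('webserver', provides=[pingall_web])
client1 = EndPointGroup('client1', consumes=[pingall_web])
vm1 = EndPoint('00:00:00:00:35:02', client1)
```

The key point is that policy attaches to groups and contracts, never to individual end points.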

Use cases:

Following are 3 examples/use cases that are published in the Opendaylight wiki.

  1. The example here shows 8 end points classified into 2 EPGs and split across 2 different hosts. End points are simulated as Mininet hosts.
  2. The example here is the same as above, the main difference is that the hosts are simulated as docker containers.
  3. The example here shows 12 end points classified into 3 EPGs and split across 2 different hosts. End points are simulated as docker containers.

I have tried out examples 1 and 3 above, and they worked great. Let's walk through example 3 in greater detail to illustrate the different concepts:

Example 3 achieves the following:

  • The “client1” and “client2” EPGs cannot ping each other.
  • The “client1” and “client2” EPGs can ping the “webserver” EPG.
  • “client1” and “client2” can access the web server in the “webserver” EPG, but the “webserver” EPG cannot access any ports, including web server ports, in the client EPGs.
  • The endpoints associated with each EPG are split between the 2 hosts; this illustrates that the policy is applied irrespective of where an endpoint is present.
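The reachability that example 3 enforces can be summarized in a small sketch. The function below is hypothetical shorthand for the effective policy, not part of the ODL scripts (ICMP is treated as bidirectional, which matches the ping behavior described here):

```python
def is_allowed(src_epg, dst_epg, traffic):
    """Return True if 'traffic' ('icmp' or 'http') may flow src -> dst."""
    if src_epg == dst_epg:
        return True                      # intra-EPG traffic is unrestricted
    # "client1" and "client2" consume the "pingall+web" contract that
    # "webserver" provides: they may ping and reach port 80 on webserver;
    # nothing else crosses EPG boundaries.
    consumers = {'client1', 'client2'}
    if src_epg in consumers and dst_epg == 'webserver':
        return True
    if src_epg == 'webserver' and dst_epg in consumers and traffic == 'icmp':
        return True                      # ping works in both directions
    return False

assert not is_allowed('client1', 'client2', 'icmp')    # clients cannot ping each other
assert is_allowed('client1', 'webserver', 'http')      # clients reach the web server
assert not is_allowed('webserver', 'client1', 'http')  # webserver cannot reach client ports
```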

There are 3 EPGs, called “client1”, “client2”, and “webserver”, as described below. “client1” and “client2” consume the contract “pingall+web” that the “webserver” EPG provides.

endpointGroups = [
                   {'name':'client1',
                    'providesContracts' : [], #List of contract names provided
                    'consumesContracts' : ['pingall+web'],
                    'id' : '1eaf9a67-a171-42a8-9282-71cf702f61dd',
                    },
                   {'name':'client2',
                    'providesContracts' : [], #List of contract names provided
                    'consumesContracts' : ['pingall+web'],
                    'id' : '5e6c787c-156a-49ed-8546-547bdccf283c',
                    },
                  {'name':'webserver',
                    'providesContracts' : ['pingall+web'], #List of contract names provided
                    'consumesContracts' : [],
                    'id' : 'e593f05d-96be-47ad-acd5-ba81465680d5',
                   }
                  ]

Contract “pingall+web” is defined like below. Contract “pingall+web” has a clause “allow-http-clause”, and the clause “allow-http-clause” references 2 subjects, “allow-http-subject” and “allow-icmp-subject”.

[
             {'name':'pingall+web',
              'id':'22282cca-9a13-4d0c-a67e-a933ebb0b0ae',
              'subject': [
                {'name': 'allow-http-subject',
                 'rule': [
                    {'name': 'allow-http-rule',
                     'classifier-ref': [
                        {'name': 'http-dest',
                         'direction': 'in'},
                        {'name': 'http-src',
                         'direction': 'out'}
                          ]
                     }
                          ]
                 },
                {'name': 'allow-icmp-subject',
                 'rule': [
                    {'name': 'allow-icmp-rule',
                     'classifier-ref': [
                        {'name': 'icmp'}
                                                  ]}
                          ]
                 }],
              'clause': [
                {'name': 'allow-http-clause',
                 'subject-refs': [
                    'allow-http-subject',
                    'allow-icmp-subject'
                    ]
                 }
                        ]
              }]

Till now, no networking specifics have been mentioned. The classifiers under the subjects add the networking specifics; in this case, they specify the protocols and port numbers.

{'classifier-instance':
                [
                {'name': 'http-dest',
                'classifier-definition-id': '4250ab32-e8b8-445a-aebb-e1bd2cdd291f',
                'parameter-value': [
                    {'name': 'type',
                     'string-value': 'TCP'},
                    {'name': 'destport',
                     'int-value': '80'}
                ]},
                {'name': 'http-src',
                'classifier-definition-id': '4250ab32-e8b8-445a-aebb-e1bd2cdd291f',
                'parameter-value': [
                    {'name': 'type',
                     'string-value': 'TCP'},
                    {'name': 'sourceport',
                     'int-value': '80'}
                ]},
                {'name': 'icmp',
                'classifier-definition-id': '79c6fdb2-1e1a-4832-af57-c65baf5c2335',
                'parameter-value': [
                    {'name': 'proto',
                     'int-value': '1'}
                                    ]
                 }
                 ]
             }
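The wiki scripts push these dictionaries to the controller over RESTCONF. A minimal sketch of what that looks like is below; the URL path, port 8181, and the default admin/admin credentials are assumptions based on a typical Helium install, and the tenant structure is simplified, so check the wiki scripts for the exact layout:

```python
import json

def build_tenant_request(controller, tenant_id, endpoint_groups, contracts, classifiers):
    """Assemble the (url, body) pair for a RESTCONF PUT of the tenant policy."""
    url = ('http://%s:8181/restconf/config/policy:tenants/tenant/%s'
           % (controller, tenant_id))
    tenant = {'tenant': [{'id': tenant_id,
                          'endpoint-group': endpoint_groups,
                          'contract': contracts,
                          'subject-feature-instances': classifiers}]}
    return url, json.dumps(tenant)

# Hypothetical controller address and tenant id, for illustration:
url, body = build_tenant_request('192.168.56.1', 'tenant-red', [], [], {})
# import requests
# requests.put(url, data=body, auth=('admin', 'admin'),
#              headers={'Content-Type': 'application/json'})
```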

Following are some notes from when I tried the above examples:

  • There is no need to compile the Opendaylight source code checkout for group based policy as mentioned in the use case; the binary from the Opendaylight Helium SR-1 release works fine for the above examples.
  • I used docker version 1.4.1 and Openvswitch 2.3.0, and the examples work fine with these. The examples in the wiki were tried with docker version 1.0.1.

Web Client, Database Use case:

  • Using the above example, I extended the use case as below. I replaced the clients with an apache webserver/dbclient container and the webserver with a postgres database server container. From a policy perspective, it remains the same: all endpoints can ping each other except across the 2 client EPGs, the dbclient can access the database in the postgres container, and the web server in a client EPG is accessible within that EPG but not outside it.
  • The respective config files are in github. The 2 containers used are “smakam/apachedocker” and “smakam/postgresdocker”, and they are in docker hub. “apachedocker” has a running apache web server, and it also has the curl and psql clients installed. “postgresdocker” has the postgres DB server running, and it also has curl installed.
  • I also modified “infrastructure_launch.py” to not execute “/bin/bash” as part of container creation; the modified file is in the same github location. This allows me to start the container as a daemon with the services running in the background.
  • I used docker 1.4.1, which allows me to use “docker exec” to demonstrate the use case with the psql client. “docker exec” was not available in older docker versions.
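Driving the psql client through “docker exec” can be sketched as below. The wrapper function itself is hypothetical; the container name and DB credentials follow the example later in this post:

```python
def docker_exec_psql(container, db_host, sql, user='docker', port=5432):
    """Build the argv for running a psql statement inside 'container'."""
    return ['docker', 'exec', container,
            'psql', '-h', db_host, '-p', str(port), '-U', user, '-c', sql]

cmd = docker_exec_psql('h35_2', '10.0.36.4', 'SELECT * from projects')
# On a host with the containers running, execute it with:
# import subprocess; print(subprocess.check_output(cmd))
```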

After executing the start_poc script, following are the endpoints created in host1:

# docker ps
CONTAINER ID        IMAGE                          COMMAND                CREATED             STATUS              PORTS               NAMES
b38669eb7123        smakam/apachedocker:latest     "/usr/sbin/apache2ct   4 seconds ago       Up 3 seconds                            h37_3               
5ca81a1a956c        smakam/apachedocker:latest     "/usr/sbin/apache2ct   4 seconds ago       Up 4 seconds                            h37_2               
f1a57aa50794        smakam/postgresdocker:latest   "su postgres -c '/us   5 seconds ago       Up 4 seconds                            h36_3               
bd618d7d126c        smakam/postgresdocker:latest   "su postgres -c '/us   5 seconds ago       Up 4 seconds                            h36_2               
9302f5513128        smakam/apachedocker:latest     "/usr/sbin/apache2ct   6 seconds ago       Up 5 seconds                            h35_3               
ae01a9295a0d        smakam/apachedocker:latest     "/usr/sbin/apache2ct   6 seconds ago       Up 5 seconds                            h35_2               

Following are the endpoints created in host 2:

# docker ps
CONTAINER ID        IMAGE                          COMMAND                CREATED             STATUS              PORTS               NAMES
7a4df92cba88        smakam/apachedocker:latest     "/usr/sbin/apache2ct   7 seconds ago       Up 7 seconds                            h37_5               
ed4c0cfd1a94        smakam/apachedocker:latest     "/usr/sbin/apache2ct   7 seconds ago       Up 7 seconds                            h37_4               
df0e0b16be73        smakam/postgresdocker:latest   "su postgres -c '/us   8 seconds ago       Up 7 seconds                            h36_5               
18a41631ffff        smakam/postgresdocker:latest   "su postgres -c '/us   8 seconds ago       Up 8 seconds                            h36_4               
131387bb155d        smakam/apachedocker:latest     "/usr/sbin/apache2ct   9 seconds ago       Up 8 seconds                            h35_5               
9da6dcb3c8f9        smakam/apachedocker:latest     "/usr/sbin/apache2ct   9 seconds ago       Up 9 seconds                            h35_4               

We should be able to ping any of the hosts except across the 2 client EPGs.

Let's try to access the DB server from one of the clients; it should work fine.

root@h35_2:/# psql -h 10.0.36.4 -p 5432  -U docker -c "CREATE TABLE projects ( title TEXT NOT NULL, description TEXT NOT NULL)"
CREATE TABLE
root@h35_2:/# psql -h 10.0.36.4 -p 5432  -U docker -c "INSERT into projects VALUES ('first', 'sample')"
INSERT 0 1
root@h35_2:/# psql -h 10.0.36.4 -p 5432  -U docker -c "SELECT * from projects"
 title | description 
-------+-------------
 first | sample
(1 row)

Access to the web server is restricted to within the EPG. Hosts in the “client1” EPG will be able to access the web server within the “client1” EPG. The “client2” and “dbserver” EPGs will not be able to access the web server in the “client1” EPG.

Accessing the web server within the “client1” EPG works fine:

root@h35_2:/# curl 10.0.35.3
  <!--
.
.

Access webserver from “dbserver” EPG to “client1” EPG fails:

root@h36_2:/# curl http://10.0.35.3 --connect-timeout 3
curl: (28) Connection timed out after 3001 milliseconds

Access webserver from “client2” EPG to “client1” EPG fails:

root@h37_2:/# curl http://10.0.35.3 --connect-timeout 3
curl: (28) Connection timed out after 3001 milliseconds
