Network Virtualization – Overview and popular commercial solutions

In this blog, I will cover the following:

  • Overview and need for Network Virtualization
  • Relation between Cloud management and Network Virtualization
  • Network Virtualization approaches
  • Overlay Networks
  • Service chaining
  • Popular Network Virtualization solutions – VMware NSX, Juniper OpenContrail, Cisco ACI.

Overview and need for Network Virtualization

Network Virtualization abstracts networking resources into a logical/software model so that the same set of physical resources can be shared by multiple tenants in a secure and isolated manner. There are 2 kinds of networking resources: physical devices like routers and switches, and appliances like firewalls, load balancers etc. An appliance can be either physical or virtual. NFV (Network Functions Virtualization) can be considered a subset of Network Virtualization.

Network Virtualization is similar to Server Virtualization: in server virtualization, compute resources are virtualized, while in network virtualization, networking resources are virtualized. Network virtualization took much longer to come to the market than server virtualization. The following picture gives a good comparison between the two.


Need for Network Virtualization

  • Multi-tenancy – Cloud solutions allow the same physical infrastructure to be used by multiple tenants. Network virtualization simplifies multi-tenancy.
  • Mobility of workloads – Workloads or VMs should be independent of the physical host on which they are located and should be movable between hosts based on need. Network virtualization makes VMs in 2 different L3 networks look like they are in the same L2 domain.
  • Speed of provisioning – Network provisioning has traditionally been a painful process of configuring a set of disparate physical and virtual devices. The logical model of Network virtualization allows for ease of provisioning, and the network complexity is hidden from the user.
  • Scale – VLAN has been a limiting factor both from a scale perspective (the 12-bit VLAN ID allows only 4096 VLANs, as shown below) and from a broadcast-handling perspective. There is a need for a solution that scales.
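
To put the scale point in numbers, here is a minimal Python sketch comparing the 12-bit VLAN ID space with the 24-bit VXLAN VNI space covered later in this article:

```python
# VLAN IDs are 12 bits; VXLAN VNIs (covered below) are 24 bits.
vlan_segments = 2 ** 12    # 4096 logical networks
vxlan_segments = 2 ** 24   # 16,777,216 logical networks
print(vlan_segments, vxlan_segments)
```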

Network Virtualization – Use cases and relation to Cloud management

Network Virtualization is typically used in private and public data centers to create IaaS (Infrastructure as a Service). The concepts of Network virtualization can also be extended to Enterprise and Service provider networks. Cloud management solutions involve managing compute, storage and network. The model emerging now is that the networking piece of cloud management is typically done by SDN network controllers (like VMware NSX, the OpenContrail controller, Cisco APIC), and the cloud management software talks to the network controller for network provisioning. The following picture illustrates the role of the Cloud orchestrator and its relation to the SDN controller.


Network Virtualization Approaches

Broadly, there are 2 approaches currently used for Network virtualization.

  1. Overlay Networks – In this approach, the L2 packet from the VM is encapsulated into an L3 tunnel. The encapsulation can be done either in the vSwitch or, for bare-metal scenarios, in physical gateways. The majority of vendors are adopting the Overlay approach.
  2. Integrated fabric – In this approach, a centralized controller programs the virtual and physical switches to create logical switches and routers. Typically, OpenFlow is used to program the fabric. Very few vendors use this approach; examples are Big Switch Networks' Big Cloud Fabric and NEC's ProgrammableFlow fabric.

Overlay Networks

Overlay networks are a critical part of Network virtualization. Overlay networks are logical networks created on top of underlay networks. Underlay networks typically consist of physical switches connected to each other using some L3 protocol. Overlay networks use some kind of tunneling protocol to create connectivity between 2 VMs on top of the physical underlay network. Popular data-center tunneling protocols are VXLAN, STT and NVGRE.


VXLAN is a common tunneling protocol in the data center space. The following is the packet format for VXLAN.


The original Ethernet header gets encapsulated into a VXLAN header by the VXLAN gateway. The VXLAN gateway can either be a virtual gateway located in the virtual switch or a physical gateway like a switch/router. The VXLAN ID is 24 bits wide, so it allows for 2^24 logical networks. The VXLAN gateway maintains a mapping from VM MAC addresses to their gateways. There are different ways to derive this mapping, common approaches being L3 multicast, static provisioning, and VM discovery/programming using a centralized controller.
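
As a concrete illustration of the format, here is a minimal Python sketch that builds the 8-byte VXLAN header (flags, reserved bits, 24-bit VNI) that a gateway prepends, along with the outer UDP/IP/Ethernet headers, to the original frame. The VNI value and inner frame below are placeholders:

```python
import struct

VXLAN_UDP_PORT = 4789        # IANA-assigned UDP port for VXLAN
FLAG_VNI_VALID = 0x08        # "I" flag: the VNI field is valid

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags(8) | reserved(24) | VNI(24) | reserved(8)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    word1 = FLAG_VNI_VALID << 24     # flags in the top byte, reserved bits zero
    word2 = vni << 8                 # VNI occupies the upper 24 bits
    return struct.pack(">II", word1, word2)

inner_frame = b"..."                 # original Ethernet frame from the VM
vxlan_packet = vxlan_header(vni=5001) + inner_frame
# The gateway then wraps vxlan_packet in outer UDP/IP/Ethernet headers.
```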

The following picture shows how VMs in 2 different hosts on 2 different networks communicate over the tunneled network. Red VMs use the red VXLAN tunnel and green VMs use the green VXLAN tunnel.
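
To make the tunnel separation concrete, here is a toy sketch of the per-VNI forwarding state a VXLAN gateway might keep (MAC address to remote gateway IP); the VNIs, MACs and IPs are invented for illustration:

```python
# Hypothetical per-VNI mapping from VM MAC address to remote gateway (VTEP) IP.
# Keeping a separate table per VNI is what isolates the red and green segments.
forwarding_tables = {
    5001: {"52:54:00:aa:bb:01": "10.0.0.2"},   # "red" logical network
    5002: {"52:54:00:cc:dd:02": "10.0.0.3"},   # "green" logical network
}

def lookup_remote_vtep(vni, dst_mac):
    """Return the remote VTEP IP for a destination MAC, or None (flood/learn)."""
    return forwarding_tables.get(vni, {}).get(dst_mac)

print(lookup_remote_vtep(5001, "52:54:00:aa:bb:01"))  # -> 10.0.0.2
```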


VXLAN can be originated either in the virtual switch or in a physical gateway. The following picture shows both scenarios.


Service chaining

There are various network services like firewall, load balancing, NAT, encryption and WAN optimization. These are performed either by dedicated physical appliances or by virtual appliances. NFV aims at virtualizing networking services into virtual appliances. Service chaining is the term used for connecting workloads through a chain of these network services.
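
As a toy illustration of the idea, the Python sketch below models a chain as an ordered pipeline of functions, each standing in for a network service; the policies and addresses are invented, and real chains of course steer actual packets through appliances rather than function calls:

```python
# Each "service" inspects or transforms a simplified packet (a dict) and may drop it.

def firewall(packet):
    # Hypothetical policy: only allow traffic to the web ports.
    if packet["dst_port"] not in (80, 443):
        return None                        # drop
    return packet

def load_balancer(packet):
    # Pick a backend by hashing the source IP (very simplified).
    backends = ["10.1.0.11", "10.1.0.12"]
    packet["dst_ip"] = backends[hash(packet["src_ip"]) % len(backends)]
    return packet

def apply_chain(packet, chain):
    """Pass the packet through each service in order; stop if any service drops it."""
    for service in chain:
        packet = service(packet)
        if packet is None:
            return None
    return packet

result = apply_chain({"src_ip": "192.0.2.10", "dst_port": 443},
                     [firewall, load_balancer])
print(result)
```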

The following picture shows 2 VMs connected through a service VM which performs some network service.


The following picture shows a service chaining use case for an application hosted in the data center. A stateful firewall and an application delivery controller are the network services here.


Popular Network Virtualization solutions

Following are some popular Network Virtualization solutions:

  • VMware NSX
  • Juniper OpenContrail
  • Cisco ACI

The goal here is not to compare the solutions but just to give an overview of how each solution works. OpenDaylight (ODL) is an open-source Network virtualization solution; I have not covered ODL in this article.

VMware NSX

VMware NSX is a platform for Network Virtualization. The following picture shows where the NSX platform fits in the overall Cloud architecture. The NSX platform sits between the Cloud management platform and the hypervisor. The cloud management solution could be VMware vCloud Director, OpenStack, CloudStack etc.


The NSX product comes in 2 flavors:

  1. NSX for multi-hypervisor environments.
  2. NSX for vSphere environments. This is a combination of NVP and vSphere vShield.

Obviously, NSX for vSphere offers tighter integration than the multi-hypervisor flavor.

The following picture highlights the data plane, control plane and management plane of the NSX solution.


Following are more details on the components of the NSX solution:

Physical switches:

Physical switches provide the underlay fabric for IP transport. A leaf-spine topology is preferred. These switches are not from VMware.

Physical and Virtual appliances:

The appliances provide services like NAT, load balancing, firewall, VPN etc. These appliances can be tied together with service chaining. The appliances can run in the host itself, which avoids unnecessary traffic in the data center.

Soft switches:

The soft switch is Open vSwitch in the case of the multi-hypervisor solution and NSX vSwitch in the case of the vSphere solution. OVSDB is used to talk to Open vSwitch. VXLAN tunnels originate from the soft switches.
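
For a feel of how a VXLAN tunnel endpoint is set up on Open vSwitch, here is a minimal sketch using the standard ovs-vsctl CLI (NSX itself programs the switch over OVSDB rather than the CLI); the bridge/port names, remote VTEP IP and VNI are placeholders, and Open vSwitch must be installed:

```python
import subprocess

def add_vxlan_port(bridge="br-int", port="vxlan0",
                   remote_ip="10.0.0.2", vni=5001):
    """Create a VXLAN tunnel port on an existing OVS bridge via ovs-vsctl."""
    subprocess.run(
        ["ovs-vsctl", "add-port", bridge, port, "--",
         "set", "interface", port, "type=vxlan",
         f"options:remote_ip={remote_ip}", f"options:key={vni}"],
        check=True)

add_vxlan_port()
```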

Controller cluster:

The controller cluster has a northbound API to the cloud management platform and southbound APIs to the different physical and virtual devices. The controller cluster is highly redundant.

Service nodes:

Service nodes offload flooding/replication work from the hosts/hypervisors.


Gateways:

L2 gateways are used to talk to physical servers. L3 gateways are used to limit the broadcast domain. Gateways can be physical or virtual appliances. Third-party switches can be used as L2 gateways.

NSX manager:

The NSX manager is the management piece used to talk to the controller and the devices.

The following picture highlights the integration of vCloud Director with NSX.


To integrate with OpenStack, the OpenStack Neutron plugin is used to talk to NSX.
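
To show what that integration looks like from the tenant's side, here is a minimal sketch of creating a network through the Neutron REST API, which the configured plugin (NSX here) then realizes as a logical switch; the endpoint URL and token are placeholders:

```python
import requests

NEUTRON_URL = "http://controller:9696/v2.0"   # placeholder Neutron endpoint
TOKEN = "<keystone-token>"                    # placeholder auth token

# POST /v2.0/networks creates a tenant network; the backend plugin
# (e.g., the NSX plugin) maps it to a logical switch.
resp = requests.post(f"{NEUTRON_URL}/networks",
                     headers={"X-Auth-Token": TOKEN},
                     json={"network": {"name": "tenant-net-1"}})
resp.raise_for_status()
print(resp.json()["network"]["id"])
```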

Juniper OpenContrail

Juniper’s OpenContrail solution is a Network virtualization platform that is available both as open source and as a commercial version. The OpenContrail solution is modelled on Juniper’s L2 and L3 VPN technologies. BGP is used in the control plane and GRE is the preferred overlay method.

The following picture gives a high-level overview of the components of Juniper’s solution.


The following components comprise Juniper’s solution:

Physical switches:

Physical switches provide the underlay fabric for IP transport. A leaf-spine topology is preferred.


VRouter:

The vRouter is similar to Open vSwitch. It can do both L2 and L3 overlays. Both GRE and VXLAN are supported overlay methods. The vRouter talks to the controller using XMPP. The vRouter also provides NFV functionalities like firewall, load balancer etc.


Controller:

The controller is logically centralized and physically distributed. It has the following components:

  • Configuration node – This exposes the northbound REST API for the cloud management platform. The configuration node has a high-level data model and a low-level data model. The high-level data model represents constructs like virtual network, firewall, policy etc. The low-level data model represents device-specific constructs like VLANs, routing tables etc. An SDN compiler converts the high-level data model into the low-level data model (see the sketch after this list).
  • Control node – The control node talks to various devices like the vRouter and gateway nodes. XMPP is used to talk to the vRouter; BGP and Netconf are used to talk to gateway nodes.
  • Analytics node – The analytics node is used for monitoring and troubleshooting.
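
Here is a toy sketch of the compiler idea, translating a high-level construct into per-device low-level state; the data shapes are invented purely for illustration and do not reflect OpenContrail's actual schemas:

```python
# High-level model: what the tenant asked for.
high_level = {"virtual_network": {"name": "blue", "subnet": "10.1.1.0/24"}}

def compile_to_low_level(model, devices):
    """Expand a virtual-network definition into device-specific config entries."""
    vn = model["virtual_network"]
    return [{"device": dev,
             "vrf": f"vrf-{vn['name']}",
             "route": vn["subnet"]}
            for dev in devices]

# Low-level model: per-device state the control node would push out.
print(compile_to_low_level(high_level, devices=["vrouter-1", "gateway-1"]))
```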

The following picture shows the Controller details.



Gateways:

L2 gateways are used to talk to physical servers. L3 gateways are used to limit the broadcast domain. Gateways can be physical or virtual appliances.

The following picture shows how the OpenContrail solution integrates with OpenStack using the Contrail Neutron plugin.


Cisco ACI

Cisco ACI (Application Centric Infrastructure) uses a different model for Network Virtualization compared to the previous 2 solutions. Considering that data centers are driven by applications, the networking needs are modeled in ACI using a high-level abstraction. The abstraction model represents the different application tiers and the policies that govern the interactions between the application tiers (in one of my previous blogs, I have described how cloud applications are developed). Cisco APIC translates the high-level abstraction model into the low-level device context, which the user does not need to be aware of.

The following picture provides an overview of the ACI components and their interactions.


The following picture shows how an application is modeled.


An EPG (endpoint group) describes the endpoints of an application tier. Policies are defined between 2 EPGs. A collection of EPGs and the policies between them is called an application network profile. The application network profile is given as input to APIC, and APIC translates it into the low-level device context that gets programmed into the ACI fabric.
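
Here is a toy data model of an application network profile, just to make the constructs concrete; the names and fields are invented for illustration and are not the actual APIC schema:

```python
# Two EPGs and a policy (contract) governing traffic between them.
app_network_profile = {
    "name": "web-app",
    "epgs": ["web-tier", "db-tier"],
    "policies": [
        {"from": "web-tier", "to": "db-tier",
         "allow": [{"proto": "tcp", "port": 3306}]},   # hypothetical DB rule
    ],
}

# APIC would take a profile like this as input and render the low-level
# device context (VLANs/VXLANs, filters) onto the fabric.
```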

The following picture shows service chaining in ACI. Between 2 application tiers, multiple services can be specified.


Following are the different components of the ACI solution.

ACI fabric

The ACI fabric is composed of Cisco Nexus 9000 series switches in a leaf-spine topology. The ACI fabric gives advantages like application-level visibility, flow-based load balancing and advanced analytics. APIC talks to the ACI fabric. An application health score is derived using application-level visibility and advanced analytics. A VXLAN overlay is used in the ACI fabric.

AVS(Application virtual switch)

AVS resides in the hypervisor and is used for virtual switching. AVS is a different module from the Nexus 1000v. APIC talks to AVS to program it.

APIC(Application policy Infrastructure controller)

APIC has a northbound interface to the Cloud management solution and a southbound interface to the ACI fabric and AVS. APIC is responsible for translating application network profiles into device-specific contexts.
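
As a glimpse of the northbound interface, here is a minimal sketch of authenticating to the APIC REST API (the aaaLogin endpoint is part of the documented API; the controller address and credentials are placeholders):

```python
import requests

APIC = "https://apic.example.com"   # placeholder APIC address

# Authenticate against the APIC REST API; later requests reuse the session cookie.
session = requests.Session()
resp = session.post(f"{APIC}/api/aaaLogin.json",
                    json={"aaaUser": {"attributes":
                                      {"name": "admin", "pwd": "<password>"}}},
                    verify=False)   # lab sketch only; verify certificates in production
resp.raise_for_status()
print("Logged in, token cookie:", session.cookies.get("APIC-cookie"))
```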

