
L2 connectivity in OpenStack using OpenvSwitch mechanism driver

L2 connectivity is the most basic requirement in a network. All cloud platforms allow users to create subnets. Subnets are L2 segments to which servers attach their interfaces and start sending and receiving traffic. Servers on the same L2 segment can reach each other directly; they only need to resolve the destination MAC address using ARP. In the world of networking this service is provided by your access switch.

Some history on switching

Switches basically provide the following functionality:

  1. MAC learning: As switches receive packets on their interfaces, they map the interface id to the list of MAC addresses seen on that interface. This mapping is used later while forwarding.
  2. Forwarding: Switches do not look at a packet past the L2 headers. They perform a simple logic before sending a packet received on one interface out to other interfaces.
    1. If the destination is broadcast/multicast, forward on all ports except the ingress port.
    2. If the destination of a packet is mapped to an interface, send the packet out that interface alone.
    3. If the destination is none of the above, forward on all ports except the ingress port.
  3. Vlan tagging/untagging: Packets are sorted according to their vlan. The rules are simple.
    1. Packets appearing on trunk ports should be tagged unless they belong to vlan 1 (the native vlan). The tag identifies the packet’s vlan in this case.
    2. Packets appearing on access ports should not be tagged; tagged packets arriving on an access port are dropped. The port configuration identifies the packet’s vlan in this case.

    The sorted packets then pass through the forwarding phase, which determines which port(s) they will be sent out of. Packets going out trunk ports will be tagged and those going out access ports will not.

Switches of course do much more, but I believe the three points listed above should suffice for any beginner.
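MAC learning and forwarding (items 1 and 2 above) can be sketched in a few lines of Python. This is a toy model to make the logic concrete, not how a real switch is implemented; vlan handling is left out.

```python
# Toy model of a learning switch: MAC learning plus the three
# forwarding cases described above (vlan handling is omitted).
BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # MAC address -> port it was learned on

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded out of."""
        # 1. MAC learning: remember which port the source sits behind.
        self.mac_table[src_mac] = in_port
        # 2.2 Known unicast: send out the learned port alone.
        if dst_mac != BROADCAST and dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        # 2.1 / 2.3 Broadcast or unknown unicast: flood on all ports
        # except the ingress port.
        return [p for p in self.ports if p != in_port]
```

A frame to an unknown destination floods everywhere; once the destination host has sent anything, later frames to it go out a single port.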

The default factory setting on any switch configures all ports as trunk ports (allowing all vlans that exist on the switch) with native vlan ‘1’. Visualize it as every machine connected to such a switch belonging to the segment ‘vlan 1’. However this is not always the case. The network administrator may configure a bunch of interfaces as access ports for ‘vlan x’. This means they pass only traffic belonging to ‘vlan x’, and all machines connected to any of these interfaces belong to the segment ‘vlan x’.

So what constitutes a single L2 segment?
All interfaces of a switch that are configured to be access ports for a particular vlan constitute an L2 segment.

Can an L2 segment extend beyond a switch?
Yes. Switches can be cascaded by connecting their trunk ports with a cable.

L2 Connectivity in OpenStack

Before you continue reading this section, I would recommend understanding what ‘physical networks‘ are in OpenStack.
In any cloud system the hypervisors that run the virtual machines are distributed across many hosts. Instances that belong to the same L2 segment/subnet may be hosted on two different machines. It becomes the responsibility of the cloud system to make it appear as if the instances are on the same segment. In OpenStack, L2 connectivity is provided by ‘mechanism drivers’. How exactly they do this is specific to each mechanism driver. We shall discuss the ‘openvswitch’ mechanism driver. This driver can be selected in ‘/etc/neutron/plugins/ml2/ml2_conf.ini’.
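On IceHouse, the selection looks roughly like the fragment below. This is a sketch; the type driver and tenant network type shown here match the vlan setup discussed later, and yours may differ.

```ini
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
```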


It relies on OpenvSwitch for forwarding. The driver alone is responsible for:

  1. Creating the interfaces on the switch for the virtual machines to bind to
  2. Configuring the interfaces (access/trunk vlan association)
  3. Extending the L2 segment beyond one machine

In OpenStack it is the responsibility of the compute nodes to host virtual machines. The neutron-openvswitch-agent makes it appear as though the virtual machines on multiple compute hosts are on the same L2 segment. All virtual machines, belonging to all tenants and networks, attach to one bridge, which OpenStack calls the ‘integration bridge’. Normally it is named ‘br-int’. What happens further depends on which ‘type driver’ you use. In this post we will discuss the vlan type driver.

vlan type driver

A subnet in OpenStack is an entity of a network. Although a network in OpenStack may contain more than one subnet, it is customary to have one subnet per network. Every network in OpenStack has an associated ‘segmentation id’ and ‘physical network’. In my case my subnet belongs to a network which has been allocated physical network ‘Intnet1’ and segmentation id 100.

mysql -u neutron -pneutron_pass -h controller_node neutron -e 'select * from ml2_network_segments'
| id                                   | network_id                           | network_type | physical_network | segmentation_id |
| 1d6bbd72-083c-49cc-a610-ed02efbced57 | 690ee9ae-841a-4d71-bd2c-466f138f1f5a | vlan         | Intnet1          |             100 |
3 rows in set (0.00 sec)

When an instance attaches to a port on ‘br-int’, the port is configured by neutron-openvswitch-agent to be an access port for, say, ‘vlan x’. In the excerpt below you can see two ports, ‘qvod0dc2793-91’ and ‘qvofac4c814-89’, with ‘tag: 1’.

    Bridge br-int
        Port "qvod0dc2793-91"
            tag: 1
            Interface "qvod0dc2793-91"
        Port br-int
            Interface br-int
                type: internal
        Port "qvofac4c814-89"
            tag: 1
            Interface "qvofac4c814-89"
        Port "int-br-eth1"
            Interface "int-br-eth1"
    ovs_version: "2.0.1"

Every physical network is mapped to a bridge in ‘/etc/neutron/plugins/ml2/ml2_conf.ini’. In my case ‘Intnet1’ has been mapped to ‘br-eth1’ with a vlan range from 100 to 200.
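On my IceHouse setup the mapping lives in two places: the vlan range on the server side and the bridge mapping on the agent side. Roughly, using this post’s names (a sketch, not a complete config):

```ini
[ml2_type_vlan]
network_vlan_ranges = Intnet1:100:200

[ovs]
bridge_mappings = Intnet1:br-eth1
```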


The agent connects the physical network’s associated bridge and the integration bridge using a veth pair. You can see it happening if you have enabled debug in ‘/etc/neutron/neutron.conf’.

Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'link', 'add', 'int-br-eth1', 'type', 'veth', 'peer', 'name', 'phy-br-eth1']
Exit code: 0
Stdout: ''
Stderr: '' execute /usr/lib/python2.7/dist-packages/neutron/agent/linux/
2014-06-18 13:34:02.778 12736 DEBUG neutron.agent.linux.utils [req-39cae2d4-542d-4228-9881-8e848a0321f4 None] Running command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', '--', '--may-exist', 'add-port', 'br-int', 'int-br-eth1'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/
2014-06-18 13:34:02.979 12736 DEBUG neutron.agent.linux.utils [req-39cae2d4-542d-4228-9881-8e848a0321f4 None] 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', '--', '--may-exist', 'add-port', 'br-int', 'int-br-eth1']
Exit code: 0
Stderr: '' execute /usr/lib/python2.7/dist-packages/neutron/agent/linux/
2014-06-18 13:34:03.087 12736 DEBUG neutron.agent.linux.utils [req-39cae2d4-542d-4228-9881-8e848a0321f4 None] Running command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', '--', '--may-exist', 'add-port', 'br-eth1', 'phy-br-eth1'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/
2014-06-18 13:34:03.198 12736 DEBUG neutron.agent.linux.utils [req-39cae2d4-542d-4228-9881-8e848a0321f4 None] 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', '--', '--may-exist', 'add-port', 'br-eth1', 'phy-br-eth1']

The agent finally configures flow rules on br-eth1 such that all packets carrying a vlan id matching that of the instances’ port have it translated to the segmentation id of the instances’ network, which in my case means from ‘vlan 1’ to ‘vlan 100’.

root@pluto:~# ovs-ofctl dump-flows br-eth1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=519.377s, table=0, n_packets=3, n_bytes=134, idle_age=328, priority=4,in_port=1,dl_vlan=1 actions=mod_vlan_vid:100,NORMAL
 cookie=0x0, duration=520.997s, table=0, n_packets=8, n_bytes=648, idle_age=511, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=522.19s, table=0, n_packets=0, n_bytes=0, idle_age=522, priority=1 actions=NORMAL

Finally, all packets of the instance that leave the compute host will carry a vlan tag equal to the segmentation id of the network. When the packet reaches another compute host, all that needs to be done is add the reverse flow rule, converting the vlan tag from the network’s segmentation id back to the vlan id of the instances’ port on that host.

root@pluto:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=644.387s, table=0, n_packets=0, n_bytes=0, idle_age=644, priority=3,in_port=6,dl_vlan=100 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=646.165s, table=0, n_packets=8, n_bytes=648, idle_age=636, priority=2,in_port=6 actions=drop
 cookie=0x0, duration=647.464s, table=0, n_packets=3, n_bytes=126, idle_age=453, priority=1 actions=NORMAL
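Taken together, the two flow tables amount to a simple vlan translation on each side of the wire. Here is a Python sketch of the round trip; the local vlan ids (1 on both hosts here) are just what my dumps happened to show, since each host picks its own local id independently.

```python
# Round trip of a frame between two compute hosts, modeled on the
# flow dumps above. Returning None stands in for the drop rules.

def br_eth1_egress(local_vlan):
    # priority=4,in_port=1,dl_vlan=1 actions=mod_vlan_vid:100,NORMAL
    if local_vlan == 1:
        return 100
    return None  # priority=2,in_port=1 actions=drop

def br_int_ingress(wire_vlan):
    # priority=3,in_port=6,dl_vlan=100 actions=mod_vlan_vid:1,NORMAL
    if wire_vlan == 100:
        return 1
    return None  # priority=2,in_port=6 actions=drop

# Host A: local vlan 1 leaves on the wire tagged with segmentation id 100.
wire_vlan = br_eth1_egress(1)
# Host B: the wire tag 100 becomes that host's local vlan again.
local_vlan = br_int_ingress(wire_vlan)
```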

This setup ensures that instances belonging to the same L2 segment can reach each other, whether they are on the same host or distributed across multiple host machines.
