
Managing Openstack Internal/Data/External network in one interface

A common problem for people who want to try OpenStack without a full-blown hardware setup is that they have just one network interface. OpenStack distinguishes three networks:

Internal Network

This is where all your inter-process communication happens: your mysql-server, queue-server, etc. listen here, and your services exchange information among themselves over it. On a proper setup this network should be isolated and secured, and the interface connected to it should not be added to any bridge.

Data Network

This is where your instances talk to each other and to their network's l3 and dhcp services. This network, again, should be isolated and secured. There can be more than one data network. Data networks are mapped to physical networks, which are made available to Neutron through config file parameters. It is the physical network that you denote as 'provider:physical_network' in the 'neutron net-create' API call. You need not worry about choosing the physical_network for each network you create; Neutron will choose one for you if you do not.
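
For example, an administrator can pin a new network to a particular physical network like this (a sketch; the network name, physical network and vlan id are only illustrative):

# example values only
neutron net-create demo-net --provider:network_type vlan --provider:physical_network Physnet1 --provider:segmentation_id 101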

bridge_mappings

Present inside the 'ovs' section of 'ml2_conf.ini'. You tell Neutron which physical networks are available for use through this parameter, and also which bridge to use in order to reach each physical network. Thus 'bridge_mappings' is a comma-separated list of 'physical_network:bridge_name' pairs. You also have to make sure the bridges that you map to physical networks exist on the host.
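
A minimal sketch of how this might look, reusing the example names from the assumptions later in this post (the bridges must also exist on the host):

# ml2_conf.ini, example values only
[ovs]
bridge_mappings = Physnet1:br-eth1,External_network:br-ex

# create the mapped bridges if they do not exist yet
ovs-vsctl add-br br-eth1
ovs-vsctl add-br br-ex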

flat_networks

Present under the 'ml2_type_flat' section. Configured in case of flat networks. This is just a comma-separated list of physical networks that are flat (no vlan involved).
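
For example, if 'External_network' is a flat physical network (an illustrative name):

# ml2_conf.ini, example value only
[ml2_type_flat]
flat_networks = External_network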

network_vlan_ranges

Present under the 'ml2_type_vlan' section. Configured in case of vlan networks. This is similar to flat_networks, except that each physical network carries a start and an end vlan, appended to its name with ':' as the separator.
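
For example, to let Neutron allocate vlans 100 through 200 on 'Physnet1' (illustrative values):

# ml2_conf.ini, example values only
[ml2_type_vlan]
network_vlan_ranges = Physnet1:100:200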

local_ip

Present under the 'ovs' section. In case you are using GRE mode, this parameter tells Neutron which IP address to bind to and run GRE on. This in turn determines which interface and network are used as the data network, so it is a good idea to use an interface other than the one used for the internal network.
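
For example (the address is only a placeholder; use the host's IP on the data network):

# ml2_conf.ini, example values only
[ovs]
local_ip = 10.0.0.11
enable_tunneling = True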

Finally, unless you are using GRE alone, you have to add one of the host's network interfaces to every bridge specified, so that each physical network is bridged to its corresponding data network. Using a little trick (shown below) you can even map more than one physical network onto the same interface.
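
In the straightforward case where each physical network has a NIC of its own, that is just (interface names are examples):

# example interfaces only
ovs-vsctl add-port br-eth1 eth1
ovs-vsctl add-port br-ex eth2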

External Network

This network is used for two purposes.

  1. To expose the services (nova-api, glance-api, etc.) to consumers outside of OpenStack.
  2. To allow your instances to be accessible from outside of OpenStack, through floating IPs.

It is a good idea to use two separate external networks for the above two purposes. That way, on the service-facing network, you can block all ports other than those on which your exposed services are listening.

In Neutron, an external network is one on which 'router:external' is set to true. Only then can you create floating IPs on it. In all other respects, the rules that apply to physical networks also apply here. Normally you would want to choose a flat physical network for creating the external network; otherwise you would have to ask your network administrator to set up a vlan on the switch port connecting to the machine running your l3-agent, and things start to get ugly.
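
A sketch of creating such a network, a subnet on it and a floating IP (all names and addresses are illustrative):

# example names and addresses only
neutron net-create ext-net --router:external=True --provider:network_type flat --provider:physical_network External_network
neutron subnet-create ext-net --name ext-subnet --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24
neutron floatingip-create ext-net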

The host interface connected to the external network should not have any filtering of its own. Let security groups do that job.

Full blown OpenStack setup

Using the same Interface for all Networks

Finally we arrive at the purpose of this blog. The sections above give you plenty of reasons why you should not do this, but while you are experimenting, all is fair.

Assumptions:

  1. eth0 is the only available port
  2. bridge_mappings=Physnet1:br-eth1,External_network:br-ex
  3. network_vlan_ranges=Physnet1:100:200
  4. flat_networks=External_network
ovs-vsctl add-br br-eth0
ovs-vsctl add-port br-eth0 eth0
ifconfig br-eth0 <ip address of eth0> up
ip link set br-eth0 promisc on
ip link add proxy-br-eth1 type veth peer name eth1-br-proxy
ip link add proxy-br-ex type veth peer name ex-br-proxy
ovs-vsctl add-br br-eth1
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-eth1 eth1-br-proxy
ovs-vsctl add-port br-ex ex-br-proxy
ovs-vsctl add-port br-eth0 proxy-br-eth1
ovs-vsctl add-port br-eth0 proxy-br-ex
ip link set eth1-br-proxy up promisc on
ip link set ex-br-proxy up promisc on
ip link set proxy-br-eth1 up promisc on
ip link set proxy-br-ex up promisc on
  1. What we have done is add a new bridge br-eth0 and add eth0 to it.
  2. Assign eth0's IP address to br-eth0 and set the interface to promiscuous mode.
  3. Then we create two veth pairs. In case you are not aware, they are like virtual cables.
  4. We connect br-eth1 and br-ex to br-eth0 using the veth pairs.
  5. Then we enable promiscuous mode on, and bring up, all the interfaces we use.
Single machine setup with 1 interface
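
Once the above is in place you can sanity-check the layout (the exact output will vary per host):

ovs-vsctl show
ip addr show br-eth0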

Running Controller and Network on same host

Sometimes it is desirable to have the controller and network node running on the same machine when the machines have only two network interfaces each. The compute node requires only two interfaces anyway, as shown in the picture below. On the combined controller/network node, however, we can merge the internal and external networks by adding eth0 to br-ex and assigning br-ex the IP address of eth0.

ovs-vsctl add-port br-ex eth0
ifconfig br-ex <ip address of eth0> up
ip link set eth0 up promisc on
2 Machines with 2 interfaces each

If both your servers have only a single NIC, you may follow the setup below.
On the network/controller node

#add all bridges
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-br br-eth1
ovs-vsctl add-br br-proxy
#Create Veth pairs
ip link add proxy-br-eth1 type veth peer name eth1-br-proxy
ip link add proxy-br-ex type veth peer name ex-br-proxy
#Attach bridges using veth pair
ovs-vsctl add-port br-eth1 eth1-br-proxy
ovs-vsctl add-port br-ex ex-br-proxy
ovs-vsctl add-port br-proxy proxy-br-eth1
ovs-vsctl add-port br-proxy proxy-br-ex
#Add eth0 to br-proxy and move eth0's ip address to the bridge
ovs-vsctl add-port br-proxy eth0
ifconfig br-proxy <ip address of eth0> up
#Bring up the interfaces 
ip link set eth1-br-proxy up promisc on
ip link set ex-br-proxy up promisc on
ip link set proxy-br-eth1 up promisc on
ip link set proxy-br-ex up promisc on

On the Compute node

ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth0
#Assign eth0's ip address to br-eth1
ifconfig br-eth1 <ip address of eth0> up
#Bring up the interfaces
ip link set eth0 up promisc on

The pictorial representation would be something like the one below.

Dual machine setup with single NIC each

120 thoughts on “Managing Openstack Internal/Data/External network in one interface”

  1. Hello akhilesh …awesome post. I did the single node configs as you mentioned above but my host is not being able to ssh the vm node on which i installed everything. I am using ubuntu 14.04 server vm which does not allow copy-pasting options. Please help me reach my vm through ssh so i can finish my assignment.
    Thanks 🙂

  2. Hi Akilesh,

    Please provide the network configuration file for the single network interface and all in one setup.

  3. help

    2 node with single interface (controller+network) and (compute1)

    ifconfig in controller node

    root@controller:/var/log/neutron# ifconfig
    br-eth1 Link encap:Ethernet HWaddr ce:1d:1f:36:cf:49
    inet6 addr: fe80::5404:2aff:feab:719a/64 Scope:Link
    UP BROADCAST RUNNING MTU:1500 Metric:1
    RX packets:10396 errors:0 dropped:6865 overruns:0 frame:0
    TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:864343 (864.3 KB) TX bytes:648 (648.0 B)

    br-int Link encap:Ethernet HWaddr ce:25:ec:90:b5:46
    inet6 addr: fe80::c448:d3ff:fea2:d341/64 Scope:Link
    UP BROADCAST RUNNING MTU:1500 Metric:1
    RX packets:261535 errors:0 dropped:175829 overruns:0 frame:0
    TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:21299578 (21.2 MB) TX bytes:732 (732.0 B)

    br-proxy Link encap:Ethernet HWaddr d8:cb:8a:07:4e:13
    inet addr:10.1.2.35 Bcast:10.1.3.255 Mask:255.255.254.0
    inet6 addr: fe80::8fb:23ff:fe0e:76f7/64 Scope:Link
    UP BROADCAST RUNNING MTU:1500 Metric:1
    RX packets:291734 errors:0 dropped:182667 overruns:0 frame:0
    TX packets:18896 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:26705695 (26.7 MB) TX bytes:4205321 (4.2 MB)

    br-tun Link encap:Ethernet HWaddr 92:8d:75:d2:7e:40
    inet6 addr: fe80::c07e:69ff:fe67:5a56/64 Scope:Link
    UP BROADCAST RUNNING MTU:1500 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:0 (0.0 B) TX bytes:648 (648.0 B)

    eth0 Link encap:Ethernet HWaddr d8:cb:8a:07:4e:13
    inet6 addr: fe80::dacb:8aff:fe07:4e13/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:302974 errors:0 dropped:0 overruns:0 frame:0
    TX packets:19223 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:27789964 (27.7 MB) TX bytes:4229471 (4.2 MB)

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:65536 Metric:1
    RX packets:1234468 errors:0 dropped:0 overruns:0 frame:0
    TX packets:1234468 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:215227086 (215.2 MB) TX bytes:215227086 (215.2 MB)

    root@controller:/var/log/neutron# ovs-vsctl show
    a620d4f6-a743-4e2b-8170-03a4c4d9bea4
    Bridge “br-eth1”
    Port “phy-br-eth1”
    Interface “phy-br-eth1″
    type: patch
    options: {peer=”int-br-eth1”}
    Port “eth1-br-proxy”
    Interface “eth1-br-proxy”
    Port “br-eth1”
    Interface “br-eth1”
    type: internal
    Bridge br-int
    fail_mode: secure
    Port br-int
    Interface br-int
    type: internal
    Port “int-br-eth1”
    Interface “int-br-eth1″
    type: patch
    options: {peer=”phy-br-eth1”}
    Port int-br-ex
    Interface int-br-ex
    type: patch
    options: {peer=phy-br-ex}
    Port int-br-proxy
    Interface int-br-proxy
    type: patch
    options: {peer=phy-br-proxy}
    Bridge br-proxy
    Port phy-br-proxy
    Interface phy-br-proxy
    type: patch
    options: {peer=int-br-proxy}
    Port “proxy-br-eth1”
    Interface “proxy-br-eth1”
    Port “eth0”
    Interface “eth0”
    Port br-proxy
    Interface br-proxy
    type: internal
    Port proxy-br-ex
    Interface proxy-br-ex
    Bridge br-tun
    fail_mode: secure
    Port “gre-0a010123”
    Interface “gre-0a010123″
    type: gre
    options: {df_default=”true”, in_key=flow, local_ip=”10.1.2.35″,
    Port br-tun
    Interface br-tun
    type: internal
    Port “gre-0a010224”
    Interface “gre-0a010224″
    type: gre
    options: {df_default=”true”, in_key=flow, local_ip=”10.1.2.35″,
    Port “gre-0a010023”
    Interface “gre-0a010023″
    type: gre
    options: {df_default=”true”, in_key=flow, local_ip=”10.1.2.35″,
    Port patch-int
    Interface patch-int
    type: patch
    options: {peer=patch-tun}
    Port “gre-c0a86401”
    Interface “gre-c0a86401″
    type: gre
    options: {df_default=”true”, in_key=flow, local_ip=”10.1.2.35″,
    ovs_version: “2.0.2”

    computer node ifconfig

    root@compute:/var/log/neutron# ifconfig
    br-eth1 Link encap:Ethernet HWaddr d8:cb:8a:07:50:5a
    inet addr:10.1.2.36 Bcast:10.1.3.255 Mask:255.255.254.0
    inet6 addr: fe80::702b:37ff:fefd:edee/64 Scope:Link
    UP BROADCAST RUNNING MTU:1500 Metric:1
    RX packets:292606 errors:0 dropped:0 overruns:0 frame:0
    TX packets:18517 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:26334671 (26.3 MB) TX bytes:4302934 (4.3 MB)

    br-int Link encap:Ethernet HWaddr 9e:8a:f3:08:b2:45
    inet6 addr: fe80::7cbe:44ff:fed9:ce1/64 Scope:Link
    UP BROADCAST RUNNING MTU:1500 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:0 (0.0 B) TX bytes:648 (648.0 B)

    eth0 Link encap:Ethernet HWaddr d8:cb:8a:07:50:5a
    inet6 addr: fe80::dacb:8aff:fe07:505a/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:304112 errors:0 dropped:0 overruns:0 frame:0
    TX packets:18556 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:27431658 (27.4 MB) TX bytes:4308232 (4.3 MB)

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:65536 Metric:1
    RX packets:137 errors:0 dropped:0 overruns:0 frame:0
    TX packets:137 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:10081 (10.0 KB) TX bytes:10081 (10.0 KB)

    root@compute:/var/log/neutron# ovs-vsctl show
    131cc6cb-289f-4892-9dd3-6fe16e0881ac
    Bridge br-int
    fail_mode: secure
    Port br-int
    Interface br-int
    type: internal
    Bridge “br-eth1”
    Port “br-eth1”
    Interface “br-eth1”
    type: internal
    Port “eth0”
    Interface “eth0”
    ovs_version: “2.0.2”
    root@compute:/var/log/neutron#

    cannot create network

    root@controller:/etc/neutron/plugins/ml2# neutron agent-list
    +————————————–+——————–+————+——-+—————-+—————————+
    | id | agent_type | host | alive | admin_state_up | binary |
    +————————————–+——————–+————+——-+—————-+—————————+
    | 7f58f65f-21b6-413e-b617-b99fa962af19 | Open vSwitch agent | controller | :-) | True | neutron-openvswitch-agent |
    | b68bb23e-ab41-4df1-bbef-181c28e52e24 | Metadata agent | controller | :-) | True | neutron-metadata-agent |
    | bfcda382-0dc9-4794-a596-6e243ab0145e | DHCP agent | controller | :-) | True | neutron-dhcp-agent |
    | ed663eb7-6e22-49ff-87a0-73bf0f815d36 | L3 agent | controller | :-) | True | neutron-l3-agent |
    +————————————–+——————–+————+——-+—————-+—————————+
    root@controller:/etc/neutron/plugins/ml2# keystone catalog
    Service: image
    +————-+———————————-+
    | Property | Value |
    +————-+———————————-+
    | adminURL | http://controller:9292 |
    | id | 168f733030cc4027a743f0e19840b49c |
    | internalURL | http://controller:9292 |
    | publicURL | http://controller:9292 |
    | region | regionOne |
    +————-+———————————-+
    Service: compute
    +————-+————————————————————+
    | Property | Value |
    +————-+————————————————————+
    | adminURL | http://controller:8774/v2/a28fde0b7bec43a6a3f6159ada1dd1f1 |
    | id | 650edd5ef23842adb9859a819df18609 |
    | internalURL | http://controller:8774/v2/a28fde0b7bec43a6a3f6159ada1dd1f1 |
    | publicURL | http://controller:8774/v2/a28fde0b7bec43a6a3f6159ada1dd1f1 |
    | region | regionOne |
    +————-+————————————————————+
    Service: network
    +————-+———————————-+
    | Property | Value |
    +————-+———————————-+
    | adminURL | http://controller:9696 |
    | id | 5176b18efe8d4330bb49a81166142b19 |
    | internalURL | http://controller:9696 |
    | publicURL | http://controller:9696 |
    | region | regionOne |
    +————-+———————————-+
    Service: identity
    +————-+———————————-+
    | Property | Value |
    +————-+———————————-+
    | adminURL | http://controller:35357/v2.0 |
    | id | 85f6dcae15504313802f51d87ac7dc5a |
    | internalURL | http://controller:5000/v2.0 |
    | publicURL | http://controller:5000/v2.0 |
    | region | regionOne |
    +————-+———————————-+
    root@controller:/etc/neutron/plugins/ml2# nova network-create vmnet --fixed-range-v4=10.1.2./24 --bridge-interface=br-int --multi-host=T
    ERROR (BadRequest): 10.1.2./24 is not a valid ip network. (HTTP 400) (Request-ID: req-b65e90de-6556-40a3-ba9f-e4bf09d6e4e3)
    root@controller:/etc/neutron/plugins/ml2#

    ml2config.ini controller node

    [ml2]
    # (ListOpt) List of network type driver entrypoints to be loaded from
    # the neutron.ml2.type_drivers namespace.
    #
    type_drivers = flat,gre
    # Example: type_drivers = flat,vlan,gre,vxlan

    # (ListOpt) Ordered list of network_types to allocate as tenant
    # networks. The default value ‘local’ is useful for single-box testing
    # but provides no connectivity between hosts.
    #
    tenant_network_types = gre

    # Example: tenant_network_types = vlan,gre,vxlan

    # (ListOpt) Ordered list of networking mechanism driver entrypoints
    # to be loaded from the neutron.ml2.mechanism_drivers namespace.
    mechanism_drivers = openvswitch
    # Example: mechanism_drivers = openvswitch,mlnx
    # Example: mechanism_drivers = arista
    # Example: mechanism_drivers = cisco,logger
    # Example: mechanism_drivers = openvswitch,brocade
    # Example: mechanism_drivers = linuxbridge,brocade

    [ml2_type_flat]
    # (ListOpt) List of physical_network names with which flat networks
    # can be created. Use * to allow flat networks with arbitrary
    # physical_network names.
    #
    flat_networks = External
    # Example:flat_networks = physnet1,physnet2
    # Example:flat_networks = *
    #flat_networks = external
    [ml2_type_vlan]
    # (ListOpt) List of [::] tuples
    # specifying physical_network names usable for VLAN provider and
    # tenant networks, as well as ranges of VLAN tags on each
    # physical_network available for allocation as tenant networks.
    #
    # network_vlan_ranges =
    # Example: network_vlan_ranges = physnet1:1000:2999,physnet2

    [ml2_type_gre]
    # (ListOpt) Comma-separated list of : tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
    tunnel_id_ranges = 1:1000

    [ml2_type_vxlan]
    # (ListOpt) Comma-separated list of : tuples enumerating
    # ranges of VXLAN VNI IDs that are available for tenant network allocation.
    #
    # vni_ranges =

    # (StrOpt) Multicast group for the VXLAN interface. When configured, will
    # enable sending all broadcast traffic to this multicast group. When left
    # unconfigured, will disable multicast VXLAN mode.
    #
    # vxlan_group =
    # Example: vxlan_group = 239.1.1.1

    [securitygroup]
    # Controls if neutron security group is enabled or not.
    # It should be false when you use nova security group.
    enable_security_group = True
    enable_ipset = True
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

    [ovs]
    local_ip = 10.1.2.35
    enable_tunneling = True
    tunnel_type = gre
    bridge_mappings = External:br-ex,Inetnet1:br-eth1

    [agent]
    #tunnel_types = gre

    root@compute:/etc/neutron/plugins/ml2# cat ml2_conf.ini
    [ml2]
    # (ListOpt) List of network type driver entrypoints to be loaded from
    # the neutron.ml2.type_drivers namespace.
    #
    type_drivers = flat,gre
    # Example: type_drivers = flat,vlan,gre,vxlan

    # (ListOpt) Ordered list of network_types to allocate as tenant
    # networks. The default value ‘local’ is useful for single-box testing
    # but provides no connectivity between hosts.
    #
    tenant_network_types = gre
    # Example: tenant_network_types = vlan,gre,vxlan

    # (ListOpt) Ordered list of networking mechanism driver entrypoints
    # to be loaded from the neutron.ml2.mechanism_drivers namespace.
    mechanism_drivers = openvswitch
    # Example: mechanism_drivers = openvswitch,mlnx
    # Example: mechanism_drivers = arista
    # Example: mechanism_drivers = cisco,logger
    # Example: mechanism_drivers = openvswitch,brocade
    # Example: mechanism_drivers = linuxbridge,brocade

    # (ListOpt) Ordered list of extension driver entrypoints
    # to be loaded from the neutron.ml2.extension_drivers namespace.
    # extension_drivers =
    # Example: extension_drivers = anewextensiondriver

    [ml2_type_flat]
    # (ListOpt) List of physical_network names with which flat networks
    # can be created. Use * to allow flat networks with arbitrary
    # physical_network names.
    #
    flat_networks = External
    # Example:flat_networks = physnet1,physnet2
    # Example:flat_networks = *

    [ml2_type_vlan]
    # (ListOpt) List of [::] tuples
    # specifying physical_network names usable for VLAN provider and
    # tenant networks, as well as ranges of VLAN tags on each
    # physical_network available for allocation as tenant networks.
    #
    # network_vlan_ranges =
    # Example: network_vlan_ranges = physnet1:1000:2999,physnet2

    [ml2_type_gre]
    # (ListOpt) Comma-separated list of : tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
    tunnel_id_ranges = 1:1000

    [ml2_type_vxlan]
    # (ListOpt) Comma-separated list of : tuples enumerating
    # ranges of VXLAN VNI IDs that are available for tenant network allocation.
    #
    # vni_ranges =

    # (StrOpt) Multicast group for the VXLAN interface. When configured, will
    # enable sending all broadcast traffic to this multicast group. When left
    # unconfigured, will disable multicast VXLAN mode.
    #
    # vxlan_group =
    # Example: vxlan_group = 239.1.1.1

    [securitygroup]
    # Controls if neutron security group is enabled or not.
    # It should be false when you use nova security group.
    # enable_security_group = True
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    enable_security_group = True
    # Use ipset to speed-up the iptables security groups. Enabling ipset support
    # requires that ipset is installed on L2 agent node.
    # enable_ipset = True
    [ovs]
    local_ip = 10.1.2.36
    unnel_type = gre
    nable_tunneling = True
    bridge_mappings = External:br-ex,Inetnet1:br-eth1
    integration_bridge = br-eth1
    tunnel_bridge = br-eth1

    how to create vm network

  4. Hi,
    I am trying to get a multinode setup of openstack wherein I have two nodes (Controller/compute/network, compute). I am unable to do ssh to the VM running on remote compute node.

    I am using Ubuntu 14.04 LTS with openstack stable/kilo.

    My local.conf for controller/network/compute node is as under :
    [[local|localrc]]
    disable_service n-net
    enable_service n-cauth
    enable_service q-svc
    enable_service q-agt
    enable_service q-dhcp
    enable_service q-l3
    enable_service q-lbaas
    enable_service q-meta
    enable_service quantum
    enable_service q-fwaas
    enable_service heat
    enable_service h-api
    enable_service h-api-cfn
    enable_service h-api-cw
    enable_service h-eng
    #enable_service swift
    #enable_service s-proxy s-object s-container s-account
    enable_service ceilometer-acompute,ceilometer-acentral,ceilometer-collector
    enable_service ceilometer-alarm-singleton,ceilometer-alarm-notifier
    enable_service ceilometer-api
    enable_service ceilometer-anotification
    enable_service tempest
    enable_service ceilometer-cluster-collector
    enable_service rally
    enable_service tr-
    enable_service zaqar- ir-

    enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas stable/liberty

    #RECLONE=yes
    MULTI_HOST=1
    #OFFLINE=True
    GIT_BASE=https://git.openstack.org
    HOST_IP=10.138.97.244

    Q_PLUGIN=ml2
    ENABLE_TENANT_VLANS=True
    # GRE tunnel configuration
    Q_PLUGIN=ml2
    ENABLE_TENANT_TUNNELS=True
    Q_ML2_TENANT_NETWORK_TYPE=gre

    DATABASE_PASSWORD=openstack
    RABBIT_PASSWORD=openstack
    SERVICE_TOKEN=openstack
    SERVICE_PASSWORD=openstack
    ADMIN_PASSWORD=openstack
    LOG=True
    DEBUG=False
    LOGFILE=/opt/stack/logs/stack.sh.log
    LOGDIR=/opt/stack/logs
    LOG_DIR=/opt/stack/logs
    SCREEN_LOGDIR=/opt/stack/logs/screen

    My compute local.conf is
    [[local|localrc]]
    #IP Details
    HOST_IP=10.138.97.143 #Add the Compute node Management IP Address
    SERVICE_HOST=10.138.97.244 #Add the cotnrol Node Management IP Address here
    #Instance Details
    MULTI_HOST=1
    #config Details
    #RECLONE=yes #Make thgis “no” after stacking successfully once
    OFFLINE=True #Uncomment this line after stacking successfuly first time.
    VERBOSE=True
    LOG_COLOR=True
    LOGFILE=/opt/stack/logs/stack.sh.log
    SCREEN_LOGDIR=/opt/stack/logs
    #Passwords
    ADMIN_PASSWORD=openstack
    MYSQL_PASSWORD=openstack
    RABBIT_PASSWORD=openstack
    SERVICE_PASSWORD=openstack
    SERVICE_TOKEN=46a75e61f0ab4562bf8bb5afd8bb56a0
    #Services
    ENABLED_SERVICES=n-cpu,rabbit,neutron,agent,ceilometer-acompute,q-agt,q-
    #ML2 Details
    Q_PLUGIN=ml2
    Q_ML2_TENANT_NETWORK_TYPE=gre

    #Details of the Control node for various services
    [[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
    Q_HOST=$SERVICE_HOST
    MYSQL_HOST=$SERVICE_HOST
    RABBIT_HOST=$SERVICE_HOST
    GLANCE_HOSTPORT=$SERVICE_HOST:9292
    KEYSTONE_AUTH_HOST=$SERVICE_HOST
    KEYSTONE_SERVICE_HOST=$SERVICE_HOST
    NOVA_VNC_ENABLED=True
    NOVNCPROXY_URL=”http://10.138.97.244:6080/vnc_auto.html” #Add Controller Node IP address
    VNCSERVER_LISTEN=$HOST_IP
    VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN

    Any help would be highly appreciated.

  5. Would you be willing to update your blog post (especially case 1, the all in one node with a single interface) to include the complete set of commands or /etc/network/interfaces to prepare the interfaces and bridges, followed by the “neutron” commands to create the openstack networks and subnets?

  6. Hi, in the third scenario (2 hosts, single nic) you do not add eth0 as port to br-proxy. Could you explain why? thanks

  7. Hi,

    I have created my network as suggested by you. But there is some problem with query router. I can’t ping my gateway(extnera) from query router(name space). It will be helpful if you can put some light on it.

  8. Thanx for this great post
    If you can give me the right configuration of Local.conf file to install devstack Using the same Interface for all Networks.

    1. There is a comment in the same post describing the procedure. I will include it in the post itself later. But for now you can look at ‘https://fosskb.wordpress.com/2014/06/10/managing-openstack-internaldataexternal-network-in-one-interface/comment-page-1/#comment-493’

  9. I installed openstack through packstack.
    but i have eth0.
    so if i follow your instructions, can i complete the setup?

    1. Go ahead. It should work. You have to do it directly on the machine and not through ssh, as you might lose connection at one stage.

      1. i m doing this procedure on a vm installed in my virtualbox.i kept bridged connection so that i can access it with ssh.as you said rightly,i did lose the connection but i brought it back up using :
        ifconfig eth0 0
        dhclient br-eth0

        now everything is up and running but i cannot access my vms via floating ips…virtual router’s one interface is sn1 network and another is en1 but for en1 i kept my external network’s gateway as default gateway.
        You have performed these steps on a physical machine but i am running it on a vm so let me know if i need to put some extra configs.

  10. Hi Akilesh. Thanks for the great post. Based on the discussion below this post seems that OpenStack community really needs solutions other than 3 nodes with 3 NICs (as described on doc.openstack.org installation instructions).

    I’m new to Openstack and I’m trying to create a “sandbox” setup made of 2 machines (1 Controller/Network machine, 1 Compute machine). I’m still tryng to figure out how to configure neutron so i have few questions here.

    1. The description for 2 Machines/ 2 NICs states that I should create br-ex and add eth0 to it. It doesn’t say anything about br-eth1 although the figure shows that I should create one. Is this correct?

    2. In my solution the management/external network is a LAB network which already has a DHCP server (I setup data network as a closed network between Controller/Network node and compute node)
    Is it possible to configure the “Floating” network in a way that VMs will get DHCP from the LAB network via bridge and not from neutron?

    I’ll be very grateful for help
    Thank You
    Tomasz

    1. This post is for first time users, trying to get a taste of openstack using whatever little hardware they have. The official documentation include information needed for building a full blown infrastructure.
      1. br-eth1 is also necessary. In fact all bridges that you have configured in ‘ml2_conf.ini’ should be created.
      2. floating ip should be created and used by OpenStack users as and when required and only Neutron can allocate the same. Neutron will allocate floatingip from the subnet created for the external network.
      In case you share the external network with other devices that are managed by a dhcp server, then you can use the allocation pool option when creating the subnet on the external network. That way you can use a subset of the ip addresses on the external network for openstack, while the remaining ones can be used by other devices on your network.
      Be sure to read through OpenStack documentation in case you later want a production network. They have lot of information regarding it.
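      For instance, a subnet handing only part of an existing LAB range to Neutron might look like this (a sketch; all addresses are illustrative):
      neutron subnet-create ext-net --name ext-subnet --disable-dhcp --gateway 192.168.1.1 --allocation-pool start=192.168.1.200,end=192.168.1.220 192.168.1.0/24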

    1. Here is my configuration done on Single node and single NIC.
      root@vsa-icehouse01:~# ovs-vsctl add-br br-eth0
      root@vsa-icehouse01:~# ovs-vsctl add-port br-eth0 em1
      root@vsa-icehouse01:~# ifconfig br-eth0 10.99.14.14 up
      root@vsa-icehouse01:~# ip link set br-eth0 promisc on
      root@vsa-icehouse01:~# ip link add proxy-br-eth1 type veth peer name eth1-br-proxy
      root@vsa-icehouse01:~# ip link add proxy-br-ex type veth peer name ex-br-proxy
      root@vsa-icehouse01:~# ovs-vsctl add-br br-eth1
      root@vsa-icehouse01:~# ovs-vsctl add-br br-ex
      root@vsa-icehouse01:~# ovs-vsctl add-port br-eth1 eth1-br-proxy
      root@vsa-icehouse01:~# vi xx
      root@vsa-icehouse01:~# ovs-vsctl add-port br-eth1 eth1-br-proxy
      ovs-vsctl: cannot create a port named eth1-br-proxy because a port named eth1-br-proxy already exists on bridge br-eth1
      root@vsa-icehouse01:~# bash -x ./xx
      + ovs-vsctl add-port br-eth1 eth1-br-proxy
      ovs-vsctl: cannot create a port named eth1-br-proxy because a port named eth1-br-proxy already exists on bridge br-eth1
      + ovs-vsctl add-port br-ex ex-br-proxy
      + ovs-vsctl add-port br-eth0 proxy-br-eth1
      + ovs-vsctl add-port br-eth0 proxy-br-ex
      + ip link set eth1-br-proxy up promisc on
      + ip link set ex-br-proxy up promisc on
      + ip link set proxy-br-eth1 up promisc on
      + ip link set proxy-br-ex up promisc on
      root@vsa-icehouse01:~# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.ORIG
      root@vsa-icehouse01:~# vi /etc/neutron/metadata_agent.ini
      root@vsa-icehouse01:~# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.ORIG
      root@vsa-icehouse01:~# vi /etc/neutron/dhcp_agent.ini
      root@vsa-icehouse01:~# vi /etc/neutron/dhcp_agent.ini
      root@vsa-icehouse01:~# cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.ORIG
      root@vsa-icehouse01:~# vi /etc/neutron/l3_agent.ini
      root@vsa-icehouse01:~# service neutron-server restart; service neutron-plugin-openvswitch-agent restart;service neutron-metadata-agent restart; service neutron-dhcp-agent restart; service neutron-l3-agent restart
      neutron-server stop/waiting
      neutron-server start/running, process 18947
      neutron-plugin-openvswitch-agent stop/waiting
      neutron-plugin-openvswitch-agent start/running, process 18963
      neutron-metadata-agent stop/waiting
      neutron-metadata-agent start/running, process 18976
      neutron-dhcp-agent stop/waiting
      neutron-dhcp-agent start/running, process 18999
      stop: Unknown instance:
      neutron-l3-agent start/running, process 19021
      root@vsa-icehouse01:~# neutron agent-list
      +————————————–+——————–+—————-+——-+—————-+
      | id | agent_type | host | alive | admin_state_up |
      +————————————–+——————–+—————-+——-+—————-+
      | 23d4174f-cbe1-4042-be04-bd5a1fdeb7aa | Open vSwitch agent | vsa-icehouse01 | :-) | True |
      | 3773d2a1-a0bb-4edd-af21-0b38e242280f | L3 agent | vsa-icehouse01 | :-) | True |
      | 49ced6a0-0049-454b-a0c6-e3eb65f68173 | DHCP agent | vsa-icehouse01 | :-) | True |
      | 729a9360-f3aa-4916-a7cc-cb252679f019 | Metadata agent | vsa-icehouse01 | :-) | True |
      +————————————–+——————–+—————-+——-+—————-+
      root@vsa-icehouse01:~#

      1. after updating /etc/network/interfaces file as below and rebooted , it looks working.
        got info from http://zcentric.com/2014/07/07/openvswitch-kvm-libvirt-ubuntu-vlans-the-right-way/

        = = = = = = = =
        root@vsa-icehouse01:~# cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto em1
        iface em1 inet manual
        ovs_bridge br-eth0
        ovs_type OVSPort
        adress 0.0.0.0
        # address 10.99.14.14
        # netmask 255.255.255.0
        # network 10.99.14.0
        # broadcast 10.99.14.255
        # gateway 10.99.14.1
        # # dns-* options are implemented by the resolvconf package, if installed
        # dns-nameservers 15.226.142.15
        # dns-search tplab.tippingpoint.com
        # The bridgr interface
        auto br-eth0
        iface br-eth0 inet static
        address 10.99.14.14
        netmask 255.255.255.0
        network 10.99.14.0
        broadcast 10.99.14.255
        gateway 10.99.14.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 15.226.142.15
        dns-search tplab.tippingpoint.com
        ovs_type OVSBridge
        ovs_ports br-eth0
        bridgr_porte em1
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0

        root@vsa-icehouse01:~#

        = = = = = = = = = = =

      2. Thank you Vinay for sharing. As I had mentioned in the post, your host will not be able to receive any data on an interface that was added to an ovs bridge. That is why assigning the ip address to the bridge is necessary. You would not be able to do it (you will lose ssh access as soon as you add the interface to the bridge) unless you had direct keyboard access to the host. Setting the configuration in the interfaces file is the permanent solution.

      3. the instructions should work on any distro. ‘eth1-br-proxy’ is a veth device. You have to create it before using it. The relevant command is ‘ip link add proxy-br-eth1 type veth peer name eth1-br-proxy’. May be you missed that step.

      4. Hi, I’m having a similar issue. My eth0 is 192.168.1.219. I have followed the tutorial by creating all the bridges and interfaces. What should be the subnet value of the network which runs the VM. I have chosen a value of 192.168.1.0/24 but my host machine is not able to ping the VM.

      5. The host can not ping the vm directly. Do you mean host is not able to reach vm’s floatingip?

    1. While launching the instance you can select two networks and two nics will be added by nova, one each in a network. If you want to add interfaces at run time check the ‘update-server’ command of the nova api reference.

  11. Hi, thanks for the detail post. I tried to setup the environment on a single node with 1 NIC.
    I can create the instances but the network still have some problems and I have some questions.
    1. I created the br-ex, br-eth1, br-int, but how are they been used? I mean who will use br-eth1/br-ex/br-int and how do they use it? I can only find some configurations like
    “./neutron/plugins/ml2/ml2_conf.ini:bridge_mappings=External:br-ex,Intnet1:br-eth1
    ./neutron/l3_agent.ini:external_network_bridge = br-ex

    2. my external network is through dhcp, how to handle this? It seems that the floating IPs are not assigned by the external dhcp server.

    Please help on this, thanks:)

    1. These bridges are used by neutron to ensure connectivity between your instances and various neutron services like the l3-agent, dhcp-agent etc. These details are explained in this post.

      floatingips are not allocated by your dhcp server in external network. The floatingips are allocated on the external network and assigned to instances as and when you create them using floatingip-create command. This is explained in this post , towards the end of the post.
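      For reference, a floating IP is typically created and bound along these lines (the ids are placeholders):
      neutron floatingip-create ext-net
      neutron floatingip-associate <floatingip-id> <port-id>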

      1. hi, where are the detailed explanations? 🙂 the links are missed?

        I did a grep in /etc and found something below.
        In l3_agent.ini, external_network_bridge = br-ex.
        In dhcp_agent.ini, ovs_integration_bridge=br-int(I guess this is the default value?)
        But I didn’t find anything about br-eth1 except in ml_conf.ini, under [ovs] there’s a bridge_mappings, which specifies br-eth1 as vlan. Is there anything else?

        For the floating IP, if they are not allocated by the dhcp server, then how can they be visited by others from external network, a little confused about this. I think each instance will have an internal address, and it is mapped with some external “global” ip address through IPtable, right?

        Thanks

      2. Sorry, I missed the links. I have edited my previous reply. Kindly read through both the links. I hope at the end you will understand how this works. If not, we shall discuss further.

      3. hi akilesh
        Thanks for the great posts, I think it solved some of my questions and raises some others:)
        In config file ml2_conf.ini, under [ml2_type_flat], flat_networks=External.
        Here the “External” is just a general name or it should match the ‘physical_network’ in TABLE ml2_network_segments?
        I tried to use Horizon to create the external network, it will set network_type to vlan and physical_network to Intnet1 by default. So do I have to use command line to create it manually with provider:physical_network=External and provider:network_type=flat?

        Thanks

      4. The networks (both flat_networks and vlan_networks) that you define in ‘ml2_conf.ini’ are called physical networks. When you create a network using ‘neutron net-create’ or using horizon, visualize it as that network being created on top of one of these physical networks. These are exactly what you find in the database. Each network in openstack is mapped to a physical network.

        For your second question, you are correct: you have to use the command line for those options.

      5. hi Johnson
        My question is whether the names(External, physnet1, physnet2, etc..) in ml2_conf.ini should be the same as the ones created with neutron net-create, what if a tenant create the same physical_network name as the provider physical network? (for example, I want to use flat for external networks and vlan for tenant networks), if their names are the same, it will cause some trouble, right?

      6. That configuration is not at all possible. Each physical network should be unique, whether vlan or flat.

    2. hi Johnson
      In my opinion, administrator will create the provider network while the tenant will create the tenant network. As a tenant, he may not aware what the provider network “physical_network” name is, maybe they both choose the same name, or there’s something I misunderstand here?

      1. You need not specify the provider:physical_network for each network you create. Neutron will do it for you. The administrator alone will specify the parameter explicitly to create some shared networks. Even if a user is trying to use something which has already been assigned to another user, neutron is smart enough to throw errors. You can try creating two networks on a single flat physical network and check for yourself. If however you do the same on a vlan network, both will be created but the networks will have different segmentation ids. If you explicitly specify a segmentation id that has already been used, neutron will again throw an error.

      2. Thanks akilesh, I missed the segmentation id.

        Now I have followed the steps in this post(at least I though I had), when starting a new instance, it will not be able to get the metadata(some address 169.254.169.254).
        The qdhcp and qrouter can ping each other and instance can get an ip address from qdhcp(from the log), but I can’t ping the instance in qdhcp or qrouter.(At that time the instance is stucking trying to get metadata).

        Any idea on what’s wrong of this? What may cause an instance not able to ping the router?

      3. I do not understand what you mean by ‘qdhcp and qrouter can ping each other’. Tell me
        1. Have you created a router and attached the instance’s subnet to the router?
        2. Can the instance ping the router?

      4. Sorry I didn’t express myself clearly.
        I’ve created the router and the instance can’t ping it.
        From the host machine with “ip netns list” I can see 2 netns qrouter-xxx and qdhcp-xxx
        With “ip netns exec qrouter-xxx ifconfig”, there are 3 interfaces, lo, qg-xxx(for externel network IP 10.239.67.6) and qr-xxx(internal network 10.0.1.1). With “ip netns exec qdhcp-xxx ifconfig” there are 2 interfaces, one is lo and the other is the internal dhcp address(10.0.1.101, I set the subnet allocation pool from 10.0.1.100 to 10.0.1.120)

        For “qdhcp and qrouter can ping each other” I mean if I enter the qdhcp namespace I can ping qrouter’s internal ip addr.(ip netns exec qdhcp-xxx ping 10.0.1.1 is OK and ip netns exec qrouter-xxx ping 10.0.1.101 is also OK). But neither can ping 10.0.1.106, which is the ip address of the instance. And in the console instance, it can’t ping 10.0.1.101 or 10.0.1.1.

        There must be something wrong but I don’t know where to check

      5. A little correction. The qrouter-xxx and qdhcp-xxx are namespaces. They are used to have different isolated network stacks on the same host machine. The qrouter-xxx namespace holds the interfaces of the router you create, while the qdhcp-xxx namespace holds the interface to which your dhcp server (dnsmasq) attaches. Further, a lot of people face this issue and I am unable to answer them all, just because the number of possible causes is too many, and probing each one and explaining it to the users is a never ending process. I’ll try my best on this one though.

      6. something more.
        The instance seems can get the internal ip at boot time(from the log). Then it will stuck a long time trying to get metadata and failed. Then I enter the instance, do a ifdown and ifup, it will not be able to get an IP anymore.

        Is it caused by some iptable rules for qbr which connects the tap of instances and the qvb?
        I checked the iptable rules of qbr, don’t find anything suspicious.

        -A neutron-filter-top -j neutron-openvswi-local
        -A neutron-openvswi-FORWARD -m physdev --physdev-out tap76dfb669-5e --physdev-is-bridged -j neutron-openvswi-sg-chain
        -A neutron-openvswi-FORWARD -m physdev --physdev-in tap76dfb669-5e --physdev-is-bridged -j neutron-openvswi-sg-chain
        -A neutron-openvswi-INPUT -m physdev --physdev-in tap76dfb669-5e --physdev-is-bridged -j neutron-openvswi-o76dfb669-5
        -A neutron-openvswi-i76dfb669-5 -m state --state INVALID -j DROP
        -A neutron-openvswi-i76dfb669-5 -m state --state RELATED,ESTABLISHED -j RETURN
        -A neutron-openvswi-i76dfb669-5 -s 10.0.1.101/32 -p udp -m udp --sport 67 --dport 68 -j RETURN
        -A neutron-openvswi-i76dfb669-5 -j neutron-openvswi-sg-fallback
        -A neutron-openvswi-o76dfb669-5 -p udp -m udp --sport 68 --dport 67 -j RETURN
        -A neutron-openvswi-o76dfb669-5 -j neutron-openvswi-s76dfb669-5
        -A neutron-openvswi-o76dfb669-5 -p udp -m udp --sport 67 --dport 68 -j DROP
        -A neutron-openvswi-o76dfb669-5 -m state --state INVALID -j DROP
        -A neutron-openvswi-o76dfb669-5 -m state --state RELATED,ESTABLISHED -j RETURN
        -A neutron-openvswi-o76dfb669-5 -j RETURN
        -A neutron-openvswi-o76dfb669-5 -j neutron-openvswi-sg-fallback
        -A neutron-openvswi-s76dfb669-5 -s 10.0.1.106/32 -m mac --mac-source FA:16:3E:88:AE:EA -j RETURN
        -A neutron-openvswi-s76dfb669-5 -j DROP
        -A neutron-openvswi-sg-chain -m physdev --physdev-out tap76dfb669-5e --physdev-is-bridged -j neutron-openvswi-i76dfb669-5
        -A neutron-openvswi-sg-chain -m physdev --physdev-in tap76dfb669-5e --physdev-is-bridged -j neutron-openvswi-o76dfb669-5
        -A neutron-openvswi-sg-chain -j ACCEPT
        -A neutron-openvswi-sg-fallback -j DROP
        -A nova-api-INPUT -d 10.239.67.77/32 -p tcp -m tcp --dport 8775 -j ACCEPT

      7. A few questions. Is this the iptables rule set of the host machine in the default namespace? What distro are you using? Which document did you use to install openstack? And please tell me you are not using nova-network.

      8. I think I found the root cause.
        The system had also installed docker, which will create a VNIC and a bunch of routing rules, I think it somehow affect the openstack. Also I have always set http_proxy and https_proxy, which will have trouble while getting metadata.
        After purge docker and unset http_proxy, I can get the internal IP now.

        Thanks!

      9. Great find. As you see even I am still learning from user’s comments . Keep your findings documented somewhere.

      10. Also I am planning for a debugging guide shortly with all possible root causes. Your experience will surely be documented there. Thanks for the info.

  12. I have one NIC card on Network and Compute node. I installed openstack Ubuntu14.04 Icehouse. Can you help me how to create internal and external interfaces using one NIC. I am confused after reading all the posts here.

    1. Hi,
      The post has both commands and supporting pictorial representations of the setup. Beyond these I will not be able to provide any direct support to you as of now. If you have trouble understanding any part of the post kindly let me know what your exact doubt is. I can clarify.

  13. Sorry that it seems my network problem is out of focus…
    I have updated the experiment result on 9/9/2014.
    Can it help to dig into the problem and anything else i can help to find the root cause?

    1. Sorry for the delay. I assume eth0 is in promiscuous mode (since you say the packets reach br-eth0, in my case br-proxy).
      If you read my other post on l3, I have explained in detail what steps are taken by the neutron l3 agent to do natting of a floatingip to an instance’s private ip. I hope you read through it. Launching instances in the external network is not the correct thing to do (although it might work in the case of the single machine setup alone). Further, from your result the only possible cause is ‘ip forwarding is not enabled in your machine’. Please check ‘https://fosskb.wordpress.com/2014/06/25/a-bite-of-virtual-linux-networking/’ under the section ‘ip forwarding’. If you have enabled it but still have problems, post the output of ‘ip netns exec qrouter- iptables -t nat -nvL’ and ‘ip netns exec qrouter- ip link show / ip addr show’.
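      As a quick check (generic Linux commands, not specific to this setup), ip forwarding can be inspected and enabled with ‘sysctl net.ipv4.ip_forward’ and ‘sysctl -w net.ipv4.ip_forward=1’.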

      1. Hi. I viewed your logs and there is something seriously wrong with the output of ‘ip netns exec qrouter- iptables -t nat -nvL’ that you posted. Below is the excerpt.

        Chain neutron-l3-agent-PREROUTING (1 references)
        pkts bytes target prot opt in out source destination
        20 1200 REDIRECT tcp -- * * 0.0.0.0/0 169.254.169.254 tcp dpt:80 redir ports 9697
        1 84 DNAT all -- * * 0.0.0.0/0 192.168.1.101 to:192.168.2.12

        Chain neutron-l3-agent-float-snat (1 references)
        pkts bytes target prot opt in out source destination
        21 1764 SNAT all -- * * 192.168.3.10 0.0.0.0/0 to:192.168.1.101

        I have highlighted the problem. The nat happens from 192.168.1.101 to 192.168.2.12 during prerouting phase, which is what happens to incomming packets, whereas the outgoing packets are natted from 192.168.3.10 to 192.168.1.101 which is entirely different. I am still unsure how this error crept in or if this was a copy paste error when you posted the log.

        Please stop the neutron-l3-agent, then clear all iptable rules inside the router namespace and start the neutron-l3-agent again.

        service neutron-l3-agent stop
        ip netns exec qrouter- iptables --flush
        service neutron-l3-agent start

        Check the rules again and let me know if the problem is corrected. Again sorry for delay I was caught up with other work.

      2. Sorry that it’s just a copy paste error, it should be 192.168.2.12.
        I also try your suggestion to refresh iptables, but the problem still exist:
        external PC can’t ping VM instance via floating IP & VM instance can’t ping external PC.

      3. can you execute the same command (ip netns exec qrouter- iptables -t nat -nvL) and post it again.

    2. Hi. Sorry, I wasn’t able to spot any problem with the iptables rules. If all interfaces (including the virtual ones you create manually) are in promiscuous mode, and security group rules allow ingress access for the protocol (icmp/ssh) you are using, then it should work. I have tested the methodology described in this post and it does work. Maybe your setup has some problem outside of openstack.

  14. Hi Akilesh,

    Nice post.

    I have a single physical server where DevStack is installed. It serves as the controller, network and compute node. The machine has one NIC, but I need to create 2 networks – one for external and another for management. So I created a virtual NIC – eth0:1 with 192.168.0.99 as the IP address. The host IP is 172.26.1.74. Then created 2 bridges as follows :
    sudo ovs-vsctl add-br br-mng
    sudo ovs-vsctl add-br br-ext
    sudo ovs-vsctl add-port br-mng eth0:1
    sudo ovs-vsctl add-port br-ext eth0
    I also created 2 networks namely – external and management. The issue is when I create a VM instance on the management network, it is not reachable(unable to ping).

    Following is my /etc/network/interface details:

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet manual
    up ip address add 0.0 dev $IFACE
    up ip link set $IFACE up
    down ip link set $IFACE down
    auto br-ext
    iface br-ext inet static
    address 172.26.1.74
    netmask 255.255.255.0
    network 172.26.1.0
    broadcast 172.26.1.255
    gateway 172.26.1.1
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 172.26.11.156 172.21.133.10
    dns-search oss.vyatta.net
    auto eth0:1
    iface eth0:1 inet manual
    up ip address add 0.0 dev $IFACE
    up ip link set $IFACE up
    down ip link set $IFACE down
    auto br-mng
    iface br-mng inet static
    address 192.168.0.99
    netmask 255.255.0.0
    network 192.168.0.0
    broadcast 192.168.255.255

      1. eth0 NIC IP 192.168.1.5
        eth0:0 first NIC alias: 192.168.1.6

        To setup eth0:0 alias type the following command as the root user:
        # ifconfig eth0:0 192.168.1.6 up

        Verify alias is up and running using following command:
        # ifconfig -a

        # ping 192.168.1.6
        However, if you reboot the system you will lose all your aliases. To make them permanent you need to add them to the network configuration file.
        Debian / Ubuntu Linux Instructions

        You can configure the additional IP addresses automatically at boot with another iface statement in /etc/network/interfaces:
        # vi /etc/network/interfaces

        Append text as follows:

        auto eth0:1
        iface eth0:1 inet static
        name Ethernet alias LAN card
        address 192.168.1.7
        netmask 255.255.255.0
        broadcast 192.168.1.255
        network 192.168.1.0

        Save and close the file. Restart the network:
        # /etc/init.d/networking restart

  15. For a more secure variant, apply VLAN tagging to your br-eth0 and br-eth1 patch port, and segregate your management and external networks. Use ovs-vsctl to make and link these ports in one step:
    ovs-vsctl -- add-port br-eth0 ptch-eth0-eth1 tag=1 -- set interface ptch-eth0-eth1 type=patch options:peer=ptch-eth1-eth0 -- add-port br-eth1 ptch-eth1-eth0 -- set interface ptch-eth1-eth0 type=patch options:peer=ptch-eth0-eth1

    This makes the patch port an access port with vlan=1, so traffic dumped onto "br-eth1" always ends up on vlan 1 on your physical network. You can do the same to make a bridge for 'eth2' that always ends up on vlan 2. Enable vlans 1 and 2 on your switch (in addition to 100-200 for your tenant networks). Now you still have the shared 1gbit link, but you don't have the security concerns of leaky traffic between networks.

  16. Another question:
    When I configure eth0 to be part of ex-br and I assign the bridge the former IP of the port like so:
    ovs-vsctl add-port br-ex eth0;
    ifconfig br-ex 128.131.168.152 up
    after restarting the network service ("service network restart") the bridge loses its IP address. I use CentOS and tried to create a file in /etc/sysconfig/network-scripts/ifcfg-br-ex and define an IP address in it, but it gets ignored.
    Additionally when I create a bridge manually without using ovs-vsctl, can it still be used by OpenVSwitch?
    Thanks

    1. Can you paste the /etc/sysconfig/network-scripts/ifcfg-br-ex file here? Also can you paste the command you used to create bridges?

      1. DEVICE=br-ex
        TYPE=Bridge
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=128.131.168.152
        NETMASK=255.255.255.0
        GATEWAY=128.131.168.100

        and for eth0

        DEVICE_INFO_HERE
        ONBOOT=yes
        BOOTPROTO=none
        PROMISC=yes
        BRIDGE=br-ex

        Although with this config I would get an error when I restart the network service, as the system tells me that the bridge already exists.
        The bridge was created with ovs-vsctl add-br br-ex

      1. Thanks for the link. But in general, how would you ensure that the configuration of the Open vSwitch bridges stays the same after restarting the network service, without a script?

    2. The openvswitch bridges, their ports and their configuration are maintained by ovsdb. Assigning an IP address to the bridge instead of to the port is more of a hack; openvswitch does not keep track of it. Making use of network-scripts is the only option left. A cruder way is to create a shell script and invoke it on bootup.
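      For reference, a sketch of what such network-scripts could look like on CentOS/RHEL, assuming the openvswitch package ships its ifup-ovs/ifdown-ovs helpers (addresses are taken from the question above; adjust to your setup):
      # /etc/sysconfig/network-scripts/ifcfg-br-ex
      DEVICE=br-ex
      DEVICETYPE=ovs
      TYPE=OVSBridge
      ONBOOT=yes
      BOOTPROTO=static
      IPADDR=128.131.168.152
      NETMASK=255.255.255.0
      GATEWAY=128.131.168.100
      # /etc/sysconfig/network-scripts/ifcfg-eth0
      DEVICE=eth0
      DEVICETYPE=ovs
      TYPE=OVSPort
      OVS_BRIDGE=br-ex
      ONBOOT=yes
      BOOTPROTO=none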

      1. Okay so if I follow the steps from the official documentation: http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-network-node.html
        ovs-vsctl add-br br-ex
        ovs-vsctl add-port br-ex eth0
        This would result in a lost connection, as eth0 is no longer reachable. According to http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEAD “A physical Ethernet device that is part of an Open vSwitch bridge should not have an IP address. If one does, then that IP address will not be fully functional.” which means that “You can restore functionality by moving the IP address to an Open vSwitch “internal” device, such as the network device named after the bridge itself”.

        So essentially the documentation of OpenvSwitch proposes a hack of assigning an IP address to a bridge?
        What is the official way of configuring OpenStack then? I am kind of confused …
        Cheers

      2. It is a hack. The proper way is to use eth0 for the internal network and add the other interfaces (eth1, eth2, etc.) to br-eth1 (data network) and br-ex (external network), as sketched below. eth0 is not added to any bridge, so it remains available to the host and can have a valid IP address assigned.
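        A rough sketch of that layout, assuming eth1 and eth2 are the spare interfaces on your host:
        # eth0: internal network, keeps its IP, not added to any bridge
        ovs-vsctl add-br br-eth1
        ovs-vsctl add-port br-eth1 eth1    # data network
        ovs-vsctl add-br br-ex
        ovs-vsctl add-port br-ex eth2      # external network
        ip link set eth1 up promisc on
        ip link set eth2 up promisc on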

  17. Thanks for this useful blog entry!
    I think you made a mistake in the last configuration scenario. Instead of:
    #Attach bridges using veth pair
    ovs-vsctl add-port br-eth1 eth1-br-proxy
    ovs-vsctl add-port br-ex ex-br-proxy
    ovs-vsctl add-port br-eth0 proxy-br-eth1
    ovs-vsctl add-port br-eth0 proxy-br-ex
    It should be:
    ovs-vsctl add-port br-eth1 eth1-br-proxy
    ovs-vsctl add-port br-ex ex-br-proxy
    ovs-vsctl add-port br-proxy proxy-br-eth1
    ovs-vsctl add-port br-proxy proxy-br-ex
    Cheers

  18. Hi guys, I could use some help here if you have time! Basics: I have 3 NUCs on which I am installing openstack. One is the controller, one is the network node and the other is compute. The NUCs only have one interface each, wherein my problem lies! What I have done is create VLANs for each interface on the NUCs. So for instance on the network host I have 3 VLANs: one for management, one for instance tunnels and one is the “unnumbered” interface for the br-ex. Pretty much everything works great EXCEPT that I can’t get the br-ex to work (so I can’t ping the public IP or get to the internet). I can however bring up an instance and get it working on the tenant network. I have no clue where I am going wrong; I can send you my configs if you have some time to look at them. I would GREATLY appreciate it.

    1. You are not alone with this problem. Several people have reported similar issues. I am planning a post solely for the purpose of debugging such issues.

      1. Thank you, that would be great, as I am at a stopping point and can’t go any further. Please let us know when it is up. Thanks!

  19. I am trying to configure openstack on two machines and I need your guidance
    in setting up multiple networks on a single NIC. Refer to the image in the link to understand
    the network topology https://www.dropbox.com/s/n5d5blv4a26rt3c/OpenStack_Network.bmp?dl=0
    that I have configured.

    I am facing an issue in bringing up an additional network (say Mgmt or data) on the same interface (eth0)
    which was initially configured for the external network.

    Now, I was able to ping between host machines A and B using the external network, but if I configure
    br-eth1 to 10.20.0.15 on host A and 10.20.0.16 on host B, I am not able to ping them.
    I tried running ‘tcpdump -i eth0 -v icmp’ on host A while pinging host B (10.20.0.16) and I could not
    see any packets on eth0. Have I configured something wrong? Please guide me on this. Thanks

  20. First of all, thanks for your detailed setup steps.
    I followed your steps to install Icehouse in a single-node, single-NIC environment.
    After that, I can launch instances and it works like a charm. But the only problem is that the VM instances
    can’t access the external network. The network topology can be seen in this image
    (https://www.dropbox.com/s/c66n7j3wg1miaog/network-topology.png?dl=0).
    VM1 and VM2 can ping each other successfully but can’t ping the outside world, not even the
    external gateway (192.168.1.1). VM3 is attached to the external network directly, but it still can’t access the
    outside world either (it even fails to ping 192.168.1.1).

    Could you help to resolve this issue? and which information should I provide to help to dig into this?

    1. 1. Can your instances ping 192.168.2.1?
      2. Your VMs technically should not be reachable from the external network. Instead you should assign a floating IP to the VMs from the external network (see the sketch after this list). Your VMs should be reachable only using that floating IP.
      3. VM3 is not reachable from the external network. I am not sure whether this is a valid setup, because the external network serves the purpose of bringing VM traffic out of openstack. I have not tried launching VMs in it. Let me try and get back. Meanwhile, try whether 1 and 2 work for you.
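      As an illustration of point 2, a sketch with the neutron CLI (the network name and UUIDs are placeholders):
      neutron floatingip-create ext-net                            # 'ext-net' stands for your external network
      neutron port-list                                            # note the UUID of the port carrying the VM's fixed IP
      neutron floatingip-associate <floatingip-uuid> <port-uuid>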

      1. The results of actions 1 & 2 are listed below:
        1. VM1 & VM2 can ping 192.168.2.1
        VM1 & VM2 can ping 192.168.1.100
        VM1 & VM2 can’t ping 192.168.1.1
        VM3 can’t ping 192.168.2.1
        VM3 can’t ping 192.168.1.100
        VM3 can’t ping 192.168.1.1
        2. I associated a floating IP (192.168.1.101) with VM1, but I still can’t reach VM1 from the external network.

    2. Please do check if eth0 is set in promiscuous mode. Check if your external gateway can ping 192.168.1.100. If not, then we have to use tcpdump to check where exactly the packet is getting dropped. Start a continuous ping from the external gateway to your instance’s floating IP. Then issue tcpdump on each of the interfaces below and tell me on which interfaces you see the ICMP echo request message and on which you don’t.
      1. eth0
      2. The interface corresponding to the port connecting the openstack router to the external network. This interface will exist inside the router’s namespace. To execute tcpdump inside the namespace you have to use ‘ip netns exec <namespace> tcpdump -lennvi <interface>’ (see the sketch after this list).
      3. phy-br-ex
      4. int-br-ex
      I know it’s tough to follow, especially if you are a beginner, but you have to help me to help you.
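      To illustrate step 2, a sketch of running tcpdump inside the router namespace (the qrouter UUID and qg- interface name will differ on your node):
      ip netns list                                  # find the qrouter-<uuid> namespace
      ip netns exec qrouter-<uuid> ip addr           # find the qg-xxxx interface inside it
      ip netns exec qrouter-<uuid> tcpdump -lennvi qg-xxxxxxxx-xx icmp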

      1. tcpdump result:
        Case 1: Ping 192.168.1.1 from VM
        [tapxxxx] & [qbrxxxx] & [qvbxxxx] & [qvoxxxx]
        22:24:16.462247 IP 192.168.2.10 > 192.168.1.1: ICMP echo request, id 28417, seq 0, length 64
        22:24:17.464266 IP 192.168.2.10 > 192.168.1.1: ICMP echo request, id 28417, seq 1, length 64
        22:24:18.464738 IP 192.168.2.10 > 192.168.1.1: ICMP echo request, id 28417, seq 2, length 64
        22:24:19.461006 IP 192.168.1.101 > 192.168.2.10: ICMP host 192.168.1.1 unreachable, length 92
        22:24:19.461302 IP 192.168.1.101 > 192.168.2.10: ICMP host 192.168.1.1 unreachable, length 92
        22:24:19.461313 IP 192.168.1.101 > 192.168.2.10: ICMP host 192.168.1.1 unreachable, length 92
        [br-int]:
        (no packets related to 192.168.1.101)

        Case 2: Ping 192.168.1.101 (VM floating IP) from PC
        [br-eth0]
        22:17:15.594347 ARP, Request who-has 192.168.1.101 tell 192.168.1.31, length 28
        22:17:16.592029 ARP, Request who-has 192.168.1.101 tell 192.168.1.31, length 28
        22:17:18.610343 ARP, Request who-has 192.168.1.101 tell 192.168.1.31, length 28
        (no arp reply)
        [proxy-br-ex]
        (no packets related to 192.168.1.101)
        [proxy-br-eth1]
        (no packets related to 192.168.1.101)

        And here is my “ovs-vsctl show” result:
        Bridge br-int
            fail_mode: secure
            Port int-br-ex
                Interface int-br-ex
            Port br-int
                Interface br-int
                    type: internal
            Port "qvoa9c3c3c0-9b"
                tag: 1
                Interface "qvoa9c3c3c0-9b"
            Port "qr-2adafd77-8b"
                tag: 1
                Interface "qr-2adafd77-8b"
                    type: internal
            Port "tapb9df269f-ec"
                tag: 1
                Interface "tapb9df269f-ec"
                    type: internal
            Port "int-br-eth1"
                Interface "int-br-eth1"
        Bridge "br-eth0"
            Port "br-eth0"
                Interface "br-eth0"
                    type: internal
            Port "eth0"
                Interface "eth0"
            Port "phy-br-eth0"
                Interface "phy-br-eth0"
            Port proxy-br-ex
                Interface proxy-br-ex
            Port "proxy-br-eth1"
                Interface "proxy-br-eth1"
        Bridge br-ex
            Port br-ex
                Interface br-ex
                    type: internal
            Port "qg-2648325b-6e"
                Interface "qg-2648325b-6e"
                    type: internal
            Port phy-br-ex
                Interface phy-br-ex
            Port ex-br-proxy
                Interface ex-br-proxy
        Bridge "br-eth1"
            Port "phy-br-eth1"
                Interface "phy-br-eth1"
            Port "br-eth1"
                Interface "br-eth1"
                    type: internal
            Port "eth1-br-proxy"
                Interface "eth1-br-proxy"
        ovs_version: "2.0.1"

  21. Hi Akilesh,
    We are working on an openstack havana (ubuntu 12.04 LTS) two-node setup, where one node is the controller + network node and the other node is the compute node. Both nodes have one NIC. We are using the GRE tunnelling mechanism as our network plugin. I’m facing an issue while launching a virtual instance from the controller. Error: “Connection to Neutron failed: Maximum attempts reached”. It seems to be a networking issue. Could you please help us in this regard? Please give us an idea of how we should configure the network.
    Regards,
    Jagadeesh

    1. This is not a neutron issue. The error occurs when nova fails to contact the neutron server for VIF creation prior to instance creation. Issue the ‘keystone catalog’ command on the controller node and check whether the URL for the neutron endpoint is reachable from the controller. You have to follow the exact instructions given under the section ‘Using the same Interface for all Networks’, except that the compute node need not create ‘br-ex’.

      1. Hey Akilesh,
        Thanks for your quick response. Regarding the instructions under the section ‘Using the Same Interface for all Networks’, they seem to be for the VLAN OVS plug-in. I’m using the GRE OVS plug-in; are the instructions the same in my case as well? If not, please provide me with the instructions. The reason I’m asking is that I haven’t configured ‘network_vlan_ranges’, ‘bridge_mapping’ etc. in the conf files. I just followed the openstack havana installation guide and only made/enabled settings relevant to the GRE plug-in. Please clarify.

        Also, below is the content of the network interfaces file of my controller+network node (/etc/network/interfaces). Note that I’m running openstack on ubuntu 12.04 LTS.

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        up ifconfig $IFACE promisc on
        down ip link set $IFACE promisc off
        down ifconfig $IFACE down

        auto br-ex
        iface br-ex inet static
        address 192.168.1.53
        netmask 255.255.255.0
        gateway 192.168.1.1

        The same (above) applies to the compute node as well (just the IP address is different).
        As per the installation guide I’ve created two bridges, br-ex and br-int, on both nodes. br-tun is created internally by the plug-in. I’ve added eth0 as a port to the br-ex bridge on both nodes.

        I’ve executed the ‘keystone catalog’ command and I can see the endpoint of neutron (service: network) as “publicURL http://controller:9696“. I think it is proper. Also, in the previous comment you mentioned checking that the URL is reachable from the controller; how do I do that?

        I’m stuck! No clue what to do; that’s why I’ve provided the information which I think might give you a better idea about my case. Please guide me in this regard. Appreciate your response. Thanks.

    2. The only difference is that you need not create br-eth1, and there is no need to define bridge_mappings. You only have to define local_ip in ml2_conf.ini, which I believe you have done. I missed that you are using GRE mode, but what you have done is already correct. As I said earlier, this error occurs when nova cannot reach neutron. Your keystone catalog returned http://controller:9696/. Make sure the name ‘controller’ is resolvable by all your nodes, or add an entry in /etc/hosts. By reachable I mean you should be able to ping ‘controller’, your neutron-server is running (test using ‘service neutron-server status’) and it is listening on port 9696 (test using ‘netstat -tulnp’), as sketched below.
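      A quick sketch of those checks, run on the controller (the hostname ‘controller’ is taken from your catalog output):
      ping -c 3 controller                   # the name must resolve; otherwise add it to /etc/hosts
      service neutron-server status          # neutron-server should be running
      netstat -tulnp | grep 9696             # and listening on port 9696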

      1. Since we are using the GRE plug-in, I have a concern regarding the commands you have mentioned above:
        ip link add proxy-br-eth1 type veth peer name eth1-br-proxy
        ip link add proxy-br-ex type veth peer name ex-br-proxy
        I believe veth is for the VLAN plug-in.
        If possible, please provide the set of commands I should run with the GRE plug-in, i.e. what bridges I have to create and the ports/proxies I have to attach.

      2. I will upload the commands and a supporting image sometime on the 1st of September. Check them.

  22. Hi! I need some confirmation: does the data network need to have an IP address or not? I could not figure it out. Yours is just a single NIC, so I think putting an IP on it would not matter. Or do I also need to put an IP on br-ex? I know my question is about multiple NICs, which is probably solved since you are now dealing with a single NIC. Hope you can show your configuration if possible.
    Ehwan
    Unlucky neutron user

    1. Rephrasing your query: does the interface/bridge connecting to the data network or external network need an IP? No, it doesn’t. You just have to set the interface up and in promiscuous mode, as shown below. Only the interface connecting to the internal network needs an IP, because that alone will be used by the host. The others are used by openstack for laying out the overlay network. Hope it’s clear.
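      For example, assuming eth1 is the interface you attach to the data or external bridge:
      ip link set eth1 up
      ip link set eth1 promisc on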

    2. When using open source products you should not consider yourself unlucky. It’s a privilege to share the burden and success of the community.

  23. Hi,
    I have a single machine with a single NIC and I want to assign IPs to openstack VMs from my LAN (via DHCP/static), not through the internal DHCP. Do you have any idea how to disable the internal DHCP and assign IPs from the LAN network?

    1. Having a single or multiple NICs is not an issue here. You can use my instructions to install a working openstack on a single machine. After this you have to choose one of the following:
      1. Create all your instances on the external network directly, instead of creating them on a private network (a sketch for this option follows the list).
      2. Configure dnsmasq (on whichever machine hosts the neutron-dhcp-agent) to run as a relay agent instead of in server mode. Doing so is beyond openstack; you would have to modify the source code of the neutron-dhcp-agent (the service that starts the dnsmasq processes that serve as DHCP servers) to do this.
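      A sketch for option 1, booting an instance directly on the external network so it takes an address from your LAN subnet (the network name, image and flavor here are placeholders):
      neutron net-list                                                       # find the UUID of your external network
      nova boot --flavor m1.tiny --image cirros --nic net-id=<ext-net-uuid> vm-on-lan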

  24. Your article gave me an idea about my problem; it is an awesome article. But I have a query regarding 2 NICs; I know this article uses one, and I am confused and need your insights. Do I not need to run the ip link add …? On my openstack, the VMs can get an IP using a flat network, but using ml2/vlan they could not be reached; it seems DHCP could not get through to the VMs. I think the problem is somewhere here.

    1. This article is for people who want to install openstack on a single machine with a single NIC. There are many ways in which a person can configure openstack. You have to give me more information regarding your setup: are you doing single node or multi node? If you have 2 NICs you could use one NIC for both the external and internal networks and the other for the data network. Assuming you have a single machine with 2 NICs, I would do:
      ovs-vsctl add-br br-ex
      ovs-vsctl add-br br-eth1
      ovs-vsctl add-port br-eth1 eth1   # this is for the data network
      ovs-vsctl add-port br-ex eth0     # this is for the external network; now eth0 can not be used by the host, so give eth0’s IP address to br-ex
      ifconfig br-ex up                 # now both the external and internal networks use eth0
      No need of creating veth pairs.
      Then do ‘ip link set <interface> up promisc on’ on all the interfaces involved.

      1. Yes, I am doing multi-node (controller, network and one compute node). I will give this a try; that is what I did, except for adding the ‘ip link set <interface> up promisc on’, based on my assumption that only br-ex needed to be in promiscuous mode, as in the documentation. Your article is the missing link, lol. Thanks a lot for sharing your knowledge; it could save a lot of NICs. Using this one interface, do I need to enable net.ipv4.ip_forward=1 in sysctl.conf?

      2. Just verifying the “ifconfig br-ex up” step: as you said to give eth0’s IP address to br-ex, does that mean we have to remove eth0’s address? Mine is set to the static IP 192.168.1.100; or do we need a new IP, for example? This also seems to contrast with one of your replies to Ehwan that states “does the external network need an IP? No it doesn’t”. Now I’m confused.

    2. You have to set ‘net.ipv4.ip_forward=1’ only on the network node, because that is where your neutron-l3-agent will be running and that is where the virtual routers will exist. On the controller you do not need any bridges; just eth0 with a normal configuration is enough. The compute node requires just two interfaces, one for the data network (you add eth1 to br-eth1) and another for the internal network (use eth0; no need to add any bridges other than br-eth1 and br-int). Only on the network node would you need three interfaces: eth0 for the internal network, eth1 to add to br-eth1 for the data network, and eth2 to add to br-ex for the external network. If you combine the eth0 and eth2 functionality as I said in my previous reply you should be fine.

    3. If your external, data and internal networks do not share the same NIC, only the NIC connecting to the internal network needs to have an IP address. My suggestion to you was to share eth0 between the external and internal networks, which is why I asked you to remove eth0’s IP address, add eth0 to br-ex and finally assign eth0’s IP address to br-ex (see the sketch below). I am planning to upload an image to my post that will make things clear, so hang on if you do not understand.
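      Concretely, something along these lines, using your 192.168.1.100 address (run it from the console, not over SSH, since the connection will drop while the IP moves):
      ovs-vsctl add-br br-ex
      ovs-vsctl add-port br-ex eth0
      ip addr flush dev eth0
      ip link set eth0 up promisc on
      ip addr add 192.168.1.100/24 dev br-ex    # adjust the prefix length to your LAN
      ip link set br-ex up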

      1. The image gave me a clear view of what you are describing and it was very helpful to understand. Yes, I got what you mean, thanks for that information. Is it possible to put this in a network configuration file, so that ovs-vsctl does not have to be run by hand? Some articles mention putting ovs as the device type; for example on Fedora, for br-ex:
        DEVICE=eth2
        DEVICETYPE=ovs
        TYPE=OVSPort
        ONBOOT=yes
        OVSBOOTPROTO=none
        OVS_BRIDGE=br-eth2

  25. Actually Azure provides only one NIC; it does not provide more than one. I am not a networking guy. Can you help me resolve the issue?

    1. Skip the below two steps from my post
      ovs-vsctl add-port br-eth0 eth0
      ifconfig br-eth0 up
      Do the rest.
      Edit your /etc/network/interfaces file to look like the below and restart your instance. That should add eth0 to br-eth0 and then set the DHCP-assigned IP address on br-eth0. Come back if anything goes wrong.

      auto eth0
      iface eth0 inet manual
      auto br-eth0
      iface br-eth0 inet dhcp
      bridge_ports eth0
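      Note that ‘bridge_ports’ is handled by bridge-utils and may not attach eth0 to an Open vSwitch bridge. If br-eth0 is an OVS bridge, a sketch using the ifupdown hooks that the openvswitch-switch package is supposed to provide (syntax assumed from that package’s documentation) would be:

      allow-ovs br-eth0
      iface br-eth0 inet dhcp
      ovs_type OVSBridge
      ovs_ports eth0

      allow-br-eth0 eth0
      iface eth0 inet manual
      ovs_bridge br-eth0
      ovs_type OVSPort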

  26. I am trying to install openstack on a single node, an ubuntu 12.04 VM on Azure. It has a single NIC, so I followed the above article to set up the bridges etc. But after that the networking of my VM breaks and I have to shut down the VM. I tried four times and every time the result is the same. Please advise.

    1. Those instructions work when you have direct keyboard access to the host, not when the host is remote, and certainly not when the host itself is virtualized. If your host is virtual, why not add more NICs? It is possible in AWS and also in openstack. I am not sure about Azure, though.

  27. You are correct. What I meant is that there is going to be a significant reduction in the necessity to know the networking stuff. Only those in the core business need to know, which is good anyway. We can concentrate on things of our own interest, as you say.

  28. Actually, that’s not entirely true and that’s not the right perspective from which to look at cloud. IMHO, cloud enables people who don’t and shouldn’t need to know about networking from a business, end-user perspective. Network internals will still be a sought-after area, but by those people who want to make a career out of it and who have the interest/passion, because for neutron to work and to implement it, networking knowledge is a must. So in short, cloud helps those who want to use networking but don’t want to know the internals of it :) My 2 cents

  29. Thanks for the nice article. Hoping to read more about the networking basics that are prerequisites to understanding and using neutron, for a non-networking guy like me :) – deepak

    1. Neutron is for non-networking guys, actually. One bad thing about cloud is that it will slowly erode the need for knowledge of basic network and server administration. Anyway, I’ll think about a post on Linux bridging and networking in general.
