Install Ubuntu with a partitioning scheme as per your requirements.
Note: Run all commands as the super-user. We assume the IP of the single machine is 10.0.0.1.
Configure the repositories and update the packages.
This step is needed only if the OS is Ubuntu 12.04; you can skip the repository configuration on Ubuntu 14.04.
apt-get install -y python-software-properties
add-apt-repository cloud-archive:icehouse
apt-get update && apt-get -y upgrade
Note: A reboot is needed only if the kernel was updated.
reboot
Support packages
RabbitMQ server
apt-get install -y rabbitmq-server
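If you want to confirm the message broker came up cleanly (an optional check), query its status:
service rabbitmq-server status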
MySQL server
Install MySQL server and related software
apt-get install -y mysql-server python-mysqldb
Edit the following lines in /etc/mysql/my.cnf
[mysqld]
...
bind-address = 0.0.0.0
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Restart MySQL service
service mysql restart
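To confirm MySQL is now listening on all interfaces rather than only on localhost (an optional sanity check; 3306 is the default MySQL port):
netstat -tulnp | grep 3306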
Other Support Packages
apt-get install -y ntp vlan bridge-utils
Edit the following lines in the file /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Load the values
sysctl -p
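You can read the values back to confirm they took effect (optional):
sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter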
Keystone
Install keystone
apt-get install -y keystone
Create a MySQL database named keystone and add credentials
mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone_dbpass';
mysql> quit
Edit the file /etc/keystone/keystone.conf. Comment out the following line
connection = sqlite:////var/lib/keystone/keystone.db
and add the line
connection = mysql://keystone:keystone_dbpass@10.0.0.1/keystone
Restart the keystone service and sync the database
service keystone restart
keystone-manage db_sync
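If you want to confirm that the sync actually populated the database (an optional check, using the database credentials created above), list the tables:
mysql -u keystone -pkeystone_dbpass -h 10.0.0.1 keystone -e "SHOW TABLES;"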
Export the variables needed to run the initial keystone commands
export OS_SERVICE_TOKEN=ADMIN
export OS_SERVICE_ENDPOINT=http://10.0.0.1:35357/v2.0
Create the admin user, admin tenant, admin role and service tenant. Then add the admin user to the admin tenant with the admin role.
keystone tenant-create --name=admin --description="Admin Tenant"
keystone tenant-create --name=service --description="Service Tenant"
keystone user-create --name=admin --pass=ADMIN --email=admin@example.com
keystone role-create --name=admin
keystone user-role-add --user=admin --tenant=admin --role=admin
Create keystone service
keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
Create keystone endpoint
keystone endpoint-create --service=keystone --publicurl=http://10.0.0.1:5000/v2.0 --internalurl=http://10.0.0.1:5000/v2.0 --adminurl=http://10.0.0.1:35357/v2.0
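At this point you can list what has been registered so far, still using the service token exported above:
keystone service-list
keystone endpoint-list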
Unset the exported values
unset OS_SERVICE_TOKEN
unset OS_SERVICE_ENDPOINT
Create a file named creds and add the following lines
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.0.0.1:35357/v2.0
Source the file
source creds
Test the Keystone setup
keystone token-get
keystone user-list
Glance (Image Store)
Install Glance
apt-get install -y glance
Create database and credentials for Glance
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance_dbpass';
quit;
Create glance related keystone entries
keystone user-create --name=glance --pass=glance_pass --email=glance@example.com
keystone user-role-add --user=glance --tenant=service --role=admin
keystone service-create --name=glance --type=image --description="Glance Image Service"
keystone endpoint-create --service=glance --publicurl=http://10.0.0.1:9292 --internalurl=http://10.0.0.1:9292 --adminurl=http://10.0.0.1:9292
Edit /etc/glance/glance-api.conf and modify the following lines
# sqlite_db = /var/lib/glance/glance.sqlite
connection = mysql://glance:glance_dbpass@10.0.0.1/glance

[keystone_authtoken]
auth_host = 10.0.0.1
auth_port = 5000
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = glance_pass

[paste_deploy]
flavor = keystone
Edit /etc/glance/glance-registry.conf and modify the following lines as shown below
# sqlite_db = /var/lib/glance/glance.sqlite
connection = mysql://glance:glance_dbpass@10.0.0.1/glance

[keystone_authtoken]
auth_host = 10.0.0.1
auth_port = 5000
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = glance_pass

[paste_deploy]
flavor = keystone
Restart Glance services
service glance-api restart
service glance-registry restart
Sync the database
glance-manage db_sync
Download a pre-bundled image for testing
glance image-create --name Cirros --is-public true --container-format bare --disk-format qcow2 --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance index
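glance index simply lists the registered images; once the download finishes, the image should show with status active. The equivalent image-list subcommand works as well:
glance image-list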
Nova (Compute)
Install the Nova services
apt-get install -y nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient nova-compute nova-console
Create database and credentials for Nova
mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova_dbpass';
mysql> quit
Create Keystone entries for Nova
keystone user-create --name=nova --pass=nova_pass --email=nova@example.com
keystone user-role-add --user=nova --tenant=service --role=admin
keystone service-create --name=nova --type=compute --description="OpenStack Compute"
keystone endpoint-create --service=nova --publicurl=http://10.0.0.1:8774/v2/%\(tenant_id\)s --internalurl=http://10.0.0.1:8774/v2/%\(tenant_id\)s --adminurl=http://10.0.0.1:8774/v2/%\(tenant_id\)s
Open /etc/nova/nova.conf and edit the file as follows
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 10.0.0.1
my_ip = 10.0.0.1
vncserver_listen = 10.0.0.1
vncserver_proxyclient_address = 10.0.0.1
novncproxy_base_url=http://10.0.0.1:6080/vnc_auto.html
glance_host = 10.0.0.1
auth_strategy=keystone
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.0.0.1:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron_pass
neutron_metadata_proxy_shared_secret=openstack
neutron_admin_auth_url=http://10.0.0.1:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
vif_plugging_is_fatal: false
vif_plugging_timeout: 0

[database]
connection = mysql://nova:nova_dbpass@10.0.0.1/nova

[keystone_authtoken]
auth_uri = http://10.0.0.1:5000
auth_host = 10.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova_pass
Sync the Nova database
nova-manage db sync
Restart all nova services
service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
service nova-compute restart
service nova-console restart
Test the Nova installation using the following command
nova-manage service list
The output should be something like this
Binary            Host    Zone      Status   State  Updated_At
nova-consoleauth  ubuntu  internal  enabled  :-)    2014-04-19 08:55:13
nova-conductor    ubuntu  internal  enabled  :-)    2014-04-19 08:55:14
nova-cert         ubuntu  internal  enabled  :-)    2014-04-19 08:55:13
nova-scheduler    ubuntu  internal  enabled  :-)    2014-04-19 08:55:13
nova-compute      ubuntu  nova      enabled  :-)    2014-04-19 08:55:14
nova-console      ubuntu  internal  enabled  :-)    2014-04-19 08:55:14
Also run the following command to check whether Nova is able to authenticate with the Keystone server
nova list
Neutron (Networking Service)
Install the Neutron services
apt-get install -y neutron-server neutron-plugin-openvswitch neutron-plugin-openvswitch-agent neutron-common neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent openvswitch-switch
Create database and credentials for Neutron
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron_dbpass';
quit;
Create Keystone entries for Neutron
keystone user-create --name=neutron --pass=neutron_pass --email=neutron@example.com
keystone service-create --name=neutron --type=network --description="OpenStack Networking"
keystone user-role-add --user=neutron --tenant=service --role=admin
keystone endpoint-create --service=neutron --publicurl http://10.0.0.1:9696 --adminurl http://10.0.0.1:9696 --internalurl http://10.0.0.1:9696
Edit /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
notification_driver=neutron.openstack.common.notifier.rpc_notifier
verbose=True
rabbit_host=10.0.0.1
rpc_backend=neutron.openstack.common.rpc.impl_kombu
service_plugins=router
allow_overlapping_ips=True
auth_strategy=keystone
neutron_metadata_proxy_shared_secret=openstack
service_neutron_metadata_proxy=True
nova_admin_password=nova_pass
notify_nova_on_port_data_changes=True
notify_nova_on_port_status_changes=True
nova_admin_auth_url=http://10.0.0.1:35357/v2.0
nova_admin_tenant_id=service
nova_url=http://10.0.0.1:8774/v2
nova_admin_username=nova

[keystone_authtoken]
auth_host = 10.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_pass
signing_dir = $state_path/keystone-signing
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 10.0.0.1
rabbit_port = 5672
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://10.0.0.1:8774
nova_admin_username = nova
nova_admin_tenant_id =
nova_admin_password = nova_pass
nova_admin_auth_url = http://10.0.0.1:35357/v2.0

[database]
connection = mysql://neutron:neutron_dbpass@10.0.0.1/neutron

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
Open /etc/neutron/plugins/ml2/ml2_conf.ini and make the following changes
[ml2]
type_drivers=flat,vlan
tenant_network_types=vlan,flat
mechanism_drivers=openvswitch

[ml2_type_flat]
flat_networks=External

[ml2_type_vlan]
network_vlan_ranges=Intnet1:100:200

[ml2_type_gre]

[ml2_type_vxlan]

[securitygroup]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True

[ovs]
bridge_mappings=External:br-ex,Intnet1:br-eth1
We have created two physical networks: one flat network and one VLAN network with VLAN IDs ranging from 100 to 200.
We have mapped the External network to br-ex and Intnet1 to br-eth1.
Now create the bridges
Note: The naming convention for the ethernet cards may also be like “p4p1”, “em1” in Ubuntu 14.04. You can use the appropriate interface names below instead of “eth1” and “eth2”.
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth1
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-eth1 eth1
ovs-vsctl add-port br-ex eth2
According to our setup, all traffic belonging to the External network will be bridged to eth2, and all traffic on Intnet1 will be bridged to eth1.
If you have only one interface (eth0) and would like to use it for all networking, please have a look at https://fosskb.in/2014/06/10/managing-openstack-internaldataexternal-network-in-one-interface.
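Before moving on, you can confirm the bridge and port layout with Open vSwitch itself (the output will vary with your interface names):
ovs-vsctl show
ovs-vsctl list-ports br-eth1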
Edit /etc/neutron/metadata_agent.ini to look like this
[DEFAULT]
auth_url = http://10.0.0.1:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_pass
metadata_proxy_shared_secret = metadata_pass
Edit /etc/neutron/dhcp_agent.ini to look like this
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
Edit /etc/neutron/l3_agent.ini to look like this
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
Restart all Neutron services
service neutron-server restart
service neutron-plugin-openvswitch-agent restart
service neutron-metadata-agent restart
service neutron-dhcp-agent restart
service neutron-l3-agent restart
Check if the services are running. Run the following command
neutron agent-list
The output should look like this
+--------------------------------------+--------------------+--------+-------+----------------+
| id                                   | agent_type         | host   | alive | admin_state_up |
+--------------------------------------+--------------------+--------+-------+----------------+
| 01a5e70c-324a-4183-9652-6cc0e5c98499 | Metadata agent     | ubuntu | :-)   | True           |
| 17b9440b-50eb-48b7-80a8-a5bbabc47805 | DHCP agent         | ubuntu | :-)   | True           |
| c30869f2-aaca-4118-829d-a28c63a27aa4 | L3 agent           | ubuntu | :-)   | True           |
| f846440e-4ca6-4120-abe1-ffddaf1ab555 | Open vSwitch agent | ubuntu | :-)   | True           |
+--------------------------------------+--------------------+--------+-------+----------------+
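As an additional check that the Neutron API itself can authenticate against Keystone, list the networks; at this stage the list will simply be empty:
neutron net-list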
Users who want to know what happens under the hood can read
- How neutron-openvswitch-agent provides L2 connectivity between Instances, DHCP servers and routers
- How neutron-l3-agent provides services like routing, natting, floatingIP and security groups
- See more of Linux networking capabilities
Horizon (OpenStack Dashboard)
apt-get install -y openstack-dashboard
After installation, log in using the following credentials
URL: http://10.0.0.1/horizon
Username: admin
Password: ADMIN
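If the login page does not load, make sure Apache, which serves Horizon on Ubuntu, is running:
service apache2 status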
For an automated OpenStack install, please check OpenStack using SaltStack.
Use the following link to get started with the first instance on OpenStack.
Procedure to get started with the first instance on OpenStack
keystone-manage db_sync
2016-05-16 13:11:15.569 9101 CRITICAL keystone [-] OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on '10.0.0.1' (110)") None None … Can anyone help?
Hi, this may sound very lazy, but I'm really having big problems installing this. Does anyone here know a place where I can download a machine with OpenStack already installed and just run it on VMware Workstation?
Hello!
First, thank you for this very interesting tutorial. I have followed all the steps, and even though I can create and start instances, I can't access the metadata service (169.254.169.254) from them. I created a route in the instances and configured the security settings to permit the correct access, but all I get is an internal server error.
Can you paste a bit of your metadata agent log (it will be inside /var/log/neutron/) to Pastebin and give us the URL?
Thank you Akilesh, you were right and I just had to look at the logs… Very simple, in fact. In the logs, I had this error: 2015-05-20 09:15:07.642 6808 TRACE neutron.agent.metadata.agent EndpointNotFound: Could not find Service or Region in Service Catalog.
So I checked the endpoint list (with keystone endpoint-list) and found that the region was not RegionOne but regionOne. So I changed the "auth_region =" entry in metadata_agent.ini from RegionOne to regionOne and it works.
By the way, just another question: is it normal that, if I want my instances to be automatically configured with the 169.254.169.254 route, I must set the options:
enable_isolated_metadata =
enable_metadata_network =
to True?
Thank you again.
I did not understand your last question. As long as your instances have picked up the IP assigned to them by the DHCP agent and the metadata agent is up and running, your instances can reach 169.254.169.254.
Did you install Cinder?
The Volumes menu is not shown in the console. What should I do?
Thank you for the tutorial… After a restart the services stop. I tried restarting all the services, but some errors occur in
neutron agent-list. Please help.
In my case, I needed to add the following to /etc/neutron/neutron.conf to make it work:
[DEFAULT]
…
lock_path = $state_path/lock
Hi,
Once I install Neutron and add the bridges, I am not able to ping the host IP from another PC on the same network.
My setup is on a single node with IP 192.168.1.15,
but after the Neutron installation and bridge creation I am not able to ping it from an IP like 192.168.1.10.
Pinging 192.168.1.15 with 32 bytes of data:
Request timed out.
Request timed out.
Hi,
If you have two NICs, you would have to attach the bridge to the NIC other than the one having the IP 192.168.1.15.
If you have only one NIC, kindly use the following procedure:
https://fosskb.wordpress.com/2014/06/10/managing-openstack-internaldataexternal-network-in-one-interface/
Hello,
I have a problem. If I don't want to disable OpenSSH, following the installation above, do I need to have 3 NICs (eth0: 10.0.0.1; eth1: private network; eth2: public network)?
Thank you
I cannot understand the relevance of OpenSSH here.
Hello,
I hit the same issue. To avoid this problem, do I need a minimum of 3 NICs (eth0 for 10.0.0.1; eth1 for the private network; eth2 for the public network)? Please explain clearly.
Thank you
We are unable to understand your problem. If you are installing OpenStack, it is recommended to have machines with two NICs to use as compute nodes and machines with three NICs as network nodes. But you can still manage to install OpenStack on a single machine with a single NIC.
Hi akilesh1597,
My problem is:
My server has 2 NICs: eth2 has the IP 10.0.0.1 and connects to the external network; eth1 connects to the internal network.
After installing, I cannot reach 10.0.0.1 using OpenSSH. I think I have to dedicate a NIC with a fixed IP for SSH; this is the reason I asked about using 3 NICs for this problem. Do you have any idea?
Thank you
Actually, I use a remote PC to install. My public IP is forwarded to 10.0.0.1 to access the server via SSH. As you know, some people also cannot access SSH after installing Neutron.
On running this command:
nova list
it shows:
ERROR: Unauthorized (HTTP 401)
Please help me with this issue.
I have posted the contents of nova.conf; please help me out.
Now when I am running the command:
nova list
it shows:
"ERROR: You must provide a username via either --os-username or env[OS_USERNAME]"
Hi
Did you create a file named creds? Did you source the file?
Yes, I did that, and after that when I run the command
nova list
it shows the error:
ERROR: Unauthorized (HTTP 401)
Hi
Can you paste your nova.conf here?
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 10.1.105.30
my_ip=10.1.105.30
vncserver_listen = 10.1.105.30
vncserver_proxyclient_address = 10.1.105.30
novncproxy_base_url=http://10.1.105.30:6080/vnc_auto.html
glance_host = 10.1.105.30
auth_strategy = keystone
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.1.105.30:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron_pass
neutron_metadata_proxy_shared_secret=openstack
neutron_admin_auth_url=http://10.1.105.30:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
vif_plugging_is_fatal: false
vif_plugging_timeout: 0
[database]
connection = mysql://nova:nova_dbpass@10.1.105.30/nova
[keystone_authtoken]
auth_uri = http://10.1.105.30:5000
auth_host = 10.1.105.30
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova_pass
Check the bind-address value in my.cnf. It is probably 127.0.0.1; please replace that with your controller IP.
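A quick way to see what it is currently set to (assuming the same /etc/mysql/my.cnf path used earlier in this post):
grep bind-address /etc/mysql/my.cnf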
I made the changes in the /etc/nova/nova.conf file as given in the steps, and then nova-manage db sync doesn't work and shows:
usage: nova-manage [-h] [--config-dir DIR] [--config-file PATH] [--debug]
[--log-config-append PATH] [--log-date-format DATE_FORMAT]
[--log-dir LOG_DIR] [--log-file PATH] [--log-format FORMAT]
[--nodebug] [--nouse-syslog] [--nouse-syslog-rfc-format]
[--noverbose] [--syslog-log-facility SYSLOG_LOG_FACILITY]
[--use-syslog] [--use-syslog-rfc-format] [--verbose]
[--version] [--remote_debug-host REMOTE_DEBUG_HOST]
[--remote_debug-port REMOTE_DEBUG_PORT]
{version,bash-completion,shell,logs,db,vm,agent,host,flavor,vpn,floating,project,account,network,service,cell,fixed}
...
nova-manage: error: argument category: invalid choice: 'db_sync' (choose from 'version', 'bash-completion', 'shell', 'logs', 'db', 'vm', 'agent', 'host', 'flavor', 'vpn', 'floating', 'project', 'account', 'network', 'service', 'cell', 'fixed')
And then I tried:
nova-manage service list
2014-09-16 12:00:28.195 15926 WARNING nova.openstack.common.db.sqlalchemy.session [-] SQL connection failed. 10 attempts left.
Thanks, but after the Keystone work is done and before verifying (keystone token-get,
keystone user-list), run "unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT".
Thanks for this tutorial.
Hi all,
My Neutron ML2 setup with GRE is working fine; instances are getting IPs and everything else is fine, except that the metadata service is not working. Does anyone have an idea what to do about this?
Thanks a lot Johnson.
After completing the whole installation, the Horizon dashboard throws the error mentioned below.
*****************************
Something went wrong!
An unexpected error has occurred. Try refreshing the page. If that doesn’t help, contact your local administrator.
******************************
Could you please help me resolve this issue? Thanks in advance.
Hi,
neutron agent-list returns "authentication required".
Kindly assist.
root@stack:/# keystone tenant-create --name=admin --description="Admin Tenant"
Invalid OpenStack Identity credentials.
Solved by changing the admin token in keystone.conf to "ADMIN".
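For reference, the setting in question lives under [DEFAULT] in /etc/keystone/keystone.conf, roughly like this (restart the keystone service after changing it):
[DEFAULT]
admin_token = ADMIN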
Hello Roman, I am installing OpenStack on Ubuntu 14.04 and got the same "Invalid OpenStack Identity credentials" error. I changed the token to ADMIN, re-synced the database and restarted the keystone service, but I am still getting the same error. Can you please help?
nova-compute is not showing in nova-manage service list.
I checked nova.conf, and it seems proper.
root@icehouse:~# nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert icehouse internal enabled 🙂 2014-07-25 11:17:01
nova-scheduler icehouse internal enabled 🙂 2014-07-25 11:17:02
nova-consoleauth icehouse internal enabled 🙂 2014-07-25 11:17:02
nova-console icehouse internal enabled 🙂 2014-07-25 11:17:02
nova-conductor icehouse internal enabled 🙂 2014-07-25 11:17:02
Regards,
RHK
Hi, This was very helpful. Can you please also do a tutorial of icehouse installation on multi-nodes for Ubuntu 12.04 LTS?
neutron agent-list returns "Connection to neutron failed: Maximum attempts reached". This is a single-node, single-NIC setup; https://fosskb.wordpress.com/2014/06/10/managing-openstack-internaldataexternal-network-in-one-interface/ was referenced for the single-NIC setup. Can you help fix this?
Possible causes:
1. The machine from which you are accessing cannot reach your OpenStack setup.
2. The neutron server may not be running and could have crashed for some reason; you can look into the log file and provide any error messages you see.
3. The bridge interfaces may not be in promiscuous mode.
Please provide the result of:
1. 'service neutron-server status'
2. 'ps -ef | grep neutron-server'
3. 'netstat -tulnp | grep 9696'
4. 'ip -d addr'
5. 'ip -d link show'
6. 'ovs-vsctl show'
7. 'ovs-ofctl dump-flows br-int'
8. 'ovs-ofctl dump-flows br-eth1'
9. 'ovs-ofctl show br-int'
10. 'ovs-ofctl show br-eth1'
Hi,
On a multi-node setup with the ML2 plugin the network is not coming up. May I know the VLAN settings for multi-node?
Below is my config file as it is. Could you please describe "network is not getting up" more specifically? Is it that the instances are not reachable from each other and the router? In that case, make sure you have set all the physical interfaces involved to promiscuous mode ('ip link set <interface> promisc on'). If the issue is specific to multi-node, then make sure the switch ports you are using to connect the nodes are in trunk mode with all the required VLANs allowed and created.
[ml2]
type_drivers=flat,vlan
tenant_network_types=vlan,flat
mechanism_drivers=openvswitch
[ml2_type_flat]
flat_networks=External
[ml2_type_vlan]
network_vlan_ranges=Intnet1:100:200
[ml2_type_gre]
[ml2_type_vxlan]
[securitygroup]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True
[ovs]
bridge_mappings=External:br-ex,Intnet1:br-eth1
Right now I am getting the error "Error: Unable to retrieve floating IP addresses" after following your post. Great one, BTW!
BTW, I only have an eth0 and a wlan0, no eth1 (since it's an old laptop), but I tried this anyway:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth0
ovs-vsctl add-port br-eth0 eth1
Here's the output of my ifconfig if that's of any help. I would appreciate any alternate suggestions to get this working.
br-eth0 Link encap:Ethernet HWaddr 9a:67:b4:3e:26:48
inet6 addr: fe80::bf:46ff:feec:c367/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:648 (648.0 B)
br-int Link encap:Ethernet HWaddr 26:3c:86:19:0f:45
inet6 addr: fe80::c70:2fff:fe10:28ac/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:648 (648.0 B)
eth0 Link encap:Ethernet HWaddr 3c:97:0e:b3:8a:96
inet addr:10.10.1.71 Bcast:10.10.1.255 Mask:255.255.255.0
inet6 addr: fe80::3e97:eff:feb3:8a96/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7155 errors:0 dropped:0 overruns:0 frame:0
TX packets:3512 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:614170 (614.1 KB) TX bytes:650776 (650.7 KB)
Interrupt:20 Memory:f3a00000-f3a20000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:730537 errors:0 dropped:0 overruns:0 frame:0
TX packets:730537 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:162798173 (162.7 MB) TX bytes:162798173 (162.7 MB)
ovs-system Link encap:Ethernet HWaddr ae:d2:78:47:95:7c
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
virbr0 Link encap:Ethernet HWaddr a2:80:63:95:c1:c7
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
wlan0 Link encap:Ethernet HWaddr 6c:88:14:f5:17:88
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Hi, Please have a look at https://fosskb.wordpress.com/2014/06/10/managing-openstack-internaldataexternal-network-in-one-interface.
Regarding the floating IP address error: when do you get it? While doing floatingip-list? Also, can you post a few relevant lines from the neutron-server log?
In the bridging section you have given the following steps:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 ethx
where x = 1, 2, 3, etc., as per the Ethernet interface.
Can you kindly tell how many interfaces we should have? I am confused because eth1 is usually the second NIC.
You need to attach one interface here. It is usually eth1.
Thanks Johnson, after adding those values my instance is now provisioning and I can get the console working properly, except for the IP. The virtual machine instances come up without an IP.
Any suggestions?
Hi,
What image are you using to provision the instances? If it is the "cirros" image that I mentioned in the post, can you try the following command in the CLI of the instance and check whether it gets a DHCP lease?
sudo udhcpc
I used your image and even the official Ubuntu OpenStack image in qcow format. As I checked, udhcpc on Cirros is stuck in a continuous loop of "Sending discovery …".
The nova.conf should have the Neutron and network details under [DEFAULT], otherwise it causes an error.
I am on a virtual machine: my eth0 is on NAT, through which I access the internet, and my second NIC eth1 is on a host-only network with the IP 10.0.0.1, which is what I used for the OpenStack configuration overall.
For the bridge, I have used this:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
This is my nova.conf
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 10.0.0.1
my_ip = 10.0.0.1
vncserver_listen = 10.0.0.1
vncserver_proxyclient_address = 10.0.0.1
novncproxy_base_url=http://10.0.0.1:6080/vnc_auto.html
glance_host = 10.0.0.1
auth_strategy=keystone
[database]
connection = mysql://nova:nova_dbpass@10.0.0.1/nova
[keystone_authtoken]
auth_uri = http://10.0.0.1:5000
auth_host = 10.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova_pass
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.0.0.1:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron_pass
neutron_admin_auth_url=http://10.0.0.1:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virl.firewall.NoopFirewallDriver
security_group_api=neutron
See if you can find anything wrong with it
Hi Martin
Thanks for your comment. I have rectified the entry now.
Thanks
This is a very good piece of information. I have set up the environment, but while provisioning an instance it still fails with Error 500; I'm quite confused about where and what I am doing wrong.
Hi
Sorry, the nova.conf had some errors. The file has now been rectified.
Thanks
Thanks Johnson, my instance problem is solved after updating nova.conf. A new problem arrived: while launching an instance, I get the following error
nova.compute.manager [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6] VirtualInterfaceCreateException: Virtual Interface creation failed
I googled it and found some changes to be made in neutron.conf, and when I checked your article I believe you have also updated those changes. I have now updated my neutron.conf, restarted the Neutron services and launched another instance, but unfortunately I am still facing the same "Virtual Interface creation failed" error.
Any guess on what might be wrong now?
Can you try adding the following lines under the [DEFAULT] section, like below?
[DEFAULT]
…
vif_plugging_is_fatal: false
vif_plugging_timeout: 0
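After adding these options, the affected Nova services need a restart for the change to take effect; at minimum the compute service, e.g.:
service nova-compute restart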