
OpenStack Juno on Ubuntu 14.04 LTS and 14.10 – Single Machine Setup

Install Ubuntu with a partitioning scheme that suits your requirements. Note: run all of the following commands as the super-user. We assume that the IP address of the single machine is 10.0.0.1.

Configure the repositories and update the packages.

This step is needed only if the OS is Ubuntu 14.04 LTS; you can skip the repository configuration on Ubuntu 14.10.

apt-get install ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \
"trusty-updates/juno main" > /etc/apt/sources.list.d/cloudarchive-juno.list
apt-get update && apt-get -y upgrade

Note: A reboot is needed only if the kernel was updated.

reboot 
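Whether a reboot is actually pending can be checked first; a small sketch using the marker file that Ubuntu's update-notifier maintains:

```shell
# /var/run/reboot-required is created when an installed package
# (typically a new kernel) requests a reboot
if [ -f /var/run/reboot-required ]; then
    echo "reboot required"
else
    echo "no reboot needed"
fi
```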

Support packages

RabbitMQ server

apt-get install -y rabbitmq-server

Change the password for the user 'guest' in rabbitmq-server. This guide sets it to 'rabbit', which is the value used later as rabbit_password in the service configs.

rabbitmqctl change_password guest rabbit

MySQL server

Install MySQL server and related software

apt-get install -y mysql-server python-mysqldb

Edit /etc/mysql/my.cnf so that the [mysqld] section contains the following lines

[mysqld]
...
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

Restart MySQL service

service mysql restart

Other Support Packages

apt-get install -y ntp vlan bridge-utils

Edit the following lines in the file /etc/sysctl.conf

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Load the values

sysctl -p
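The two steps above can also be scripted idempotently, so re-running the setup does not duplicate lines. A sketch (it writes to a scratch file; point CONF at /etc/sysctl.conf on the real host):

```shell
# Append each setting only if it is not already present verbatim
CONF=$(mktemp)                       # use CONF=/etc/sysctl.conf on the real host
for kv in "net.ipv4.ip_forward=1" \
          "net.ipv4.conf.all.rp_filter=0" \
          "net.ipv4.conf.default.rp_filter=0"; do
    grep -qxF "$kv" "$CONF" || echo "$kv" >> "$CONF"
done
cat "$CONF"          # each line appears exactly once, even after repeated runs
# sysctl -p "$CONF"  # load the values (requires root)
```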

Keystone

Install keystone

apt-get install -y keystone

Create mysql database named keystone and add credentials

mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone_dbpass';
mysql> quit

Edit the file /etc/keystone/keystone.conf. Comment out the following line

connection = sqlite:////var/lib/keystone/keystone.db

and add the line

connection = mysql://keystone:keystone_dbpass@10.0.0.1/keystone
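The URL follows SQLAlchemy's mysql://user:password@host/dbname form. An illustrative sketch that assembles it from shell variables (the user and password come from the GRANT statement above):

```shell
# Illustrative only: build the SQLAlchemy connection URL from its parts
DB_USER=keystone
DB_PASS=keystone_dbpass    # password chosen in the GRANT statement earlier
DB_HOST=10.0.0.1
DB_NAME=keystone
echo "connection = mysql://${DB_USER}:${DB_PASS}@${DB_HOST}/${DB_NAME}"
# prints: connection = mysql://keystone:keystone_dbpass@10.0.0.1/keystone
```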

Restart the keystone service and sync the database

service keystone restart
keystone-manage db_sync

Export the service token and endpoint so that the initial keystone commands can run without authentication. The token must match the admin_token value in /etc/keystone/keystone.conf (ADMIN is the packaged default).

export OS_SERVICE_TOKEN=ADMIN
export OS_SERVICE_ENDPOINT=http://10.0.0.1:35357/v2.0

Create the admin user, admin tenant, admin role and service tenant, and add the admin user to the admin tenant with the admin role.

keystone tenant-create --name=admin --description="Admin Tenant"
keystone tenant-create --name=service --description="Service Tenant"
keystone user-create --name=admin --pass=ADMIN --email=admin@example.com
keystone role-create --name=admin
keystone user-role-add --user=admin --tenant=admin --role=admin

Create keystone service

keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"

Create keystone endpoint

keystone endpoint-create --service=keystone --publicurl=http://10.0.0.1:5000/v2.0 --internalurl=http://10.0.0.1:5000/v2.0 --adminurl=http://10.0.0.1:35357/v2.0

Unset the exported values

unset OS_SERVICE_TOKEN
unset OS_SERVICE_ENDPOINT

Create a file named creds and add the following lines

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.0.0.1:35357/v2.0

Source the file

source creds
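Equivalently, the file can be written and sourced in one go; a sketch using a quoted heredoc (no variable expansion inside):

```shell
# Write the creds file and load it into the current shell
cat > creds <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.0.0.1:35357/v2.0
EOF
. ./creds
echo "$OS_USERNAME"    # prints: admin
```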

Test the keystone setup

keystone token-get
keystone user-list

Glance (Image Store)

Install Glance

apt-get install -y glance

Create database and credentials for Glance

mysql -u root -p
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance_dbpass';
quit;

Create glance related keystone entries

keystone user-create --name=glance --pass=glance_pass --email=glance@example.com
keystone user-role-add --user=glance --tenant=service --role=admin
keystone service-create --name=glance --type=image --description="Glance Image Service"
keystone endpoint-create --service=glance --publicurl=http://10.0.0.1:9292 --internalurl=http://10.0.0.1:9292 --adminurl=http://10.0.0.1:9292

Edit /etc/glance/glance-api.conf: comment out the sqlite_db line and set the following options

# sqlite_db = /var/lib/glance/glance.sqlite
connection = mysql://glance:glance_dbpass@10.0.0.1/glance

[keystone_authtoken]
auth_uri = http://10.0.0.1:5000/v2.0
identity_uri = http://10.0.0.1:35357
admin_tenant_name = service
admin_user = glance
admin_password = glance_pass

[paste_deploy]
flavor = keystone

Edit /etc/glance/glance-registry.conf: comment out the sqlite_db line and set the following options

# sqlite_db = /var/lib/glance/glance.sqlite
connection = mysql://glance:glance_dbpass@10.0.0.1/glance

[keystone_authtoken]
auth_uri = http://10.0.0.1:5000/v2.0
identity_uri = http://10.0.0.1:35357
admin_tenant_name = service
admin_user = glance
admin_password = glance_pass

[paste_deploy]
flavor = keystone

Restart Glance services

service glance-api restart
service glance-registry restart

Sync the database

glance-manage db_sync

Register a pre-bundled CirrOS image for testing

glance image-create --name Cirros --is-public true --container-format bare --disk-format qcow2 \
  --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance image-list

Nova (Compute)

Install the Nova services

apt-get install -y nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient nova-compute nova-console

Create database and credentials for Nova

mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova_dbpass';
mysql> quit

Create Keystone entries for Nova

keystone user-create --name=nova --pass=nova_pass --email=nova@example.com
keystone user-role-add --user=nova --tenant=service --role=admin
keystone service-create --name=nova --type=compute --description="OpenStack Compute"
keystone endpoint-create --service=nova --publicurl=http://10.0.0.1:8774/v2/%\(tenant_id\)s --internalurl=http://10.0.0.1:8774/v2/%\(tenant_id\)s --adminurl=http://10.0.0.1:8774/v2/%\(tenant_id\)s

Open /etc/nova/nova.conf and edit the file as follows

[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 127.0.0.1
rabbit_password = rabbit
my_ip = 10.0.0.1
vncserver_listen = 10.0.0.1
vncserver_proxyclient_address = 10.0.0.1
novncproxy_base_url=http://10.0.0.1:6080/vnc_auto.html
glance_host = 10.0.0.1
auth_strategy=keystone

network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.0.0.1:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron_pass
neutron_admin_auth_url=http://10.0.0.1:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

vif_plugging_is_fatal = false
vif_plugging_timeout = 0

[database]
connection = mysql://nova:nova_dbpass@10.0.0.1/nova

[keystone_authtoken]
auth_uri = http://10.0.0.1:5000
auth_host = 10.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova_pass

[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = openstack

Sync the Nova database

nova-manage db sync

Restart all nova services

service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
service nova-compute restart
service nova-console restart

Test the Nova installation using the following command

nova-manage service list

The output should be something like this

Binary           Host                     Zone             Status     State Updated_At
nova-consoleauth ubuntu                   internal         enabled    :-)   2014-04-19 08:55:13
nova-conductor   ubuntu                   internal         enabled    :-)   2014-04-19 08:55:14
nova-cert        ubuntu                   internal         enabled    :-)   2014-04-19 08:55:13
nova-scheduler   ubuntu                   internal         enabled    :-)   2014-04-19 08:55:13
nova-compute     ubuntu                   nova             enabled    :-)   2014-04-19 08:55:14
nova-console     ubuntu                   internal         enabled    :-)   2014-04-19 08:55:14

Also run the following command to check that nova can authenticate with the keystone server

nova list

Neutron (Networking service)

Install the Neutron services

apt-get install -y neutron-server neutron-plugin-openvswitch neutron-plugin-openvswitch-agent neutron-common neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent openvswitch-switch

Create database and credentials for Neutron

mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron_dbpass';
quit;

Create Keystone entries for Neutron

keystone user-create --name=neutron --pass=neutron_pass --email=neutron@example.com
keystone service-create --name=neutron --type=network --description="OpenStack Networking"
keystone user-role-add --user=neutron --tenant=service --role=admin
keystone endpoint-create --service=neutron --publicurl http://10.0.0.1:9696 --adminurl http://10.0.0.1:9696  --internalurl http://10.0.0.1:9696

Edit /etc/neutron/neutron.conf. Note that nova_admin_tenant_id expects the ID of the service tenant, not its name; look it up with keystone tenant-list and substitute the real ID below.

[DEFAULT]
lock_path=/var/lock/neutron
core_plugin = ml2
notification_driver=neutron.openstack.common.notifier.rpc_notifier
verbose=True
rpc_backend = rabbit
rabbit_host = 127.0.0.1
rabbit_password = rabbit
service_plugins=router
allow_overlapping_ips=True
auth_strategy=keystone
neutron_metadata_proxy_shared_secret=openstack
service_neutron_metadata_proxy=True
nova_admin_password=nova_pass
notify_nova_on_port_data_changes=True
notify_nova_on_port_status_changes=True
nova_admin_auth_url=http://10.0.0.1:35357/v2.0
nova_admin_tenant_id=service
nova_url=http://10.0.0.1:8774/v2
nova_admin_username=nova


[keystone_authtoken]
auth_host = 10.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_pass
signing_dir = $state_path/keystone-signing

[database]
connection = mysql://neutron:neutron_dbpass@10.0.0.1/neutron

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

Open /etc/neutron/plugins/ml2/ml2_conf.ini and make the following changes

[ml2]
type_drivers=flat,vlan
tenant_network_types=vlan,flat
mechanism_drivers=openvswitch
[ml2_type_flat]
flat_networks=External
[ml2_type_vlan]
network_vlan_ranges=Intnet1:100:200
[ml2_type_gre]
[ml2_type_vxlan]
[securitygroup]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True
[ovs]
bridge_mappings=External:br-ex,Intnet1:br-eth1
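Each bridge_mappings entry is a comma-separated <physical network>:<bridge> pair; the following snippet (illustration only) shows how the string decomposes:

```shell
# Decompose the bridge_mappings value used above (illustration only)
MAPPINGS="External:br-ex,Intnet1:br-eth1"
(
    IFS=','                  # split the list on commas
    for m in $MAPPINGS; do
        echo "physical network ${m%%:*} -> bridge ${m#*:}"
    done
)
```

This prints one line per mapping, e.g. "physical network External -> bridge br-ex".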

We have created two physical networks: a flat network named External and a VLAN network named Intnet1 with VLAN IDs ranging from 100 to 200. External is mapped to the bridge br-ex and Intnet1 to br-eth1. Now create the bridges.
Note: From Ubuntu 14.04 LTS onwards the Ethernet interfaces may have names like "p4p1" or "em1"; use the appropriate interface names below instead of "eth1" and "eth2".

ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth1
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-eth1 eth1
ovs-vsctl add-port br-ex eth2

According to our setup, all traffic belonging to the External network will be bridged to eth2 and all traffic of Intnet1 will be bridged to eth1. If you have only one interface (eth0) and would like to use it for all networking, have a look at https://fosskb.in/2014/06/10/managing-openstack-internaldataexternal-network-in-one-interface.

Edit /etc/neutron/metadata_agent.ini to look like this

[DEFAULT]
auth_url = http://10.0.0.1:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron_pass
metadata_proxy_shared_secret = openstack

Edit /etc/neutron/dhcp_agent.ini to look like this

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True

Edit /etc/neutron/l3_agent.ini to look like this

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True

Sync the database

neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno

Restart all Neutron services

service neutron-server restart
service neutron-plugin-openvswitch-agent restart
service neutron-metadata-agent restart
service neutron-dhcp-agent restart
service neutron-l3-agent restart

Check if the services are running. Run the following command

neutron agent-list

The output should be like

+--------------------------------------+--------------------+--------+-------+----------------+
| id                                   | agent_type         | host   | alive | admin_state_up |
+--------------------------------------+--------------------+--------+-------+----------------+
| 01a5e70c-324a-4183-9652-6cc0e5c98499 | Metadata agent     | ubuntu | :-)   | True           |
| 17b9440b-50eb-48b7-80a8-a5bbabc47805 | DHCP agent         | ubuntu | :-)   | True           |
| c30869f2-aaca-4118-829d-a28c63a27aa4 | L3 agent           | ubuntu | :-)   | True           |
| f846440e-4ca6-4120-abe1-ffddaf1ab555 | Open vSwitch agent | ubuntu | :-)   | True           |
+--------------------------------------+--------------------+--------+-------+----------------+

Users who want to know what happens under the hood can read

  1. How neutron-openvswitch-agent provides L2 connectivity between Instances, DHCP servers and routers
  2. How neutron-l3-agent provides services like routing, natting, floatingIP and security groups
  3. See more of Linux networking capabilities

Cinder (Block Storage)

Install Cinder services

apt-get install -y cinder-api cinder-scheduler cinder-volume lvm2 open-iscsi-utils open-iscsi iscsitarget sysfsutils

Create database and credentials for Cinder

mysql -u root -p
mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder_dbpass';
mysql> quit;

Create Cinder related keystone entries

keystone user-create --name=cinder --pass=cinder_pass --email=cinder@example.com
keystone user-role-add --user=cinder --tenant=service --role=admin
keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
keystone endpoint-create --service=cinder --publicurl=http://10.0.0.1:8776/v1/%\(tenant_id\)s --internalurl=http://10.0.0.1:8776/v1/%\(tenant_id\)s --adminurl=http://10.0.0.1:8776/v1/%\(tenant_id\)s
keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
keystone endpoint-create --service=cinderv2 --publicurl=http://10.0.0.1:8776/v2/%\(tenant_id\)s --internalurl=http://10.0.0.1:8776/v2/%\(tenant_id\)s --adminurl=http://10.0.0.1:8776/v2/%\(tenant_id\)s
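A missing trailing "s" in the %(tenant_id)s placeholder is easy to overlook and shows up later as "Malformed endpoint" errors in /var/log/keystone/keystone.log. A hypothetical helper to sanity-check a URL before registering it:

```shell
# Hypothetical check: fail when a tenant placeholder is missing its trailing s
check_endpoint() {
    case "$1" in
        *'%(tenant_id)s'*) return 0 ;;  # placeholder present and complete
        *'%(tenant_id)'*)  return 1 ;;  # placeholder truncated
        *)                 return 0 ;;  # no placeholder at all is fine
    esac
}
check_endpoint 'http://10.0.0.1:8776/v2/%(tenant_id)s' && echo valid
check_endpoint 'http://10.0.0.1:8776/v2/%(tenant_id)'  || echo malformed
```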

Edit /etc/cinder/cinder.conf and replace all the lines with the following.

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = 127.0.0.1
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = rabbit
glance_host = 10.0.0.1

[database]
connection = mysql://cinder:cinder_dbpass@10.0.0.1/cinder

[keystone_authtoken]
auth_uri = http://10.0.0.1:5000
auth_host = 10.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = cinder_pass

Sync the database

cinder-manage db sync

Create an LVM physical volume on the spare disk (/dev/sdb in this example)

pvcreate /dev/sdb

Create volume group named “cinder-volumes”

vgcreate cinder-volumes /dev/sdb

Restart all the Cinder services

service cinder-scheduler restart
service cinder-api restart
service cinder-volume restart
service tgt restart

Create a volume to test the setup

cinder create --display-name myVolume 1

List the volume created

cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| e19242b5-8caf-4093-9b81-96d6bb1f7000 | available |   myVolume   |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Horizon (OpenStack Dashboard)

apt-get install -y openstack-dashboard

After installing, log in using the following credentials

URL     : http://10.0.0.1/horizon
Username: admin
Password: ADMIN

For an automated OpenStack install, please check OpenStack using SaltStack.

Use the following link to get started with the first instance on OpenStack.

Procedure to get started with the first instance on OpenStack


58 thoughts on “OpenStack Juno on Ubuntu 14.04 LTS and 14.10 – Single Machine Setup

  1. Semi-amusing story I record here for Google’s benefit: when I installed Keystone, it barfed like so: Setting up keystone (1:2014.2.3-0ubuntu1~cloud0) …
    Traceback (most recent call last):
    File “/usr/local/bin/keystone-manage”, line 4, in
    __import__(‘pkg_resources’).require(‘keystone==2014.2.0.dev170.g2e49770’)

    Root cause was some remnants of an earlier devstack install on my HD. apt-get purge keystone followed by a search and destroy misson in the file system (rm -rf /etc/keystone /var/lib/keystone /usr/bin/keystone* /usr/local/bin/keystone* ) and then a reattempt fixed it nicely.

  2. Hi i’m trying this and glance’s section i get the following:
    vedams@rekha:~$ glance image-create –name “cirros” –file /home/vedams/cirros-0.3.4-x86_64-disk.img –disk-format qcow2 –container-format bare –is-public True –progress

    Authentication failure: The request you have made requires authentication. (HTTP 401)

    help me please, thanks!

    1. @rekha, @sureshbabu: looks like you are not authenticating against Keystone. Try a service keystone restart and then test it with a keystone token-get. If that works you should be able to try the glance request again.

  3. Hi I am facing problem while running neutron agent-list,
    root@servernum:~# neutron agent-list
    Request Failed: internal server error while processing your request.

    Can any one help me out?
    root@servernum:~# vi /var/log/neutron/neutron-server.log

    2015-08-27 06:29:01.665 21455 ERROR oslo_messaging.rpc.dispatcher [req-3315f422-e7f8-426f-b2a6-8a13dde96216 ] Exception during message handling: (OperationalError) (1054, “Unknown column ‘agents.load’ in ‘field list'”) ‘SELECT agents.id AS agents_id, agents.agent_type AS agents_agent_type, agents.`binary` AS agents_binary, agents.topic AS agents_topic, agents.host AS agents_host, agents.admin_state_up AS agents_admin_state_up, agents.created_at AS agents_created_at, agents.started_at AS agents_started_at, agents.heartbeat_timestamp AS agents_heartbeat_timestamp, agents.description AS agents_description, agents.configurations AS agents_configurations, agents.`load` AS agents_load \nFROM agents \nW

  4. Please help me to solve the problem……
    > keystone user-create –name=admin –pass=admin –email=admin@localhost
    >/usr/local/lib/python2.7/dist-packages/keystoneclient/shell.py:64: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
    ‘python-keystoneclient.’, DeprecationWarning)
    WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
    Internal Server Error (HTTP 500)

  5. Hi ,

    I am getting the below error when launching the instance so please help me to fix this issue.

    Error: Failed to launch instance “vm1”: Please try again later [Error: Build of instance f1e9ec07-b945-4013-be9d-efa7e30ed8fa aborted: Failure prepping block device.].

    Error logs captured from nova-compute.log.

    2015-06-28 11:47:30.756 11380 TRACE nova.compute.manager [instance: e08e0496-1393-460a-aa6e-009c8f8eda15] block_device_mapping) as resources:
    2015-06-28 11:47:30.756 11380 TRACE nova.compute.manager [instance: e08e0496-1393-460a-aa6e-009c8f8eda15] File “/usr/lib/python2.7/contextlib.py”, line 17, in __enter__
    2015-06-28 11:47:30.756 11380 TRACE nova.compute.manager [instance: e08e0496-1393-460a-aa6e-009c8f8eda15] return self.gen.next()
    2015-06-28 11:47:30.756 11380 TRACE nova.compute.manager [instance: e08e0496-1393-460a-aa6e-009c8f8eda15] File “/usr/lib/python2.7/dist-packages/nova/compute/manager.py”, line 2264, in _build_resources
    2015-06-28 11:47:30.756 11380 TRACE nova.compute.manager [instance: e08e0496-1393-460a-aa6e-009c8f8eda15] reason=msg)
    2015-06-28 11:47:30.756 11380 TRACE nova.compute.manager [instance: e08e0496-1393-460a-aa6e-009c8f8eda15] BuildAbortException: Build of instance e08e0496-1393-460a-aa6e-009c8f8eda15 aborted: Failure prepping block device.
    2015-06-28 11:47:30.756 11380 TRACE nova.compute.manager [instance: e08e0496-1393-460a-aa6e-009c8f8eda15]
    2015-06-28 11:47:30.848 11380 INFO nova.network.neutronv2.api [-] [instance: e08e0496-1393-460a-aa6e-009c8f8eda15] Unable to reset device ID for port None
    2015-06-28 11:47:37.511 11380 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
    2015-06-28 11:47:38.561 11380 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 3952, total allocated virtual ram (MB): 1536
    2015-06-28 11:47:38.566 11380 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 14
    2015-06-28 11:47:38.569 11380 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 2, total allocated vcpus: 0
    2015-06-28 11:47:38.570 11380 AUDIT nova.compute.resource_tracker [-] PCI stats: []

    2015-06-28 11:47:38.740 11380 INFO nova.scheduler.client.report [-] Compute_service record updated for (‘controller’, ‘controller’)
    2015-06-28 11:47:38.741 11380 INFO nova.compute.resource_tracker [-] Compute_service record updated for controller:controller
    2015-06-28 11:48:39.512 11380 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
    2015-06-28 11:48:39.890 11380 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 3952, total allocated virtual ram (MB): 1536
    2015-06-28 11:48:39.892 11380 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 14
    2015-06-28 11:48:39.893 11380 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 2, total allocated vcpus: 0
    2015-06-28 11:48:39.895 11380 AUDIT nova.compute.resource_tracker [-] PCI stats: []
    2015-06-28 11:48:39.898 11380 INFO nova.compute.resource_tracker [-] Compute_service record updated for controller:controller

    Below is the nova.conf and compute.conf configuration files.

    [DEFAULT]
    #dhcpbridge_flagfile=/etc/nova/nova.conf
    #dhcpbridge=/usr/bin/nova-dhcpbridge
    logdir=/var/log/nova
    state_path=/var/lib/nova
    lock_path=/var/lock/nova
    force_dhcp_release=True
    iscsi_helper=tgtadm
    libvirt_use_virtio_for_bridges=True
    verbose=True
    connection_type=libvirt
    ec2_private_dns_show_ip=True
    api_paste_config=/etc/nova/api-paste.ini
    enabled_apis=ec2,osapi_compute,metadata
    #libvirt_type=qemu
    #libvirt_type=qemu

    root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
    verbose=True
    rpc_backend = nova.rpc.impl_kombu
    rabbit_host = 127.0.0.1
    rabbit_password = rabbit
    my_ip = 10.0.0.1
    vncserver_listen = 10.0.0.1
    vncserver_proxyclient_address = 10.0.0.1
    novncproxy_base_url=http://10.0.0.1:6080/vnc_auto.html
    glance_host = 10.0.0.1
    auth_strategy=keystone

    network_api_class=nova.network.neutronv2.api.API
    neutron_url=http://10.0.0.1:9696
    neutron_auth_strategy=keystone
    neutron_admin_tenant_name=service
    neutron_admin_username=neutron
    neutron_admin_password=neutron_pass
    neutron_metadata_proxy_shared_secret=openstack
    neutron_admin_auth_url=http://10.0.0.1:35357/v2.0
    linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
    firewall_driver=nova.virt.firewall.NoopFirewallDriver
    security_group_api=neutron

    vif_plugging_is_fatal: false
    vif_plugging_timeout: 0

    [database]
    connection = mysql://nova:nova_dbpass@10.0.0.1/nova

    [keystone_authtoken]
    auth_uri = http://10.0.0.1:5000
    auth_host = 10.0.0.1
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = nova
    admin_password = nova_pass

    ============================

    root@controller:~# cat /etc/nova/nova-compute.conf
    [DEFAULT]
    compute_driver=libvirt.LibvirtDriver
    [libvirt]
    virt_type = qemu/kvm
    root@controller:~#

    ========

    Thanks

  6. In order to make nova able to start a VM the following is also needed in nova-compute.conf:
    virt_type=qemu

    1. Hi,

      ‘virt_type=qemu’ is needed only if your processor is not VT enabled. If your processor is VT enabled you can use the original entry i.e. ‘virt_type=kvm’.

  7. ovs-vsctl add-br br-int
    It says bridge already exists…is it fine or not ?
    ovs-vsctl add-port br-ex eth2
    I have only two interface eth0 and eth1: eth0 is the primary interface which i am using to connect to internet/ router

    Do i need three interfaces ? or i should follow single interface eth0 guide ?

  8. Hello,
    Please update your tutorial. In order to get well in Ubuntu 14.04 you forgot one command (you just have 2 apt-get, but it should be 3):
    apt-­get update ; apt-­get upgrade ; apt­-get dist­-upgrade

  9. Hi, thank you for the tutorial, I am doning this on VirtualBox VM ands used 2 VBox virtual interfaces bridged to a real physical interfaces. I’ve done everything until the end of Neutron. When set up the bridges in OVS I cannot connect/ping to the two-vbox-bridged-interfaces: 🙂
    10.0.0.70 on eth0
    10.0.0.71 on eth1
    Note: when I reboot my VBOX VM it is pingable until Virtual Switch is starting while booting then I lose connectivity.
    Any help? 🙂

    1. I have removed the port added from:
      ovs-vsctl add-port br-ex eth2 (eth1 in my case)
      and it works, but where were the problem?

      1. It worked (i.e. pingable from outside VBox but now cannot create networks in horizon. So please can anyone help?

  10. please help me on this:
    root@openstack:/home/openstack# keystone tenant-create –name=admin –description=”Admin Tenant”
    Gateway Timeout (HTTP 504)
    root@openstack:/home/openstack# keystone tenant-create –name=service –description=”Service Tenant”
    Gateway Timeout (HTTP 504)

    help me on this

      1. 2015-03-31 13:20:44.397 16262 WARNING keystone.openstack.common.versionutils [-] Deprecated: keystone.middleware.core.XmlBodyMiddleware is deprecated as of Icehouse in favor of support for “application/json” only and may be removed in Kilo.
        2015-03-31 13:20:44.413 16262 WARNING keystone.openstack.common.versionutils [-] Deprecated: keystone.contrib.revoke.backends.kvs is deprecated as of Juno in favor of keystone.contrib.revoke.backends.sql and may be removed in Kilo.
        2015-03-31 13:24:41.721 16523 WARNING keystone.openstack.common.versionutils [-] Deprecated: keystone.middleware.core.XmlBodyMiddleware is deprecated as of Icehouse in favor of support for “application/json” only and may be removed in Kilo.
        2015-03-31 13:24:41.736 16523 WARNING keystone.openstack.common.versionutils [-] Deprecated: keystone.contrib.revoke.backends.kvs is deprecated as of Juno in favor of keystone.contrib.revoke.backends.sql and may be removed in Kilo.

      1. Hi, I FOUND THE SOLUTION!!! 🙂 Tomorrow I will try to write a step by step what you need to change the network to work. 🙂 My network work in only one IP and filter MAC address 🙂 See you..

  11. Hi, I have another question :] how to configure network in openstack? I create sample Network and set Subnet and create image Ubuntu. The first problem is login to ubuntu. I found one bug 🙂 auth_region = RegionOne must be auth_region = regionOne, because “keystone endpoint-list” show regionOne in region cool. Next i create keys and have problem how to connect to instance. Always I show in logs:
    2015-03-03 23:51:06,807 –
    DataSourceEc2.py[CRITICAL]:
    Giving up on md from [‘http://169.254.169.254/2009-04-04/meta-data/instance-id’] after 124 seconds

    2015-03-03 23:51:06,816 –
    url_helper.py[WARNING]:
    Calling ‘http://172.16.0.3//latest/meta-data/instance-id’ failed [0/120s]:
    request error [HTTPConnectionPool(host=’172.16.0.3′, port=80):
    Max retries exceeded with url: //latest/meta-data/instance-id (Caused by : [Errno 111] Connection refused)]

    2015-03-03 23:53:06,066 –
    url_helper.py[WARNING]:
    Calling ‘http://172.16.0.3//latest/meta-data/instance-id’ failed [119/120s]:
    request error [HTTPConnectionPool(host=172.16.0.3′, port=80):
    Max retries exceeded with url: //latest/meta-data/instance-id (Caused by : [Errno 115] Operation now in progress)]

    2015-03-03 23:53:13,074 –
    DataSourceCloudStack.py[CRITICAL]:
    Giving up on waiting for the metadata from [‘http://172.16.0.3//latest/meta-data/instance-id’] after 126 seconds

    cc_final_message.py[WARNING]: Used fallback datasource

    OK. When I create instance I found checkbox “Configuration Drive”. When I create instance and run instance error disappeared. Bun I have problem when i login to instance and run ping google.com. It does not work. I unlock all port in Access & Security -> default -> Manage Rules and Add:
    Egress IPv4 TCP 1 – 65535 0.0.0.0/0 (CIDR)
    Ingress IPv4 TCP 1 – 65535 0.0.0.0/0 (CIDR)
    but it still does not work. What do I need to set to started this work?

  12. keystone tenant-create –name=admin –description=”Admin Tenant” when i run this command i get gateway timeout (HTTP 504)

  13. Hi, I am looking for all the time where the error is 🙂 I don’t know where is the problem.. I have single serwer and one network card 🙂 Many thanks in advance. Witold

    $ tail -n 100 /var/log/nova/nova-compute.log
    2015-02-25 13:35:15.799 11394 DEBUG nova.network.base_api [-] Updating cache with info: [VIF({‘profile’: {}, ‘ovs_interfaceid’: None, ‘network’: Network({‘bridge’: None, ‘subnets’: [Subnet({‘ips’: [FixedIP({‘meta’: {}, ‘version’: 4, ‘type’: ‘fixed’, ‘floating_ips’: [], ‘address’: u’192.168.0.2′})], ‘version’: 4, ‘meta’: {‘dhcp_server’: u’192.168.0.1′}, ‘dns’: [], ‘routes’: [], ‘cidr’: u’192.168.0.0/24′, ‘gateway’: IP({‘meta’: {}, ‘version’: None, ‘type’: ‘gateway’, ‘address’: None})})], ‘meta’: {‘injected’: False, ‘tenant_id’: u’73adc5755a8d4134aa8c0f1476025693′}, ‘id’: u’d7d73d03-6ca6-4ace-ba54-3a8fd99a96d0′, ‘label’: u’Server’}), ‘devname’: u’tap05bd425c-ed’, ‘vnic_type’: u’normal’, ‘qbh_params’: None, ‘meta’: {}, ‘details’: {}, ‘address’: u’fa:16:3e:04:4f:c3′, ‘active’: False, ‘type’: u’binding_failed’, ‘id’: u’05bd425c-ed87-4800-87cc-9972f4517c9c’, ‘qbg_params’: None})] update_instance_cache_with_nw_info /usr/lib/python2.7/dist-packages/nova/network/base_api.py:40
    2015-02-25 13:35:15.814 11394 DEBUG nova.openstack.common.lockutils [-] Releasing semaphore “refresh_cache-8ed5e9ce-d872-4d12-a53a-f62448fd6d7b” lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:238
    2015-02-25 13:35:15.814 11394 DEBUG nova.compute.manager [-] [instance: 8ed5e9ce-d872-4d12-a53a-f62448fd6d7b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/dist-packages/nova/compute/manager.py:5350
    2015-02-25 13:35:15.815 11394 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call <bound method Service.periodic_tasks of > sleeping for 7.00 seconds _inner /usr/lib/python2.7/dist-packages/nova/openstack/common/loopingcall.py:132
    2015-02-25 13:35:22.815 11394 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/dist-packages/nova/openstack/common/periodic_task.py:193

    $ tail -n 100 /var/log/keystone/keystone.log
    2015-02-25 12:26:11.675 7213 ERROR keystone.catalog.core [-] Malformed endpoint http://my_server_ip:8776/v2/%(tenant_id) - incomplete format (are you missing a type notifier ?)
    2015-02-25 12:26:11.793 7211 ERROR keystone.catalog.core [-] Malformed endpoint http://my_server_ip:8776/v2/%(tenant_id) - incomplete format (are you missing a type notifier ?)
    2015-02-25 13:27:07.688 7211 ERROR keystone.catalog.core [-] Malformed endpoint http://my_server_ip:8776/v2/%(tenant_id) - incomplete format (are you missing a type notifier ?)
    2015-02-25 13:27:07.800 7211 ERROR keystone.catalog.core [-] Malformed endpoint http://my_server_ip:8776/v2/%(tenant_id) - incomplete format (are you missing a type notifier ?)
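    [Editor's note] These keystone.log lines point at a catalog endpoint URL ending in `%(tenant_id)` instead of `%(tenant_id)s`. Keystone fills the URL in with Python %-formatting, which requires the trailing `s`; without it the substitution fails with exactly this "incomplete format" error. A quick demonstration, plus a hedged sketch of the repair (the endpoint/service IDs and my_server_ip are placeholders):

    ```shell
    # With the trailing 's' the substitution works; without it, Python
    # raises "incomplete format" - the error keystone is logging:
    python3 -c 'print("http://my_server_ip:8776/v2/%(tenant_id)s" % {"tenant_id": "demo"})'
    # To repair the catalog, delete the bad endpoint and recreate it:
    #   keystone endpoint-delete <endpoint_id>
    #   keystone endpoint-create --service-id=<cinder_service_id> \
    #     --publicurl='http://my_server_ip:8776/v2/%(tenant_id)s' \
    #     --internalurl='http://my_server_ip:8776/v2/%(tenant_id)s' \
    #     --adminurl='http://my_server_ip:8776/v2/%(tenant_id)s'
    ```

    The python3 line prints the URL with the tenant ID substituted in, which is what keystone does when it builds the catalog.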

    $ tail -n 100 /var/log/neutron/neutron-dhcp-agent.log
    2015-02-25 13:40:32.837 19055 ERROR neutron.agent.dhcp_agent [-] Unable to enable dhcp for d7d73d03-6ca6-4ace-ba54-3a8fd99a96d0.
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent Traceback (most recent call last):
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent File "/usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py", line 128, in call_driver
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent getattr(driver, action)(**action_kwargs)
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 209, in enable
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent interface_name = self.device_manager.setup(self.network)
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 1119, in setup
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent namespace=network.namespace)
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/interface.py", line 232, in plug
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent self.check_bridge_exists(bridge)
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/interface.py", line 168, in check_bridge_exists
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent raise exceptions.BridgeDoesNotExist(bridge=bridge)
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent BridgeDoesNotExist: Bridge br-int does not exist.
    2015-02-25 13:40:32.837 19055 TRACE neutron.agent.dhcp_agent
    2015-02-25 13:40:32.839 19055 INFO neutron.agent.dhcp_agent [-] Synchronizing state complete

    $ ovs-vsctl list-br
    br-eth0
    br-eth1
    br-ex
    br-int

      1. Hi, I tried to run this configuration on Debian, but kernel 3.14 does not have a headers package such as image-header-3.14. I have now reinstalled the OS and use Ubuntu 14.04. Everything works 🙂 but I have a problem with ovs-vsctl. When I create this configuration:
        ovs-vsctl add-br br-eth0
        ovs-vsctl add-port br-eth0 eth0
        ifconfig br-eth0 MY_ADDRESS_IP up
        ip link set br-eth0 promisc on
        ip link add proxy-br-eth1 type veth peer name eth1-br-proxy
        ip link add proxy-br-ex type veth peer name ex-br-proxy
        ovs-vsctl add-br br-eth1
        ovs-vsctl add-br br-ex
        ovs-vsctl add-port br-eth1 eth1-br-proxy
        ovs-vsctl add-port br-ex ex-br-proxy
        ovs-vsctl add-port br-eth0 proxy-br-eth1
        ovs-vsctl add-port br-eth0 proxy-br-ex
        ip link set eth1-br-proxy up promisc on
        ip link set ex-br-proxy up promisc on
        ip link set proxy-br-eth1 up promisc on
        ip link set proxy-br-ex up promisc on

        and change the values in /etc/neutron/plugins/ml2/ml2_conf.ini:
        flat_networks=External_network
        network_vlan_ranges=Physnet1:100:200
        bridge_mappings=Physnet1:br-eth1,External_network:br-ex

        my network dies 🙂 where is the problem? I have only one IP (a Kimsufi server, and I cannot get more).

      2. Hi, is it that you are saving all these commands in a .sh file and running them from a remote machine?

      3. I log on to the host via ssh and run these commands in bash (I do not make an sh file and do not use anything such as Ansible).

      4. I believe you lose connectivity as soon as you run 'ovs-vsctl add-port br-eth0 eth0'. This is because the host machine loses control of the port once it is added to Open vSwitch. It is necessary to carry out these steps directly with a keyboard connected to the server. In case you do not have direct access to the server, look for
        Vinay's comment on 'https://fosskb.wordpress.com/2014/06/10/managing-openstack-internaldataexternal-network-in-one-interface/' about adding the port configuration to '/etc/network/interfaces' and rebooting the system.
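        [Editor's note] If no console is available, another common approach is to swap the IP over to the bridge in a single remote invocation, so the connection only drops for an instant. A hedged sketch (10.0.0.1/24 matches the guide's example address; the gateway is a placeholder for your own):

        ```shell
        # Run as one chained command over ssh: once eth0 joins the bridge
        # the host loses its IP, so the bridge must take it over immediately.
        ovs-vsctl add-br br-eth0 && \
        ovs-vsctl add-port br-eth0 eth0 && \
        ip addr flush dev eth0 && \
        ip addr add 10.0.0.1/24 dev br-eth0 && \
        ip link set br-eth0 up && \
        ip route add default via 10.0.0.254
        ```

        This survives only until reboot; make it persistent in /etc/network/interfaces afterwards, as described in the comment above.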

      5. Thanks, now this working! 🙂 Tomorrow I write how to proceed on my server configuration KIMSUFI. Once again, many thanks for your help 🙂

      6. How to configure the network without losing the connection (I have only eth0):
        $ ovs-vsctl add-br br-eth0
        $ ovs-vsctl add-port br-eth0 eth0

        Now I have two minutes to edit the interfaces file and reboot the system. Open vim /etc/network/interfaces and change it as shown below:
        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet manual
        ovs_bridge br-eth0
        ovs_type OVSPort
        address 0.0.0.0
        # address 91.121.xxx.xxx
        # netmask 255.255.255.0
        # network 91.xxx.xxx.xxy
        # broadcast 91.xxx.xxx.255
        # gateway 91.xxx.xxx.254

        # The bridge interface
        auto br-eth0
        iface br-eth0 inet static
        address 91.121.xxx.xxx
        netmask 255.255.255.0
        network 91.xxx.xxx.xxy
        broadcast 91.xxx.xxx.255
        gateway 91.xxx.xxx.254
        ovs_type OVSBridge
        ovs_ports eth0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0

        Now you must reboot quickly. If you lose the connection, log in to the Kimsufi panel, select Net Boot -> Rescue Pro, and click Reboot. OVH sends an email with the root password. Log in over the KVM, mount the disk, fix the configuration (when you finish, remember to unmount the disk), then select Net Boot -> Hard Drive Boot and click Reboot again. Now you can finish setting up the bridges:
        $ ip link set br-eth0 promisc on
        $ ip link add proxy-br-eth1 type veth peer name eth1-br-proxy
        $ ip link add proxy-br-ex type veth peer name ex-br-proxy
        $ ovs-vsctl add-br br-eth1
        $ ovs-vsctl add-br br-ex
        $ ovs-vsctl add-port br-eth1 eth1-br-proxy
        $ ovs-vsctl add-port br-ex ex-br-proxy
        $ ovs-vsctl add-port br-eth0 proxy-br-eth1
        $ ovs-vsctl add-port br-eth0 proxy-br-ex
        $ ip link set eth1-br-proxy up promisc on
        $ ip link set ex-br-proxy up promisc on
        $ ip link set proxy-br-eth1 up promisc on
        $ ip link set proxy-br-ex up promisc on

        And remember to change the file /etc/neutron/plugins/ml2/ml2_conf.ini:
        flat_networks=External_network
        network_vlan_ranges=Physnet1:100:200
        bridge_mappings=Physnet1:br-eth1,External_network:br-ex

  14. Hi, your tutorial is wonderful! But I have a problem 🙂 I use a single server with one network interface (I created the network using your article: https://fosskb.wordpress.com/2014/06/10/managing-openstack-internaldataexternal-network-in-one-interface/). When I create an instance it always shows:
    Error: Failed to launch instance "1": Please try again later [Error: No valid host was found. Exceeded max scheduling attempts 3 for instance 3bdf2585-4567-4396-bdd6-fd751488cf86. Last exception: [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2033, in _do].
    I checked for errors with tail /var/log/nova/nova-compute.log:
    2015-02-20 11:52:06.638 32124 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/dist-packages/nova/openstack/common/periodic_task.py:193
    2015-02-20 11:52:06.639 32124 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call <bound method Service.periodic_tasks of > sleeping for 3.00 seconds _inner /usr/lib/python2.7/dist-packages/nova/openstack/common/loopingcall.py:132
    Where is the problem? My machine has 4 cores, 16 GB RAM and a 2 TB HDD. I create a tiny instance (1 core, 512 MB RAM, 5 GB HDD) using CirrOS, so the problem is not the hardware requirements. Could you help me? 🙂

  15. Hello All,
    When I type the command: mysql -u root -p
    I get the following error message:
    mysql: unknown variable 'default-storage-engine=MYISAM'

    Kindly let me know the problem, as I am not able to move further.

    Many thanks in advance.

    Tushar

    1. Hi,

      Have you edited the following lines in /etc/mysql/my.cnf?

      bind-address = 0.0.0.0
      [mysqld]

      default-storage-engine = innodb
      innodb_file_per_table
      collation-server = utf8_general_ci
      init-connect = 'SET NAMES utf8'
      character-set-server = utf8

      1. Yes sir, I have edited the /etc/mysql/my.cnf file with the above lines,
        but the output is still the same.
        There are two [mysqld] tags, so I am just showing where I have edited.
        1.
        [mysqld]
        #
        # * Basic Settings
        #
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 0.0.0.0

        2.
        [mysql]
        default-storage-engine = innodb
        innodb_file_per_table
        collation-server = utf8_general_ci
        init-connect = 'SET NAMES utf8'
        character-set-server = utf8
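        [Editor's note] The second block above sits under `[mysql]`, which is the command-line client section, not `[mysqld]`, the server section. The client cannot parse server-only variables such as `default-storage-engine`, which produces exactly this kind of "unknown variable" error when running mysql (the MYISAM value in the reported error suggests a similar stray setting elsewhere). The settings belong in the existing `[mysqld]` section:

        ```
        [mysqld]
        ...
        bind-address = 0.0.0.0
        default-storage-engine = innodb
        innodb_file_per_table
        collation-server = utf8_general_ci
        init-connect = 'SET NAMES utf8'
        character-set-server = utf8
        ```

        After moving the options and removing them from `[mysql]`, restart MySQL with service mysql restart.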

  16. We have successfully installed OpenStack and tried to run an instance. We can ping it from its internal network, but when we try to ping it from the external network it cannot be reached, even though the internet connection of the external network is fine. What do you think is the solution for this? Any help will be very much appreciated.

  17. Hello everyone, good day. I am having a problem with the networking of the instances through the external network. When I try:
    ovs-vsctl add-br br-int
    ovs-vsctl add-br br-eth1
    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-eth1 eth1
    ovs-vsctl add-port br-ex eth2

    the connection is refused and the dashboard is no longer accessible when I try to log in again.
    Thank you for your reply 🙂

      1. Thank you for your reply. I used a physical server to install OpenStack. Many of the users I found use VMware or VirtualBox when installing OpenStack, so I am confused about the network settings. 🙂

      2. Which IP address do you use to connect to the server, and which interface of the server (eth0/eth1/eth2) is configured with this IP address? The point is that once you add a port to Open vSwitch, the server can no longer use it; that is why you lose connectivity. If you use eth0 for connecting to the server, then you can use eth1 and eth2 for OpenStack. I hope you understand what I mean.

      3. Which interface is the IP (202.x.x.x) configured on: eth0, eth1 or eth2? You can check that using the ifconfig command.

  18. Hi, this is an amazing tutorial. I have been running DevStack for over a year now, but I finally understood each component needed to get OpenStack working. Now everything works except for the networking: I can get instances up but can't ping or SSH into them. My IP was 192.168.1.219 instead of the 10.X.X.X that you used. I see that the host machine has a new virbr0 interface 192.168.122.1. However, the VMs that are launched have an eth0 interface with only IPv6 addresses. I have created two networks with addresses 192.168.1.0/24 and 192.168.122.0/24. VMs launched in both networks seem to be unreachable.

  19. I followed your install and everything went smoothly, but I got stuck creating instances. I create a new instance from the CirrOS image, but it fails while creating an image for the instance. Could you help me fix it?

    /var/log/nova/nova-compute.log:

    2015-01-12 16:33:02.539 2551 AUDIT nova.compute.manager [req-84b2e749-b27a-4acb-be60-5f1818e453f2 None] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] Starting instance…
    2015-01-12 16:33:02.763 2551 AUDIT nova.compute.claims [-] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] Attempting claim: memory 512 MB, disk 1 GB
    2015-01-12 16:33:02.764 2551 AUDIT nova.compute.claims [-] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] Total memory: 16015 MB, used: 512.00 MB
    2015-01-12 16:33:02.764 2551 AUDIT nova.compute.claims [-] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] memory limit: 24022.50 MB, free: 23510.50 MB
    2015-01-12 16:33:02.764 2551 AUDIT nova.compute.claims [-] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] Total disk: 492 GB, used: 0.00 GB
    2015-01-12 16:33:02.765 2551 AUDIT nova.compute.claims [-] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] disk limit not specified, defaulting to unlimited
    2015-01-12 16:33:02.773 2551 AUDIT nova.compute.claims [-] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] Claim successful
    2015-01-12 16:33:02.967 2551 INFO nova.scheduler.client.report [-] Compute_service record updated for ('test-cluster', 'test-cluster')
    2015-01-12 16:33:03.242 2551 INFO nova.scheduler.client.report [-] Compute_service record updated for ('test-cluster', 'test-cluster')
    2015-01-12 16:33:04.092 2551 INFO nova.virt.libvirt.driver [-] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] Creating image
    2015-01-12 16:33:04.167 2551 INFO nova.scheduler.client.report [-] Compute_service record updated for ('test-cluster', 'test-cluster')
    2015-01-12 16:33:04.814 2551 INFO nova.virt.disk.vfs.api [-] Unable to import guestfs, falling back to VFSLocalFS
    2015-01-12 16:33:05.205 2551 ERROR nova.compute.manager [-] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] Instance failed to spawn
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] Traceback (most recent call last):
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2249, in _build_resources
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] yield resources
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2119, in _build_and_run_instance
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] block_device_info=block_device_info)
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2619, in spawn
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] write_to_disk=True)
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4150, in _get_guest_xml
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] context)
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3936, in _get_guest_config
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] flavor, CONF.libvirt.virt_type)
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 352, in get_config
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] _("Unexpected vif_type=%s") % vif_type)
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107] NovaException: Unexpected vif_type=binding_failed
    2015-01-12 16:33:05.205 2551 TRACE nova.compute.manager [instance: aafe3871-ff5c-4416-9261-d72faaeac107]
    2015-01-12 16:33:05.207 2551 AUDIT nova.compute.manager [req-84b2e749-b27a-4acb-be60-5f1818e453f2 None] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] Terminating instance
    2015-01-12 16:33:05.211 2551 WARNING nova.virt.libvirt.driver [-] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] During wait destroy, instance disappeared.
    2015-01-12 16:33:05.317 2551 INFO nova.virt.libvirt.driver [-] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] Deleting instance files /var/lib/nova/instances/aafe3871-ff5c-4416-9261-d72faaeac107_del
    2015-01-12 16:33:05.318 2551 INFO nova.virt.libvirt.driver [-] [instance: aafe3871-ff5c-4416-9261-d72faaeac107] Deletion of /var/lib/nova/instances/aafe3871-ff5c-4416-9261-d72faaeac107_del complete
    2015-01-12 16:33:05.556 2551 INFO nova.scheduler.client.report [-] Compute_service record updated for ('test-cluster', 'test-cluster')

    1. You need to make sure you have correctly specified the glance service password in /etc/glance/glance-api.conf. It should be the same as in "keystone user-create …".
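    [Editor's note] The glance password is worth checking, but `NovaException: Unexpected vif_type=binding_failed` in the trace above usually means Neutron could not bind the port, most often because the Open vSwitch agent is down or the integration bridge is missing. A hedged diagnostic sketch:

    ```shell
    # All agents should report alive (:-)); a dead Open vSwitch agent is
    # the usual culprit behind binding_failed:
    neutron agent-list
    # The integration bridge must exist:
    ovs-vsctl br-exists br-int || ovs-vsctl add-br br-int
    # Restart the agent and retry the boot:
    service neutron-plugin-openvswitch-agent restart
    ```

    These commands assume the single-machine layout from the guide, where the OVS agent runs on the same host as nova-compute.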

  20. Hi JohnsonD,

    Thank you for the how-to; I am going to try this soon. Just one question: how would we add the hypervisors? In my case I have a physical box running XenServer and another box running ESX. I want to add my Xen box to this controller. I would be grateful if you could share the procedure and requirements. Thanks.

    kind regards

  21. I tried this (and the install guide from docs.openstack.org) and there is a problem with Glance: I get an HTTP 500 error. Not sure why; Keystone seems to be working OK, but Glance has issues. I have uninstalled, reinstalled and re-provisioned Glance with no joy… Any ideas? 🙂

  22. Attaching cinder volumes in 14.10 is broken, along with many other things (systemd, etc.). 14.10 was just slapped together. A very poor, inexcusable job, Canonical.

    1. We need not add the Juno repository in Ubuntu 14.10, as this version of Ubuntu is bundled with Juno packages by default. We need to add the Juno repository only for older versions of Ubuntu.
