OpenStack users with busy schedules often want a stable, tested and repeatable method to deploy their private cloud without human intervention. There are many ways to do this, of which DevStack and Packstack are the most popular, but they are not the only ones available. In this post we summarize yet another method, with its own advantages.
The project makes use of an infrastructure management tool called SaltStack. This method scales from a single machine setup to a multi machine setup with almost no effort from the user, and is repeatable. You may skip the next section if you want to start right away.
SaltStack Overview
SaltStack works in a master/minion style. The master schedules, configures and executes actions on the minions. The minions carry out their commanded actions and return the results to the master. Simple as it may sound, under the hood SaltStack packs a lot of potential. Below is a gist of a few Salt features.
Execution Modules
Remote execution modules are the commands that a master can execute on a minion. These are the ultimate actions a master may perform on a minion, and the list is large and keeps growing, as you would expect from an open source technology.
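For example, two stock execution modules invoked from the master (the target '*' means every minion; the package name is just an illustration):

salt '*' pkg.install tmux
salt '*' cmd.run 'uname -a'

The first installs a package through whatever package manager the minion uses; the second runs an arbitrary shell command and returns its output.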
State Modules
States describe the condition a minion is supposed to be in: you declare the desired state, and the state modules internally use execution modules to achieve it. Although the existing list of state modules is quite large, users may add more as they wish.
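As a minimal sketch of a state file (the file name and package here are illustrative, not part of this project's formulas):

# /srv/salt/webserver.sls -- illustrative example only
apache2:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache2

Applying it with 'salt '*' state.sls webserver' makes every targeted minion converge on that state: package present, service running.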
Pillar Data
Pillars can be viewed as metadata, defined on the master and made available to the minion, to be used by your state and execution modules. In this project we shall make use of pillar data to customize our OpenStack installation.
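A sketch of what that looks like, with illustrative keys that are not this project's actual schema:

# /srv/pillar/db.sls -- illustrative keys only
mysql:
  root_password: supersecret

A state can then read the value with jinja, e.g. {{ pillar['mysql']['root_password'] }}, and you can inspect what a minion sees with 'salt '*' pillar.items'.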
Grains
Grains are pieces of information about the minion collected by Salt and made available to you. Using them you can detect the host's hardware and software and adjust your automation steps accordingly.
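For example, you can query grains from the master or use them to target minions (both are standard Salt usage):

salt '*' grains.item os osrelease
salt -G 'os:Ubuntu' test.ping

The first shows selected grains for every minion; the second runs a command only on minions whose 'os' grain is 'Ubuntu'.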
SaltStack has many more tools and tricks that you would want to explore.
Getting Started
First install salt-master on the machine that you will use to control the installation, and salt-minion on all machines on which you plan to host your OpenStack. Salt installation is OS dependent. Once done, edit your ‘salt-master’ configuration file at ‘/etc/salt/master’ and update it with the following contents.
fileserver_backend:
  - roots
  - git

gitfs_remotes:
  - https://github.com/Akilesh1597/salt-openstack.git

gitfs_root: file_root

ext_pillar:
  - git: icehouse https://github.com/Akilesh1597/salt-openstack.git root=pillar_root

jinja_trim_blocks: True
jinja_lstrip_blocks: True
This will create a new environment called ‘icehouse’ in your state tree. The ‘file_root’ directory of the GitHub repo holds the state definitions, also called salt formulas, as a bunch of ‘.sls’ files, while the ‘pillar_root’ directory holds your cluster definition files.
For convenience’s sake I am assuming the ‘salt-master’ is running on ‘192.168.1.10’ and you have a single host running ‘salt-minion’ on ‘192.168.1.11’.
On the minion machines edit ‘/etc/hosts’ and add the following entry.
192.168.1.10 salt
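If you prefer a one-liner, appending the entry works just as well (standard shell, nothing Salt specific):

echo '192.168.1.10 salt' >> /etc/hosts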
Then set the ID for each minion. The master identifies each minion with its ID.
echo 'openstack.icehouse' > /etc/salt/minion_id
Restart the ‘salt-minion’ service, using the appropriate command for your distro; for example:

service salt-minion restart
Your minion should now register its key with the master. You can check that using:
root@bantu:~# salt-key -L
Accepted Keys:
Unaccepted Keys:
openstack.icehouse
Rejected Keys:
Approve your minion.
salt-key -a -y openstack.icehouse
Once done, your master will be able to execute remote commands on the minion, which is how we will automate the OpenStack install. To test this:
salt 'openstack.icehouse' test.ping
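A healthy minion answers with its ID and ‘True’; the output should look roughly like this:

openstack.icehouse:
    True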
At this stage you have a single machine, with its ID set to ‘openstack.icehouse’ and IP address ‘192.168.1.11’, registered with the salt master.
Now let’s begin the installation:
salt-run fileserver.update
salt '*' saltutil.sync_all
salt -C 'I@cluster_type:icehouse' state.highstate
This instructs the minions that have ‘cluster_type: icehouse’ defined in their pillar data to download all the formulas defined for them and execute them. If all goes well you can log in to your newly installed OpenStack setup at ‘http://192.168.1.11/horizon’.
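If you would rather preview what the highstate will change before applying it, Salt’s dry-run flag works here as well (an optional extra step, not part of the walkthrough above):

salt -C 'I@cluster_type:icehouse' state.highstate test=True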
But that is not all. We can have a fully customized install with multiple hosts performing different roles, customized accounts and databases. All this can be done simply by manipulating the pillar data.
Customizations
To do a customized install first fork the repo to your git account, clone it to your disk and start changing the pillar data as per your needs.
For example, to do a multi-server install you may add more hosts in ‘cluster_resources.sls’ inside ‘pillar_root’. Please read the project ‘README‘ for other customizations.
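As a purely hypothetical sketch of the kind of edit involved (the real keys and layout are defined by the project, so follow the existing ‘cluster_resources.sls’ and the ‘README’ rather than this snippet):

# pillar_root/cluster_resources.sls -- hypothetical fragment, check the real file for the exact schema
hosts:
  controller.icehouse: 192.168.1.11
  compute1.icehouse: 192.168.1.12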
Once you have made the changes, commit and push them to your repo. Then modify your salt master to use your GitHub repo instead of ‘ours‘ and enjoy the installation.
gitfs_remotes:
  - https://github.com/your-name/salt-openstack.git

ext_pillar:
  - git: icehouse https://github.com/your-name/salt-openstack.git root=pillar_root
If you intend to install the ‘juno’ version of OpenStack, simply use the ‘juno‘ branch instead of ‘icehouse‘. Do write back to us with feature requests or issues.