In this blog, I will cover the steps to install the OpenStack Icehouse release using DevStack, for both single-node and multi-node setups. In a single-node installation, the control and compute services run in the same VM. In a multi-node installation, the control and compute services run in one VM/host and an additional compute instance runs in a separate VM/host; we can spin up as many compute instances as needed. With 12 GB RAM in my host, I was running out of memory with a single control-and-compute instance plus a bunch of host applications running on my system.
My host setup:
- Windows 7 with VirtualBox 4.3.10
- 12 GB RAM
I have created OVA files for both the controller and compute instances that can be downloaded from the links below. Import the OVA files into a VM manager such as VirtualBox to try this out. These VMs are running Ubuntu 12.04.
Alternate locations for downloading VMs in case the Dropbox link above does not work:
The following are the steps I used to create the VMs above. If you are downloading the OVA files, you don’t need to follow these steps.
- Clean install of the Ubuntu 12.04 ISO.
- Set up the controller instance with 4 CPUs and 4 GB RAM, and the compute instance with 2 CPUs and 2 GB RAM. The controller instance needs more memory since the majority of OpenStack components run there.
- In the VirtualBox networking options, use one NAT interface and one host-only interface.
- Do OS updates using “sudo apt-get update”, “sudo apt-get upgrade”, “sudo apt-get dist-upgrade”.
- Reboot the system.
- Install git – “sudo apt-get install git”.
- Get the Icehouse stable code – “git clone -b stable/icehouse https://github.com/openstack-dev/devstack.git”.
- Update “localrc” in the “devstack” directory. The “localrc” file tells the “stack.sh” script which OpenStack services to start. The controller localrc can be got from here, and the compute localrc from here. In the localrc files, set the “OFFLINE” flag to “False” the very first time stacking is done; this allows updates to be fetched on the first run. From then on, the “OFFLINE” flag can be set to “True”.
- Log in to both the controller and compute instances using username “openstack” and password “openstack”.
- First, do the stacking for the controller VM.
- Change to the devstack directory – “cd devstack”.
- Open the “localrc” file and search for “EDITME”. Change HOST_IP to the IP address assigned to your VM’s eth1 interface.
- Stack it – “./stack.sh”.
- Repeat the same steps for the compute VM. In its localrc, change HOST_IP to the IP address of the compute instance and SERVICE_HOST to the IP address of the control instance.
- A few packages will be missing from the default Ubuntu installation, and “stack.sh” will complain about them. Just install these packages and run “stack.sh” again. The ones I remember are “curl”, “pip”, “python-dev”, and “netifaces”.
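The “EDITME” edit in the steps above can also be scripted. This is a minimal sketch against a stand-in localrc; the file contents and the 192.168.56.101 address are made-up examples, not the real controller localrc:

```shell
# Stand-in for the real devstack localrc (illustration only)
cat > localrc.sample <<'EOF'
HOST_IP=EDITME
OFFLINE=False
EOF

# Replace the EDITME placeholder with the VM's eth1 address
# (192.168.56.101 is a made-up example; use your own address)
sed -i 's/^HOST_IP=.*/HOST_IP=192.168.56.101/' localrc.sample

grep '^HOST_IP=' localrc.sample
```

The same one-liner works on the real localrc once you substitute your own address.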
If only the control instance is used, VMs are spawned on the control instance itself. If both the control and compute instances are used, VMs are scheduled in a round-robin fashion between them. If the steps above succeed, you can access the OpenStack Horizon interface using the host IP address.
Demo of Single node Openstack Icehouse install and usage:
Demo of Multi node install and usage:
Ubuntu 14.04 update:
Recently, I tried DevStack with Ubuntu 14.04, and the “localrc” files mentioned above work with 14.04 as well; I had to install a few additional packages to make DevStack work. Multiple folks had raised issues with the OpenStack installation on 14.04 using the procedure above. One thing I realized is that since the “OFFLINE” flag in localrc is “false” for the first run of “stack.sh”, package dependencies change over time and there cannot be a standard set of packages that will work all the time. To simplify this, I have created two options:
- I have created an OVA file similar to the one I created above for Ubuntu 12.04. It can be downloaded from here and imported into VirtualBox or VMware Player. I have kept the “OFFLINE” flag as “true” in localrc; you can keep it that way and just change the IP address to your own. Keeping the “OFFLINE” flag as “true” avoids new package dependencies. After you download the OVA file and start the VM, you need to change the IP address in “localrc” and then run “stack.sh”.
- I have created a Vagrantfile with the configuration and steps needed to install Icehouse DevStack on Ubuntu 14.04. For more details on Vagrant, please refer to my other blog on Vagrant. The following steps apply to this option. Before doing them, install VirtualBox and Vagrant. I have set the IP address to “192.168.60.100” in the Vagrantfile; you can change it there if you need a different address for your environment. This will be a host-only IP address in VirtualBox.
git clone https://github.com/smakam/vagrant.git
cd vagrant/devstack
vagrant up
I have another project under vagrant/odl in the same repository; you can ignore it. (I need to figure out a way for git to check out sub-directories of a repository.) After the steps above, you will see the VM running inside VirtualBox. To start an ssh session to the VM, run “vagrant ssh” from the same directory.
To do the stacking, change to the “devstack” directory and edit the IP address in “localrc” to the address you set in the Vagrantfile. Set the “OFFLINE” flag to “false” for the first run and to “true” thereafter. I tested the Vagrantfile in a Windows environment. Since the host-only IP address in VirtualBox is accessible from the localhost, you can access the Horizon UI directly from the local web browser using the IP address specified in the Vagrantfile.
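The OFFLINE toggle between the first and subsequent runs is a one-line edit. A sketch against a stand-in localrc (the file below is an illustration, not the real one; 192.168.60.100 is the Vagrantfile default mentioned above):

```shell
# Stand-in localrc with the Vagrantfile's default address (illustration only)
printf 'OFFLINE=true\nHOST_IP=192.168.60.100\n' > localrc.demo

# First run: allow stack.sh to download packages
sed -i 's/^OFFLINE=.*/OFFLINE=false/' localrc.demo
grep '^OFFLINE=' localrc.demo    # OFFLINE=false

# After a successful first run, go back to offline mode
sed -i 's/^OFFLINE=.*/OFFLINE=true/' localrc.demo
grep '^OFFLINE=' localrc.demo    # OFFLINE=true
```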
- stack.sh – starts the stacking.
- unstack.sh – cleans up the stacking done. I use it when I change the local configuration and want to redo the stacking.
- clean.sh – cleans up the stacking environment completely.
- rejoin_stack.sh – connects to the existing screen session. This is very helpful when we restart the VM and want to reconnect to the previous stacking session.
Stacking configuration variables were specified in “localrc”. Recently, a new format (“local.conf”) was introduced for storing configuration variables, as described here.
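Under the new format, the variables that used to live in “localrc” move into a “[[local|localrc]]” section of a “local.conf” file. A minimal sketch, with placeholder values:

```
[[local|localrc]]
HOST_IP=192.168.56.101
OFFLINE=True
```

The older standalone “localrc” continues to work, so the files referenced in this blog do not need to be converted.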
Debugging issues with stacking:
The first step is to look at the logs under “/opt/stack/logs”. There is a summary log file and a more detailed log file that could contain the reason for the failure. Logs for each OpenStack service can be viewed using screen. Screen keeps the terminal/session active even when we detach from it, and it allows us to view multiple terminals in a single terminal. I was not familiar with screen earlier; the following commands are useful:
- screen -ls – list screen sessions
- screen -x – attach to a particular screen session
- ./rejoin_stack.sh – open a screen session for DevStack
- ctrl-a n – next screen
- ctrl-a p – previous screen
- ctrl-a [ – copy mode, which allows us to scroll the screen using the arrow keys
- ctrl-] – stop copy mode
- ctrl-a <number> – go to a particular screen
- ctrl-a d – detach from the screen
- ctrl-a " – list all screens, allowing us to scroll down to the particular screen we need
To stop and restart a service inside its screen window:
- ctrl-c – stops the service
- press the up arrow to recall the previous command, then run it to start the service again