OpenStack:Folsom-Multinode


Overview

In the Cisco OpenStack distribution, a build server outside of the OpenStack cluster is used to manage and automate the OpenStack software deployment. This build server primarily functions as a Puppet server for software deployment and configuration management onto the OpenStack cluster, as well as a Cobbler installation server to manage the PXE boot used for rapid bootstrapping of the OpenStack cluster.

Once the build server is installed and configured, it is used as an out-of-band automation and management workstation to bring up, control, and reconfigure (if later needed) the nodes of the OpenStack cluster. It also functions as a monitoring server to collect statistics about the health and performance of the OpenStack cluster, as well as to monitor the availability of the machines and services which comprise the OpenStack cluster.

Building the environment

Creating a build server

To deploy Cisco OpenStack, first configure a build server. This server has relatively modest hardware requirements: at least 2 GB of RAM, 20 GB of storage, Internet connectivity, and a network interface on the same network as the eventual management interfaces of the OpenStack cluster machines. The machine can be physical or virtual; a pre-built VM of this server will eventually be provided, but it is not yet available.

Install Ubuntu 12.04 LTS onto this build server. A minimal install with openssh-server is sufficient. Configure the network interface on the OpenStack cluster management segment with a static IP. Also, when partitioning the storage, choose a partitioning scheme which provides at least 15 GB free space under /var, as installation packages and ISO images used to deploy OpenStack will eventually be cached there.
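
For reference, on Ubuntu 12.04 a static address is normally configured in /etc/network/interfaces. The interface name and addresses below are placeholders only; substitute values appropriate to your management network:

auto eth0
iface eth0 inet static
    address 192.168.100.2
    netmask 255.255.255.0
    gateway 192.168.100.1
    dns-nameservers 192.168.100.1

Restart networking (or reboot the build server) after editing the file so the new address takes effect.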

When the installation finishes, log in to the build server.

Optional: If you have your build server set up behind a non-transparent web proxy, you should export your proxy configuration:

export http_proxy=http://proxy.esl.cisco.com:80
export https_proxy=https://proxy.esl.cisco.com:80

Replace proxy.esl.cisco.com:80 with whatever is appropriate for your environment.
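
If you also want the proxy settings to persist across reboots and to apply to apt runs outside your interactive shell, one option (the file name and proxy address here are examples only) is to add an apt proxy snippet:

echo 'Acquire::http::Proxy "http://proxy.esl.cisco.com:80";' > /etc/apt/apt.conf.d/00proxy
echo 'Acquire::https::Proxy "https://proxy.esl.cisco.com:80";' >> /etc/apt/apt.conf.d/00proxy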

You should now install any pending security updates:

apt-get update && apt-get dist-upgrade -y

Note: The system may need to be restarted after applying the updates.

Next, install a few additional required packages and their dependencies:

apt-get install -y puppet git ipmitool debmirror

Get the Cisco Edition puppet modules from Cisco's GitHub repository:

git clone --recursive -b folsom https://github.com/CiscoSystems/puppet-root ~/cisco-folsom-modules/

Copy the puppet modules from ~/cisco-folsom-modules/modules/ to /etc/puppet/modules/:

cp -r ~/cisco-folsom-modules/modules/ /etc/puppet/

Also, get the Cisco Edition example manifests. The folsom-manifests GitHub repository contains several branches, so select the one that most closely matches your planned topology. The following examples use the simple-multi-node branch, which is likely the most common topology:

git clone -b simple-multi-node https://github.com/CiscoSystems/folsom-manifests ~/cisco-folsom-manifests/

Copy the puppet manifests from ~/cisco-folsom-manifests/manifests/ to /etc/puppet/manifests/:

cp ~/cisco-folsom-manifests/manifests/* /etc/puppet/manifests

Optional: If your setup is on a private network and your build node will act as a proxy server and NAT gateway for your OpenStack cluster, you need to add the corresponding NAT and forwarding configuration:

iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface eth1 -j ACCEPT
echo 1 > /proc/sys/net/ipv4/ip_forward

Adjust the network interface specifications as appropriate for your network topology.
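
Note that the iptables rules and the ip_forward setting above do not survive a reboot. One way to make them persistent (an example approach, not the only one):

# keep IPv4 forwarding enabled at boot
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
# save the current iptables rules and restore them at boot
apt-get install -y iptables-persistent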

Customizing the build server

In the /etc/puppet/manifests directory you will find these three files:

site.pp
cobbler-node.pp
clean_node.sh

At a high level, cobbler-node.pp defines the hardware properties of the individual servers being deployed in the OpenStack cluster. site.pp defines the various parameters that must be set to configure the OpenStack cluster, and also provides the configuration settings for the build server. clean_node.sh is a shell script provided as a convenience to end users; it wraps several cobbler and puppet commands for ease of use when building and rebuilding the nodes of the OpenStack cluster.

IMPORTANT! You must edit these files. They are fairly well documented internally, but please comment with any questions. You can also read through these documents for more details: Cobbler Node and Site
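
As a rough illustration of what the edits look like: site.pp is mostly a set of top-scope Puppet variables, and cobbler-node.pp declares one resource per physical server. The variable and parameter names below are purely illustrative, not the real ones; use the names and comments documented in the files themselves:

# site.pp (illustrative only -- the real variable names are documented in the file)
$domain_name    = "example.com"
$ntp_servers    = ["ntp.example.com"]
$admin_password = "Cisco123"

# cobbler-node.pp (illustrative only)
cobbler_node { "control":
  mac           => "00:11:22:33:44:55",   # MAC of the node's PXE-boot interface
  ip            => "192.168.100.10",      # management IP to assign to the node
  power_address => "192.168.100.110",     # IPMI/CIMC address used for power control
}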

Then, use the ‘puppet apply’ command to activate the manifests:

puppet apply -v /etc/puppet/manifests/site.pp

When the puppet apply command runs, the puppet client on the build server will follow the instructions in the site.pp and cobbler-node.pp manifests and will configure several programs on the build server:

  • ntpd -- a time synchronization server used on all OpenStack cluster nodes to ensure time throughout the cluster is correct
  • tftpd-hpa -- a TFTP server used as part of the PXE boot process when OpenStack nodes boot up
  • dnsmasq -- a DNS and DHCP server used as part of the PXE boot process when OpenStack nodes boot up
  • cobbler -- an installation and boot management daemon which manages the installation and booting of OpenStack nodes
  • apt-cacher-ng -- a caching proxy for package installations, used to speed up package installation on the OpenStack nodes
  • nagios -- an infrastructure monitoring application, used to monitor the servers and processes of the OpenStack cluster
  • collectd -- a statistics collection daemon, used to gather performance and resource-usage metrics from the OpenStack cluster nodes
  • graphite -- a real-time graphing system for displaying metrics and statistics about OpenStack

A reboot of the build server at this point is advised, as the puppetmaster service does not always restart correctly otherwise.
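
After the reboot, it is worth a quick sanity check that the key services came up. Exact service names can vary slightly between releases, but something like the following should succeed:

cobbler check
service ntp status
service dnsmasq status
service tftpd-hpa status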

If everything worked, the systems listed in cobbler-node.pp will be defined in cobbler:

# cobbler system list
   control
   compute01
   compute02
# 
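
You can also confirm that cobbler has a distribution and profile available to install (the exact names depend on what the manifests imported):

cobbler distro list
cobbler profile list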

And now, you should be able to use cobbler to build your controller:

/etc/puppet/manifests/clean_node.sh {node_name} example.com

or if you want to do it for all of the nodes defined in your cobbler-node.pp file:

for n in `cobbler system list`; do /etc/puppet/manifests/clean_node.sh $n example.com ; done

Note: Replace example.com with your nodes' proper domain name.

clean_node.sh is a script which does several things:

  • configures Cobbler to PXE boot the specified node with appropriate PXE options to do an automated install of Ubuntu
  • uses Cobbler to power-cycle the node
  • removes any existing client registrations for the node from Puppet, so Puppet will treat it as a new install
  • removes any existing key entries for the node from the SSH known hosts database

When the script runs, you may see errors from the Puppet and SSH clean up steps if the machine did not already exist in Puppet or SSH. This is expected, and not a cause for alarm.
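
For reference, the steps above correspond roughly to commands like the following. This is only a sketch of the script's behavior, not its actual contents; the node and domain names are placeholders:

NODE=control ; DOMAIN=example.com
cobbler system edit --name=$NODE --netboot-enabled=true   # re-arm PXE install for the node
cobbler system reboot --name=$NODE                        # power-cycle the node via its power controller
puppet cert --clean $NODE.$DOMAIN                         # remove any old puppet certificate
ssh-keygen -R $NODE.$DOMAIN                               # drop any stale SSH host key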

Testing OpenStack

Once the nodes are built, and once puppet has finished running on them (watch /var/log/syslog on the build server), you should be able to log in to the OpenStack Horizon interface:

http://ip-of-your-control-node/horizon/ (user: admin, password: Cisco123, unless you changed the defaults in the site.pp file)

You will still need to log in to the console of the control node to load an image (user: localadmin, password: ubuntu). If you su to root, there is an openrc auth file in root's home directory, and you can run the test script at /tmp/nova_test.sh.
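
As a rough example of loading a first image from the control node's console (the image name and URL are placeholders only; the bundled /tmp/nova_test.sh may already do something similar):

sudo -i                     # become root; openrc lives in root's home directory
source ~/openrc
wget http://download.cirros-cloud.net/0.3.0/cirros-0.3.0-x86_64-disk.img
glance image-create --name cirros --is-public True --container-format bare \
  --disk-format qcow2 < cirros-0.3.0-x86_64-disk.img
nova image-list             # the new image should appear once the upload completes
bash /tmp/nova_test.sh      # optional: run the bundled smoke test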
