Revision as of 23:59, 13 May 2013
The directory structure of a COE install will look like this:
 /etc/puppet
 /etc/puppet/files
 /etc/puppet/manifests
 /etc/puppet/modules
 /etc/puppet/templates
=== /etc/puppet ===
This directory is created when puppet is installed. It should essentially contain this repository, checked out on the appropriate branch (usually multi-node). This repo contains the manifests and templates directories where part of COE will reside.
=== /etc/puppet/manifests ===
Contains site.pp, which is used to customise the install in common ways, such as configuring the network settings and defining which nodes to manage. site.pp is internally documented and will be different for each site. This directory also holds core.pp and cobbler_node.pp: core.pp provides a clean interface between the Openstack puppet modules and the user-facing site.pp; cobbler_node.pp is specifically targeted at managing the cobbler module. The scripts in the directory are helpers that perform the following functions:
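As a rough sketch of the kind of customisation site.pp carries (the variable and node names below are illustrative placeholders only; the real names are documented inside site.pp itself and differ between COE releases):

```puppet
# Hypothetical site.pp fragment -- names are examples, not the actual settings
$domain_name = 'example.com'       # assumed setting name
$ntp_server  = '192.168.100.1'     # assumed setting name

# Define which nodes to manage, matched against their puppet certnames.
# Class declarations for each role go in the node body.
node 'build-server.example.com' { }
node 'control01.example.com' { }
```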
clean_node.sh: wipes the puppet cert of a node and sets it to install on next boot. Usage:
puppet-modules.sh: installs all the modules in modules.list via apt. Usage:
reset_build_node.sh: cleans up the build node so that a subsequent puppet apply will test with a (roughly) clean slate; purges installed packages, removes var directories and some config files. Usage:
reset_nodes.sh: removes all nodes from the cobbler db and then runs puppet to insert them again. Usage:
=== /etc/puppet/templates ===
Contains a template for /etc/network/interfaces. Sites with complex networks can modify this template to meet their requirements. It is applied via a late command and is not directly managed by puppet.
The second file in this directory can be used to move IP addresses between physical NICs and Openvswitch ports; it is not needed in the majority of cases. It is controlled by numbered_vs_port in site.pp.
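The toggle mentioned above lives in site.pp; a minimal sketch (whether it defaults to on or off is site-specific):

```puppet
# site.pp: enable moving IP addresses from physical NICs to Openvswitch ports
# (needed only for some network layouts; see the comments in site.pp itself)
$numbered_vs_port = true
```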
=== /etc/puppet/modules ===
Either of these locations can be used to house the puppet modules. The COE apt packages will install them to
apache, apt, apt-cacher-ng, cobbler, coe, collectd, concat, corosync, dnsmasq, drbd, glance, graphite, horizon, inifile, keystone, memcached, monit, mysql, naginator, nova, ntp, openstack, openstack_admin, pip, puppet, quantum, rabbitmq, rsync, ssh, stdlib, sysctl, vswitch, xinetd
== Understanding the install sequence ==
Because COE handles both provisioning of the base OS and deployment of applications, the install sequence for a node is quite long. The install takes the following steps:
# COE puppet modules and manifests are installed on the build node from either git or apt
# Puppet apply is run on the build node, which does the following:
## Cobbler is installed on the build node
## The Ubuntu 12.04 install image is loaded into cobbler using
## The out-of-band information for the target node is inserted into the cobbler database
## A preseed file is created to automate the install, which has the following in the late command:
### Sets puppet to run after the node has booted, depending on whether autostart_puppet has been set
### Sets the puppetmaster address and sync interval in
### Syncs to the build node ntp server
### Optionally disables IPv6 router advertisement
### Optionally installs the ethernet bonding module
### Applies the network interface template /etc/puppet/templates/interfaces.erb in the late command
### HTTP posts to cobbler on the build node to say that the install completed successfully
### HTTP posts to cobbler on the build node to request no more install boots (so the machine will receive a PXE command to boot from local disk instead)
# The target node is rebooted using the clean_node.sh script on the build node, or by hand, which will install Ubuntu and run everything in step 2.4
# The target node will finish the install, reboot, then boot into the newly installed OS
# If autostart_puppet has been set, the node will run puppet agent and install everything needed for either a control or a compute node
Most of the complexity here is tied up in the late command. If you need to add a system module and want it to be available as soon as the system reboots then this is the place to put it. The easiest way to do this is by modifying cobbler_node.pp and adding lines to the late command there. This is an example of a general guideline: try to avoid modifying the puppet modules, and instead change things from the manifests folder where possible.
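The pattern described above can be sketched as follows. This is a hypothetical fragment only: the variable name and the exact way cobbler_node.pp assembles the late command are assumptions, but it illustrates the shape of an in-target step added so a package is present on the target node's first boot:

```puppet
# Hypothetical sketch -- $late_command is an assumed name, not the actual
# variable used in cobbler_node.pp. Each line runs inside the installer's
# late command, so in-target is used to act on the installed system.
$late_command = "
  in-target apt-get -y install bridge-utils ; \
  in-target sh -c 'echo bonding >> /etc/modules' ;
"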
=== apache ===
[DEPRECATED] Manages the apache http daemon. Apache is used by Horizon, puppetmaster and Graphite among other things, but is handled by requiring the apache package and creating site-enabled entries instead of using the module.
=== apt ===
The base node as defined in core.pp defines an apt::source which contains the PGP key for Cisco's Openstack and puppet packages. This means every node has access to the Cisco apt repo.
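A generic sketch of such a declaration, following the puppetlabs-apt module's apt::source pattern of that era (the repo URL and key ID here are placeholders, not the real Cisco values, which are set in core.pp):

```puppet
# Placeholder values -- the real location and PGP key are set in core.pp
apt::source { 'cisco-openstack-repo':
  location   => 'http://repo.example.com/openstack',  # placeholder URL
  release    => 'precise',
  repos      => 'main',
  key        => 'DEADBEEF',                 # placeholder PGP key ID
  key_server => 'keyserver.ubuntu.com',
}
```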
=== apt-cacher-ng ===
This manages the apt-cacher-ng daemon which greatly accelerates the install process by eliminating the need for all nodes to install from the internet. The build node runs the apt cacher, which is defined in
=== cobbler ===
The cobbler module is used to install and maintain the core functionality of the build node: deploying servers. The cobbler module is configured via cobbler-node.pp in the manifests folder. The module itself is not very mature, and it is conceivable that an advanced developer may need to customise this module in order to change some part of the node install process. A good example would be installing 32-bit Ubuntu instead of 64-bit, which would require a different arch to be passed into the ubuntu class in the cobbler module, so that cobbler-ubuntu-import will bring in the correct install image. Cobbler also manages dhcp, dnsmasq, PXE, tftp and some http services. The module itself is quite barebones and should be easy to extend if needed.
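Based on the description above, the 32-bit case would look something like the following. Both the class path and the parameter name are inferred from the text rather than verified against the module:

```puppet
# Inferred sketch: class and parameter names come from the description above,
# not from the cobbler module's actual interface
class { 'cobbler::ubuntu':
  arch => 'i386',   # instead of the default 64-bit arch
}
```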
=== coe ===
This is a very small module that adds a web page on the build server with links to other services such as Horizon, Nagios and Graphite.
=== collectd ===
Collectd is a metrics collection system. This module will install the collectd client, and point the client at the graphite server (on the build node).
=== concat ===
This is a puppet module for constructing files out of fragments. It is used by the glance and keystone modules.
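The fragment-assembly pattern that the glance and keystone modules rely on looks like this (the file path and fragment contents below are illustrative, not taken from either module):

```puppet
# Illustrative use of the concat module: build one file from ordered fragments
concat { '/etc/example/service.conf':
  owner => 'root',
  group => 'root',
  mode  => '0644',
}

concat::fragment { 'service-header':
  target  => '/etc/example/service.conf',
  content => "# Managed by puppet -- do not edit\n",
  order   => '01',
}

concat::fragment { 'service-body':
  target  => '/etc/example/service.conf',
  content => "workers = 4\n",
  order   => '10',
}
```

Fragments are sorted by their order key, so independent classes can each contribute a piece of the same file without coordinating beyond the target name.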
=== corosync ===
[DEPRECATED] Corosync is used by the openstack_admin class to provide HA services to the controller.