OpenStack: Icehouse: 2-Role



Cisco OpenStack Installer

Before you begin

Please read the release notes for the release you are installing. The release notes contain important information about limitations, features, and how to use snapshot repositories if you're installing an older release.

Build Node Configuration

To create the build node, follow the build node setup instructions from the Cisco OpenStack Installer Deployment Guide.

After the script completes, do the following:

  • Update /etc/puppet/data/role_mappings.yaml to reflect your short hostnames for build, control and compute roles.


build-server: build
control-server01: controller
compute-server01: compute
compute-server02: compute
  • Configure the data model to reflect your topology, as described next.

Configuration and Customization

At this point the data model has been installed, and any required customizations should be made to it. Listed below are the specific changes required for the 2-role setup using the ML2 plugin with the Open vSwitch driver.

In /etc/puppet/data/hiera_data/user.common.yaml

Required Config

######### Node Addresses ##############
# Change the following to the short host name you have given your build node.
# This name should be in all lower case letters due to a Puppet limitation.
build_node_name: build-server

# Change the following to the host name you have given to your control
# node.  This name should be in all lower case letters due to a Puppet
# limitation.
coe::base::controller_hostname: control-server

# The IP address to be used to connect to Horizon and external
# services on the control node.  In the compressed_ha or full_ha scenarios,
# this will be an address to be configured as a VIP on the HAProxy
# load balancers, not the address of the control node itself.

# The IP address used for management functions (such as monitoring)
# on the control node.  In the compressed_ha or full_ha scenarios, this will
# be an address to be configured as a VIP on the HAProxy
# load balancers, not the address of the control node itself.

# Controller public url
controller_public_url: ""

# Controller admin url
controller_admin_url: ""

# Controller internal url
controller_internal_url: ""

Connectivity Config

# This domain name will be the name your build and compute nodes use for the
# local DNS.  It doesn't have to be the name of your corporate DNS - a local
# DNS server on the build node will serve addresses in this domain - but if
# it is, you can also add entries for the nodes in your corporate DNS
# environment; they will be usable *if* the above addresses are routable
# from elsewhere in your network.

########### NTP Configuration ############
# Change this to the location of a time server or servers in your
# organization accessible to the build server.  The build server will
# synchronize with this time server, and will in turn function as the time
# server for your OpenStack nodes.

# Control node interfaces.
# internal_ip will be used for the ovs local_ip setting for GRE tunnels.

# This sets the IP for the private (internal) interface of controller nodes
# (which is predefined already in $controller_node_internal) and the internal
# interface for compute nodes.  It is generally also the IP address
# used in Cobbler node definitions.
internal_ip: "%{ipaddress_eth3}"

# The external_interface is used to provide a Layer2 path for
# the l3_agent external router interface.  It is expected that
# this interface be attached to an upstream device that provides
# a L3 router interface, with the default router configuration
# assuming that the first non "network" address in the external
# network IP subnet will be used as the default forwarding path
# if no more specific host routes are added.
external_interface: eth2

# The public_interface will have an IP address reachable by
# all other nodes in the openstack cluster.  This address will
# be used for API Access, for the Horizon UI, and as an endpoint
# for the default GRE tunnel mechanism used in the OVS network
# configuration.
public_interface: eth1

# The interface used for VM networking connectivity.  This will usually
# be set to the same interface as public_interface.
private_interface: eth1
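The `%{ipaddress_eth3}` value above uses Hiera fact interpolation: at catalog compile time, Puppet substitutes the node's `ipaddress_eth3` fact for the token. A minimal sketch of that substitution (the fact values below are hypothetical examples, not from a real node):

```python
import re

def interpolate(value, facts):
    """Replace %{fact_name} tokens with values from a facts dict,
    mimicking Hiera's fact interpolation."""
    return re.sub(r"%\{(\w+)\}", lambda m: facts[m.group(1)], value)

# Hypothetical facts, as Facter might report them on a control node
facts = {"ipaddress_eth3": "10.0.0.11",
         "fqdn": "control-server01.example.com"}

print(interpolate("%{ipaddress_eth3}", facts))  # -> 10.0.0.11
print(interpolate("%{fqdn}", facts))            # -> control-server01.example.com
```

This is why the interface names in this file must match the interfaces that actually exist on the node: if `eth3` is absent, the fact is empty and the interpolated IP is too.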

Cobbler Configuration

If the build node runs a Cobbler server, modify these parameters to match your testbed.

### Cobbler config
# The IP address of the node on which Cobbler will be installed and
# on which it will listen.

# The subnet address of the subnet on which Cobbler should serve DHCP
# addresses.
node_subnet: ''

# The netmask of the subnet on which Cobbler should serve DHCP addresses.
node_netmask: ''

# The default gateway that should be provided to DHCP clients that acquire
# an address from Cobbler.
node_gateway: ''

# The admin username and crypted password used to authenticate to Cobbler.
admin_user: localadmin
password_crypted: $6$UfgWxrIv$k4KfzAEMqMg.fppmSOTd0usI4j6gfjs0962.JXsoJRWa5wMz8yQk4SfInn4.WZ3L/MCt5u.62tHDGB36EhiKF1

# Cobbler can instruct nodes being provisioned to start a Puppet agent
# immediately upon bootup.  This is generally desirable as it allows
# the node to immediately begin configuring itself upon bootup without
# further human intervention.  However, it may be useful for debugging
# purposes to prevent Puppet from starting automatically upon bootup.
# If you want Puppet to run automatically on bootup, set this to true.
# Otherwise, set it to false.
autostart_puppet: true

# Cobbler installs a minimal install of Ubuntu. Specify any additional
# packages which should be part of the base install on top of the minimal
# install set of packages
packages: 'openssh-server vim vlan lvm2 ntp'

# If you are using Cisco UCS servers managed by UCSM, set the port on
# which Cobbler should connect to UCSM in order to power nodes off and on.
# If set to 443, the connection will use SSL, which is generally
# desirable and is usually enabled on UCS systems.
ucsm_port: 443

# The name of the hard drive on which Cobbler should install the operating
# system.
install_drive: /dev/sda
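The `password_crypted` value above is a standard Unix crypt(3) hash: `$6$` marks the SHA-512 scheme, followed by the salt and the digest, each separated by `$`. On most Linux systems you can generate one with `mkpasswd -m sha-512` or `openssl passwd -6`. A small sketch that splits such a string into its fields (it assumes the common three-field form without a `rounds=` parameter):

```python
def parse_crypt(crypted):
    """Split a $id$salt$digest crypt(3) string into its three fields.
    Assumes no optional rounds= field is present."""
    _, scheme_id, salt, digest = crypted.split("$")
    schemes = {"1": "MD5", "5": "SHA-256", "6": "SHA-512"}
    return schemes[scheme_id], salt, digest

# The sample hash from the config above
sample = ("$6$UfgWxrIv$k4KfzAEMqMg.fppmSOTd0usI4j6gfjs0962."
          "JXsoJRWa5wMz8yQk4SfInn4.WZ3L/MCt5u.62tHDGB36EhiKF1")

scheme, salt, digest = parse_crypt(sample)
print(scheme, salt)  # -> SHA-512 UfgWxrIv
```

Replacing the sample hash with one generated from your own password is strongly recommended before deploying.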


### The following are passwords and usernames used for
### individual services.  You may wish to change the passwords below
### in order to better secure your installation.
cinder_db_password: cinder_pass
glance_db_password: glance_pass
keystone_db_password: key_pass
nova_db_password: nova_pass
network_db_password: quantum_pass
database_root_password: mysql_pass
cinder_service_password: cinder_pass
glance_service_password: glance_pass
nova_service_password: nova_pass
ceilometer_service_password: ceilometer_pass
admin_password: Cisco123
admin_token: keystone_admin_token
network_service_password: quantum_pass
rpc_password: openstack_rabbit_password
metadata_shared_secret: metadata_shared_secret
horizon_secret_key: horizon_secret_key
ceilometer_metering_secret: ceilometer_metering_secret
ceilometer_db_password: ceilometer
heat_db_password: heat
heat_service_password: heat_pass
heat::engine::auth_encryption_key: 'notgood but just long enough i think'

# Set this parameter to use a single secret for the Horizon secret
# key, neutron agents, Nova API metadata proxies, swift hashes,etc.
# This prevents you from needing to specify individual secrets above,
# but has some security implications in that all services are using
# the same secret (creating more vulnerable services if it should be
# compromised).
secret_key: secret

# Set this parameter to use a single password for all the services above.
# This prevents you from needing to specify individual passwords above,
# but has some security implications in that all services are using
# the same password (creating more vulnerable services if it should be
# compromised).
password: password123

Disk Partitioning

When deploying nodes with Cobbler, you can control some aspects of disk partitioning by placing the following directives in /etc/puppet/data/hiera_data/user.common.yaml. If you do not want /var separated from /, set enable_var to false. If you do not want extra disk space set aside in an LVM volume for Cinder volumes served via iSCSI, set enable_vol_space to false (you likely want this set to true if you want to use iSCSI volumes on compute nodes). You can specify the minimum sizes of the /var and / partitions using the var_part_size and root_part_size directives, respectively; the values are in megabytes.

#### Disk partitioning options ###########
# expert_disk is needed for fine-grained configuration of drive layout
# during Cobbler-driven installs. It is also required when installing
# on large drives that require GPT
expert_disk: true

# The /var directory is where logfiles and instance data are stored
# on disk.  If you wish to have /var on its own partition (considered
# a best practice), set enable_var to true.
enable_var: true

# The Cinder volume service can make use of unallocated space within
# the "cinder-volumes" volume group to create iSCSI volumes for
# export to instances.  If you wish to leave free space for volumes
# and not preallocate the entire install drive, set enable_vol_space
# to true.
enable_vol_space: true

# Use the following two directives to set the size of the / and /var
# partitions, respectively.  The var_part_size directive will be ignored
# if enable_var is not set to true above.
root_part_size: 65536
var_part_size: 432000
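Since the values are in megabytes, the sample settings above reserve 64 GB for / and roughly 422 GB for /var. A quick sanity check of how much drive space a given pair of values claims (a simple illustration, not part of the installer):

```python
def min_drive_mb(root_part_size, var_part_size, enable_var=True):
    """Minimum space (MB) claimed by the fixed partitions.
    var_part_size is ignored when enable_var is false, matching
    the behavior of the directives above."""
    return root_part_size + (var_part_size if enable_var else 0)

total = min_drive_mb(65536, 432000)
print(total, "MB =", total / 1024, "GB")  # -> 497536 MB = 485.875 GB
```

Make sure the drive named in install_drive is at least this large, plus whatever free space you want left in the volume group when enable_vol_space is true.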

Advanced Options

The following options can be provided to your build node to enable special features during baremetal provisioning. They can be placed in a host override file (/etc/puppet/data/hiera_data/hostname/INSERT_YOUR_HOSTNAME_HERE.yaml) or in /etc/puppet/data/hiera_data/user.common.yaml.

To install a specific kernel package and set it to be the kernel booted by default, set this directive:

load_kernel_pkg: 'linux-image-3.2.0-51-generic'

To specify command-line options that should be passed to the kernel at boot time, set this directive:

kernel_boot_params: 'quiet splash elevator=deadline'

Note: using "elevator=deadline" is currently recommended for those using iSCSI volumes in Cinder, as some I/O issues have been reported with the default elevator.

To set the timezone on the clock for nodes booted via Cobbler, set this directive:

time_zone: US/Eastern

ML2 Related Config Changes

# For ML2, uncomment this line
  - ''
  - ''
  - ''
  - ''

# ml2 hack. Current packages have ml2 enabled, even if you do not
# use the driver. This param corrects the configuration files.
# This is enabled by default. If you are using ml2, change to false.
disableml2: false

# For ML2, uncomment this line
neutron::core_plugin: 'ml2'

Based on the type of deployment you have, make the following changes in the specific scenario file found under data/scenarios/ (2_role, 3_role, all_in_one, compressed_ha, full_ha, swift). For the HA deployments, the change shown below needs to be made in additional class groups as well; comments in the files indicate where. Make those additional changes too.

      - coe::base
      - "nova::%{rpc_type}"
      - glance_all
      - keystone_all
      - cinder_controller
      - nova_controller
      - horizon
      - ceilometer_controller
      - heat_all
      - "%{db_type}_database"
#      - network_controller
# For ML2, Uncomment this and comment above line
      - network_controller_ml2
      - test_file
      - coe::base
      - cinder::setup_test_volume
#      - nova_compute
# For ML2, Uncomment this and comment above line
      - nova_compute_ml2
      - cinder_volume
      - ceilometer_compute


You will also need to map the roles in your selected scenario to hostnames. This is done in /etc/puppet/data/role_mappings.yaml:

vi /etc/puppet/data/role_mappings.yaml
control-server: controller
control-server01: controller
control-server02: controller
control-server03: controller

compute-server: compute
compute-server01: compute
compute-server02: compute
compute-server03: compute

all-in-one: all_in_one

build-server: build

load-balancer01: load_balancer
load-balancer02: load_balancer

swift-proxy01: swift_proxy
swift-proxy02: swift_proxy

swift-storage01: swift_storage
swift-storage02: swift_storage
swift-storage03: swift_storage
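role_mappings.yaml is a flat map of short hostname to role, so each entry is a simple key/value pair. The lookup the installer effectively performs can be sketched as follows (parsed by hand here only to stay dependency-free; the real file is ordinary YAML):

```python
def load_role_mappings(text):
    """Parse 'hostname: role' lines into a dict,
    skipping blank lines and comments."""
    mappings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        host, _, role = line.partition(":")
        mappings[host.strip()] = role.strip()
    return mappings

sample = """
control-server01: controller
compute-server01: compute
build-server: build
"""

roles = load_role_mappings(sample)
print(roles["compute-server01"])  # -> compute
```

The key point is that the hostname on the left must exactly match the node's short hostname (all lower case); a mismatch means the node gets no role and Puppet applies nothing.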

If you chose to set the hostname of your build node to something other than "build-server", you may also need to set up a few Cobbler-related directives in a host override file. In /etc/puppet/data/hiera_data/hostname/build-server.yaml you'll find several directives used by Cobbler:

# set my puppet_master_address to be fqdn
puppet_master_address: "%{fqdn}"
cobbler_node_ip: ''
node_subnet: ''
node_netmask: ''
node_gateway: ''
admin_user: localadmin
password_crypted: $6$UfgWxrIv$k4KfzAEMqMg.fppmSOTd0usI4j6gfjs0962.JXsoJRWa5wMz8yQk4SfInn4.WZ3L/MCt5u.62tHDGB36EhiKF1
autostart_puppet: true
ucsm_port: 443
install_drive: /dev/sda
#ipv6_ra: 1
#interface_bonding = 'true'

Copy these into your /etc/puppet/data/hiera_data/hostname/[insert your build node hostname here].yaml file and set them to appropriate values for your build node.

The sample config files for a 2_role ml2+ovs plugin using gre tunnel can be found here

Cobbler Configuration

The 2_role and full_ha scenarios configure a build server which provides puppetmaster and, optionally, cobbler bare metal deployment services for managing the remaining nodes in the OpenStack cluster. To use cobbler, you will need to set up role mappings in /etc/puppet/data/role_mappings.yaml as previously described. You will also need to provide basic hardware details about each of the hardware nodes being managed by this instance of cobbler, including the MAC addresses of the network boot interfaces, machine host names, machine IP addresses, and management account information for the UCSM or CIMC management of the nodes. Define these parameters by editing the /etc/puppet/data/cobbler/cobbler.yaml file:

When editing the file, keep in mind that it has four major sections: preseed, profile, node-global, and one or more individual node definitions.

Cobbler preseed

Cobbler uses IPMI for power management of UCS C-Series servers.

The Cobbler preseed stanza in cobbler.yaml defines parameters that customize the preseed file that is used to install and configure Ubuntu on the servers. Parameters you might need to adjust in here include things like default repos to include in hosts, and where these repos are located.

  repo: " icehouse main"

Cobbler profile

The Cobbler profile stanza in cobbler.yaml defines options that specify the cobbler profile parameters to apply to the servers. This section typically will not require customization.

Icehouse is deployed over Ubuntu Trusty by default. Make sure you point the log_host parameter to your build server's IP address.

  name: "trusty"
  arch: "x86_64"
  kopts: "log_port=514 \
priority=critical \
local=en_US \
log_host= \

  name: "precise"
  arch: "x86_64"
  kopts: "log_port=514 \
priority=critical \
local=en_US \
log_host= \

Cobbler node-global

The Cobbler node-global stanza in cobbler.yaml specifies various configuration parameters which are common across all servers in the cluster, such as gateway addresses, netmasks, and DNS servers. Power parameters which are standardized are also included in this stanza. You will likely need to change several parameters in this section, including power-type, power-user, power-pass, get_nameservers, get_ipaddress, get_gateway, no_default_route, partman-auto, and others.

  #power_type should be ipmilan for CIMC, or ucs for UCS-M
  power_type: "ipmilan"
  power_user: "admin"
  power_pass: "password"
  kickstart: "/etc/cobbler/preseed/cisco-preseed"
  kopts: "netcfg/get_nameservers= \
netcfg/confirm_static=true \
netcfg/get_ipaddress={$eth0_ip-address} \
netcfg/get_gateway= \
netcfg/disable_autoconfig=true \
netcfg/dhcp_options=\"Configure network manually\" \
netcfg/no_default_route=true \
partman-auto/disk=/dev/sda \
netcfg/get_netmask= \

For security the default deployment assumes no default route for individual OpenStack nodes, with the build server used as a jump box to access them (and any VM access done through provider networks extended by bridging to the VMs on the OpenStack nodes). If you need a default route on the OpenStack infrastructure nodes, be sure to delete the line above which says not to add a default route:

netcfg/no_default_route=true \

Cobbler node definitions

Each individual node being managed by Cobbler is listed as a separate node definition. The node definition for each host defines, at a minimum, the hostname and interface configuration information for each server, as well as any other parameters which aren't defined in node-global. Create one stanza here for each node, using a format like this example.

  hostname: ""
  power-address: ""
      mac-address: "a1:bb:cc:dd:ee:ff"
      dns-name: ""
      ip-address: ""
      static: "0"
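Typos in mac-address entries are a common cause of PXE boot failures, since Cobbler matches nodes by MAC address. A quick format check you can run before committing cobbler.yaml (the sample address is the one from the stanza above):

```python
import re

# Colon-separated 48-bit MAC, e.g. "a1:bb:cc:dd:ee:ff"
MAC_RE = re.compile(r"^([0-9a-f]{2}:){5}[0-9a-f]{2}$", re.IGNORECASE)

def valid_mac(mac):
    """True if mac looks like a colon-separated MAC address."""
    return bool(MAC_RE.match(mac))

print(valid_mac("a1:bb:cc:dd:ee:ff"))  # -> True
print(valid_mac("a1:bb:cc:dd:ee"))     # -> False (one octet short)
```

Note that Cobbler expects the colon-separated form shown above, not the dash-separated form some switch CLIs print.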

Other Nodes

After setting up the build node, all other nodes can be deployed via one of two methods.

First, if you intend to use Cobbler to perform baremetal provisioning of the node, you can use the script located in scripts/:

cd /root/puppet_openstack_builder/scripts
bash node01

Doing so will clean out any prior puppet certificates, enable netboot for the node in question, and power cycle the node. When the machine reboots, it will start a PXE install of the baremetal operating system. Once the operating system is installed, the machine will reboot again and a puppet agent will be started. The agent will immediately commence installing OpenStack.

If you have pre-provisioned your server and don't wish to use Cobbler for baremetal provisioning, you can run the following commands on the node instead:

apt-get install -y git
cd /root
git clone
cd puppet_openstack_builder
git checkout i.1
cd install-scripts
export build_server_ip=<YOUR_BUILD_NODE_IP>
puppet agent -td --server=<YOUR_BUILD_NODE_FQDN> --pluginsync

This will set up Puppet and then perform a catalog run. If you already have Puppet 3.2 installed, or have a repository from which it can be installed already set up on the node, you can simply run the script directly from curl rather than cloning the repository first:

apt-get install -y curl
export build_server_ip=<YOUR_BUILD_NODE_IP>
bash <(curl \-fsS
puppet agent -td --server=<YOUR_BUILD_NODE_FQDN> --pluginsync

Note: The setup script should only be run once. If you want to force an update after the initial run, simply restart the Puppet agent to perform another catalog run (use service puppet restart if Puppet is set up to run automatically, or puppet agent -td --server=<YOUR_BUILD_NODE_FQDN> --pluginsync if it isn't).

Global Parameters

A couple of parameters are defined in a "global" parameter hierarchy separate from the user.$scenario.yaml files which contain most user settings. These global parameters take precedence over any user parameters, and are intended to set global "defaults" for topological choices like:

  • what storage backend do I use for Cinder?
  • what storage backend do I use for Glance?
  • what tunneling technology do I use for network segregation?

These settings are found in configuration files under data/global_hiera_params

Selecting a Cinder backend

In COI deployments, Cinder can be backed by either Ceph or iSCSI block storage. Note that Ceph is generally the preferred choice for HA installations.

To choose Ceph, edit /etc/puppet/data/global_hiera_params/common.yaml and change the cinder_backend setting to rbd (and also see further instructions below about configuring Ceph):

cinder_backend: rbd

To choose iSCSI, edit /etc/puppet/data/global_hiera_params/common.yaml and change the cinder_backend setting to iscsi:

cinder_backend: iscsi

Selecting a Glance backend

In COI deployments, Glance can be backed by local file storage, Swift object storage, or Ceph object storage. Either Swift or Ceph is an appropriate choice for HA installations.

To choose Swift, edit /etc/puppet/data/global_hiera_params/common.yaml and change the glance_backend setting to swift:

glance_backend: swift

To choose Ceph, edit /etc/puppet/data/global_hiera_params/common.yaml and change the glance_backend setting to rbd (and also see further instructions below about configuring Ceph):

glance_backend: rbd

To choose local file storage, edit /etc/puppet/data/global_hiera_params/common.yaml and change the glance_backend setting to file:

glance_backend: file
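Because these keys accept only a small set of values, a typo in common.yaml can fail in confusing ways at catalog time. A small sketch that validates the two settings against the choices described above (the allowed sets come directly from this section):

```python
# Allowed values, per the Cinder and Glance backend sections above
VALID_BACKENDS = {
    "cinder_backend": {"rbd", "iscsi"},
    "glance_backend": {"file", "swift", "rbd"},
}

def check_backends(settings):
    """Return a list of error strings for unknown backend values."""
    errors = []
    for key, allowed in VALID_BACKENDS.items():
        value = settings.get(key)
        if value not in allowed:
            errors.append(f"{key}={value!r} not in {sorted(allowed)}")
    return errors

print(check_backends({"cinder_backend": "rbd",
                      "glance_backend": "swift"}))  # -> []
print(check_backends({"cinder_backend": "lvm",
                      "glance_backend": "file"}))
```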

Advanced network topologies

In COI HA deployments, global settings for network topologies are defined in scenario-specific files. data/global_hiera_params/scenario/compressed_ha.yaml sets defaults for compressed_ha deployments while data/global_hiera_params/scenario/full_ha.yaml sets defaults for full_ha deployments.

The default values set in these files work for the defined "reference" topologies targeted by COI. These files contain several settings that may need changing in more complex network configurations, however. Examples include choice of segregation technology, or type of network plugin used.

Adding Ceph to Your OpenStack Nodes

Adding Ceph services to a node is straightforward. The core Ceph configuration data is in /etc/puppet/data/hiera_data/user.common.yaml, where you can configure the particulars of your cluster. Most items can keep their defaults, but you will likely need to modify all the networking options.

Once you've modified this file, you will need to create a hostname override file for your target server in /etc/puppet/data/hiera_data/hostname/ (ceph01.yaml is provided as an example). Here you specify the disks to use as OSDs on the target node. You don't need to add any data here if you are only adding a monitor to the target server.

There are three Ceph class groups: ceph_mon, ceph_osd, and ceph_all; ceph_all aggregates ceph_mon and ceph_osd. You can add Ceph services to a particular role by adding these to the target role configuration file. For example, if you want all compute nodes to offer OSD storage, add the line ceph_osd to the compute.yaml file in /etc/puppet/data/classgroups. To add services only to specific servers that are a subset of a larger scenario, clone the existing scenario under a new name and add the Ceph services to the new scenario; then configure your node(s) in role_mappings.yaml to use the new scenario.

Once this is complete, the next puppet run on the target servers will bring your cluster up and online.

  • There is one caveat: only one initial Ceph monitor can be specified in user.common.yaml. This ensures that the node running the primary mon is considered primary and comes up first. Additional mons and OSDs are then added during their respective hosts' puppet runs.

To configure Cinder and Glance to use Ceph for storage, you will also need to configure the following. This can be done independently of the cluster deployment process:

In /etc/puppet/data/global_hiera_params/common.yaml (or, if you are using compressed HA, /etc/puppet/data/global_hiera_params/scenario/compressed_ha.yaml):

cinder_backend: rbd
glance_backend: rbd

In /etc/puppet/data/hiera_data/cinder_backend/rbd.yaml:

cinder::volume::rbd::rbd_pool: 'volumes'
cinder::volume::rbd::glance_api_version: '2'
cinder::volume::rbd::rbd_user: 'admin'
# keep this the same as your ceph_monitor_fsid
cinder::volume::rbd::rbd_secret_uuid: 'e80afa94-a64c-486c-9e34-d55e85f26406'
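The rbd_secret_uuid must be a well-formed UUID matching the secret defined in libvirt (and, per the comment above, your ceph_monitor_fsid). The format can be verified with the standard library before a puppet run:

```python
import uuid

def is_valid_uuid(value):
    """True if value parses as a canonical hyphenated UUID,
    the format libvirt and Cinder expect."""
    try:
        return str(uuid.UUID(value)) == value.lower()
    except ValueError:
        return False

# The sample value from the config above
print(is_valid_uuid("e80afa94-a64c-486c-9e34-d55e85f26406"))  # -> True
```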

In /etc/puppet/data/hiera_data/glance_backend/rbd.yaml:

glance::backend::rbd::rbd_store_user: 'admin'
glance::backend::rbd::rbd_store_ceph_conf: '/etc/ceph/ceph.conf'
glance::backend::rbd::rbd_store_pool: 'images'
glance::backend::rbd::rbd_store_chunk_size: '8'

You can add disks on OSD hosts by creating host override files in /etc/puppet/data/hiera_data/hostname/${short_hostname_of_your_data_node}.yaml like this:

# has_compute must be set for any server running nova compute
# nova uses the secret from virsh
cephdeploy::has_compute: true

# These are the disks for this particular host that you wish to use as OSDs.
# Specifying disks here will DESTROY any data on them during the first puppet run.
  - sdb
  - sdc

Having trouble? If a node failed to set up its OSDs properly, follow these steps on that node:

ceph-deploy uninstall $HOST
ceph-deploy purge $HOST
ceph-deploy purgedata $HOST
dd if=/dev/zero of=/dev/yourdisk count=200 bs=1M
(do the above to each OSD disk)
ceph-deploy disk zap host:disk
userdel -r cephdeploy
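If you need to repeat that cleanup on several hosts, the steps can be scripted. The sketch below only assembles the command list for a host and its OSD disks rather than executing anything; the commands, hostnames, and disk names are the ones from the steps above, and actually running them is destructive:

```python
def ceph_cleanup_commands(host, disks):
    """Return the OSD cleanup commands from the steps above, in order.
    Dry-run helper: builds strings only, executes nothing."""
    cmds = [
        f"ceph-deploy uninstall {host}",
        f"ceph-deploy purge {host}",
        f"ceph-deploy purgedata {host}",
    ]
    for disk in disks:
        # Zero the start of each OSD disk, then zap it
        cmds.append(f"dd if=/dev/zero of=/dev/{disk} count=200 bs=1M")
        cmds.append(f"ceph-deploy disk zap {host}:{disk}")
    cmds.append("userdel -r cephdeploy")
    return cmds

for cmd in ceph_cleanup_commands("compute-server01", ["sdb", "sdc"]):
    print(cmd)
```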

Once those steps are done, then run the puppet agent again.

Configuring Nova for Migrations

Migration enables an administrator to move a virtual machine instance from one compute host to another. This feature is useful when a compute host requires maintenance. Migration can also be useful to redistribute the load when many VM instances are running on a specific physical machine.

The migration types are:

Migration (or non-live migration). The instance is shut down (and the instance knows that it was rebooted) for a period of time to be moved to another hypervisor.

Live migration (or true live migration). Almost no instance downtime. Useful when the instances must be kept running during the migration.

The types of live migration are:

  • Shared storage-based live migration. Both hypervisors have access to shared storage.
  • Block live migration. No shared storage is required.
  • Volume-backed live migration. When instances are backed by volumes rather than ephemeral disk, no shared storage is required, and migration is supported (currently only in libvirt-based hypervisors).

Currently, NFS-based migration and true live migration are available, with a few caveats.

Ceph volume-backed true live migration is still a WIP.

Configuring Keystone with SSL

To enable SSL in Keystone:

  • Make sure you have SSL certificates generated, either self-signed or signed by a CA.

If you want to generate your own certificates you can follow the instructions here.

  • To begin with, do your deployment without SSL.
  • Copy your certificates onto the control nodes under /etc/keystone/ssl. Make sure the files are owned by the keystone user and group.
  • Then, on the build node, update user.common.yaml with the following settings.
  • Set the protocol to https:
controller_public_protocol: 'https'
  • Enable SSL and update your certificate paths:
enable_ssl: true
ssl_certfile: '/etc/keystone/ssl/certs/server.pem'

ssl_keyfile:  '/etc/keystone/ssl/private/serverkey.pem'
ssl_ca_certs: '/etc/keystone/ssl/certs/cacert.pem'
ssl_ca_key: '/etc/keystone/ssl/private/ca.key'
ssl_cert_subject: '/C=US/ST=Unset/L=Unset/O=Unset/CN='

  • On your control nodes, rerun the puppet agent. You should now have SSL enabled in Keystone.
  • Note that this is still a work in progress, so it might not fully function in HA scenarios.

Troubleshooting, Known Issues, and Common Questions

For Troubleshooting information, including answers to common questions and installation problems, visit the OpenStack Troubleshooting Page.
