OpenStack: Icehouse Installer Overview


About the Cisco OpenStack Installer Project

Cisco OpenStack Installer (Cisco OSI) is Cisco's reference implementation of OpenStack, provided by Cisco free of charge and as open source software for the community. This guide documents the Cisco OSI Icehouse release.

Release Schedule and Policies

The release schedule of Cisco OSI parallels the community release. Where possible, Cisco OSI provides unmodified OpenStack code. Every new release of Cisco OSI follows the latest community stable release; however, in some cases Cisco might provide more recent patches that have been accepted into the OpenStack stable branches, but have not yet become part of an OpenStack stable release.

The Cisco OSI code update policy is to contribute code upstream to the OpenStack project and to absorb patches into Cisco OpenStack Installer after they have been accepted upstream. Cisco deviates from this policy only when patches are unlikely to be reviewed and accepted upstream in time for a release or a customer deadline; in such cases, Cisco applies the patches to its repositories, submits them upstream, and replaces the local change with the upstream version once it is accepted. Cisco also uses and contributes to modules from other upstream sources, including Puppet Labs modules on StackForge.

Icehouse Release of Cisco OSI

The Icehouse release of Cisco OpenStack Installer continues the model-based approach introduced in the Havana release. Configuration parameters are edited in .yaml files and the configuration is applied using Puppet.

The Icehouse release contains several new features not present in the Havana release, including improved support for Modular Layer 2 (ML2) and a new base operating system, Ubuntu Linux 14.04 LTS (Trusty). See the release notes for details of enhancements and bug fixes.

Puppet and Cobbler Installation Automation Tools

Cisco OSI uses Puppet to install and configure OpenStack components and Hiera to model the deployment configuration. Cisco's core OpenStack Puppet modules are the community OpenStack core modules available at the time of release on StackForge, where Cisco actively contributes code and reviews (see About the Cisco OpenStack Installer Project).

Cisco OpenStack Installer installs and sets up Cobbler (for more information, see Cobbler) to provision the physical servers in the OpenStack cloud.

OpenStack Components

Supported Components

Cisco OpenStack Installer uses standard OpenStack components for most functions.

Cisco OSI deploys Nova compute nodes. By default, it uses Neutron for network services. Cisco OSI also installs the Keystone identity service, the Horizon OpenStack console, and, optionally, Heat for orchestration. You can also use Ceilometer for telemetry.

For object storage, you can choose to have Cisco OSI deploy either Ceph or Swift; either can be used as a backend for Glance. Ceph can provide block storage and can serve as a backend for Cinder or as a standalone storage service.

In some deployment scenarios, you can choose to have Cisco OSI deploy your OpenStack cloud with active/active high availability for all core functions and other important components. When deploying the high availability reference architecture, Cisco OSI deploys additional components such as MySQL with Galera (WSREP) replication, HAProxy, and Keepalived. You can also set up Advanced Message Queuing Protocol (AMQP) messaging with RabbitMQ clustering and mirrored queues.

To provide a system that can be managed after it is installed, Cisco OSI includes open source monitoring tools, such as Collectd and Graphite, as a reference monitoring system. These tools provide simple health monitoring and trending information for the physical nodes and important software services in the OpenStack cloud.

Unsupported Components

OpenStack incubator projects that have not been added to the core products are not supported in Cisco OSI.

Component Versions

The package inventories for each release show the version of each component in the release.

Cisco OSI is supplied in three repositories.

OpenStack Deployment

Deployment Scenarios

A scenario is a collection of roles that perform different tasks within an OpenStack cluster. For example, if you want an environment where a single node runs all services, the all_in_one scenario is suitable. If you want separate control and compute nodes, the 2_role scenario is the smallest possible scenario in terms of the number of distinct nodes. See Choosing a Deployment Scenario for descriptions of the scenarios that are included with the Icehouse release.
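
The chosen scenario is recorded in the Hiera data and referenced as the %{scenario} variable in the file hierarchy described later. As a minimal sketch, assuming the scenario name is set in /etc/puppet/data/config.yaml as in this release's data layout:

# /etc/puppet/data/config.yaml (path assumed from this release's layout)
scenario: 2_role   # name must match a scenario shipped with Cisco OSI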

Build Node

When you deploy OpenStack with Cisco OSI, you configure a build node (or build server) outside the OpenStack cluster to manage and automate the OpenStack software deployment. The build server is not a part of the OpenStack installation. After the build server is installed and configured, you use it as an out-of-band automation and management workstation to bring up, control, and reconfigure (if necessary) the nodes of the OpenStack cluster. The build node serves the following roles:

  • A Puppet Master server that deploys the software onto and manages the configuration of the OpenStack cluster. For more information about Puppet, see the Puppet Labs documentation.
  • A repository for a model of your installation scenario using the Scenario Node Terminus Puppet module and Puppet's Hiera tool to decouple node-specific data from your class descriptions.
  • A Cobbler installation server to manage the Pre-boot Execution Environment (PXE) boot that is used for rapid bootstrapping of the OpenStack cluster. For more information about Cobbler, see the Cobbler user documentation.
  • A monitoring server to collect statistics about the health and performance of the OpenStack cluster and to monitor the availability of the servers and services of the OpenStack cluster.
  • A cache for installing components to Puppet client nodes.

YAML Files and the Hiera Database

To set up and customize your Cisco OSI deployment, you will modify several YAML files in the Hiera database that controls the configuration of the deployment model. This section describes how those files define the model.

The deployment model is mostly defined in the following files:

  • /etc/puppet/hiera.yaml
    This file specifies the order in which files are read for configuration information. You can refer to this file to confirm that the correct configuration parameters have been assigned and have not been preempted by an earlier file in the hierarchy. You do not typically need to modify this file.
    Note: The :yaml: section identifies the root directory for the configuration files.
    The :hierarchy: section orders the files that are searched for configuration assignments. The files listed are searched first-to-last; the first occurrence found is used. The first several entries should resemble this example:
:hierarchy:
  - "hostname/%{hostname}"
  - "client/%{clientcert}"
  - user
  - jenkins
  - vendor/cisco_coi_user.%{scenario}
  - user.%{scenario}
  - vendor/cisco_coi_user.common
  - user.common
  ...
  • /etc/puppet/data/hiera_data/*.yaml
    These files contain the model information. They are searched in the order described in the previous bullet. You configure your model by adding and modifying information in these files. Most of the changes you make will be in one of two files: user.common.yaml and user.scenario.yaml, where "scenario" is the deployment scenario that you are using.
  • /etc/puppet/data/role_mappings.yaml
    This file defines the role for each node in your deployment. It is populated with default values for the supplied scenarios. You will need to modify this file to specify which role each node in your deployment uses (see the example after this list).
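
For illustration, a role_mappings.yaml for a small 2_role deployment might look like the following; the hostnames are placeholders, and each role name must match a role defined by your scenario:

# /etc/puppet/data/role_mappings.yaml (hostnames are examples)
build-server: build
control-server: controller
compute-server01: compute
compute-server02: compute

Site-wide settings go in user.common.yaml in the same way. The keys shown here are illustrative only, not a complete or authoritative list:

# /etc/puppet/data/hiera_data/user.common.yaml (illustrative keys)
domain_name: example.com
ntp_servers:
  - ntp.example.com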

Facter Variables

Puppet uses the Facter tool to collect system information into a set of variables called Puppet facts. These variables have the form %{fact_name}. For example, a configuration might capture the eth1 IP address in the variable %{ipaddress_eth1}. See Learning Puppet—Variables, Conditionals, and Facts.
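
For example, a Hiera data file can interpolate a fact so that each node resolves the value from its own hardware. A minimal sketch, where internal_ip is a hypothetical key used only for illustration:

# In a hiera_data YAML file; internal_ip is a hypothetical key
# Each node substitutes its own eth1 address for %{ipaddress_eth1}
internal_ip: "%{ipaddress_eth1}"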

Cobbler Configuration

Cobbler is used to provision baremetal nodes. Each node that you want to provision using Cobbler must be individually defined in /etc/puppet/data/cobbler/cobbler.yaml. Provisioning using Cobbler is optional in Cisco OSI. You can instead provision a machine yourself and deploy an OpenStack node by installing a Puppet Agent. See Building the Control and Compute Nodes Individually With Puppet Agent.
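
A per-node entry in cobbler.yaml typically records the node's identity plus the details Cobbler needs to power it on and PXE-boot it. The sketch below is illustrative only; verify the field names against the cobbler.yaml template shipped with your release:

# /etc/puppet/data/cobbler/cobbler.yaml (illustrative entry)
control-server:
  hostname: control-server.example.com
  power_address: 192.168.2.11         # IPMI/power-management address
  mac_address: "00:11:22:33:44:55"    # NIC used for PXE boot
  ip_address: 192.168.2.21            # address assigned during provisioning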

IP Networks

Cisco OSI configures one or more of the following interface types, depending on how you configure the deployment (see the sketch after this list):

  • A public interface that is reachable by all other OpenStack nodes—Used for API access, Horizon/VNC console access, and as a GRE tunnel endpoint. You can also optionally use the public interface for management tasks such as monitoring and Preboot Execution Environment (PXE) booting of baremetal nodes.
  • (Optional) A separate management interface—Used for management tasks if you do not want to put the tasks on the public interface.
  • An external interface attached to an upstream device that provides Layer 3 routing—Used to provide an uplink for the Layer 3 agent's external router (in a GRE-based tenant network) and by floating IP addresses (in a provider mode network).
  • Private interfaces—Used for traffic between tenants or virtual machines (VMs).
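
In the Hiera data, these interfaces are identified by name. A minimal sketch, assuming the interface-name keys used in this release's data files (verify against your user.common.yaml):

# /etc/puppet/data/hiera_data/user.common.yaml (assumed key names)
public_interface: eth0      # API, console, and tunnel-endpoint traffic
private_interface: eth1     # tenant/VM traffic
external_interface: eth2    # uplink for the L3 agent and floating IPs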

Cisco OSI supports the installation of two different models for traffic between VMs and external networks.

The first model is a provider network (also called a VLAN model in Cisco documentation). In this model, each VM connects through a bridge to the compute node's external-facing adapter. Each VM thus has an IP address associated with the underlying physical network, so that VMs are connected directly to the network that routes the external traffic.

The second model is the tenant network or GRE model. In this model, VMs have direct access only to an internal virtual network. The Neutron controller provides external access by acting as a Network Address Translation (NAT) router, providing Port Address Translation (PAT) for outbound traffic and floating IP addresses for inbound traffic.
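
The model in use is likewise selected through the Hiera data. Purely as an illustration (the actual key name and accepted values depend on the release; check your data files):

# Illustrative only; confirm the real key and values for your release
network_type: gre        # tenant-network (GRE) model
#network_type: provider  # provider-network (VLAN) model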

Figures 1 and 2 illustrate the provider and GRE network topologies, respectively, for one controller and one compute node.

Figure 1: Provider Network Example


Figure 2: GRE Network Example

~=~

Next: Installer Prereqs ->

Table of Contents ~=~ Overview: About ~ Components ~ Deployment ~ IP Networks ~=~ Prerequisites: System Requirements ~ Choosing a Deployment Scenario ~=~ Installing: Creating the Build Server ~ Building the Control and Compute Nodes ~=~ Testing: Verifying Nodes ~ Using the Monitoring Interface ~ Creating a Network ~ Creating a Tenant Instance
