Openstack:Havana-Openstack-Installer


Please note: this is pre-release documentation. The Havana version of Cisco OpenStack Installer will be released approximately on 12/20/2013. You are welcome to use this documentation as a preview and to take our pre-release software for a spin in the meantime.


Cisco OpenStack Installer

The Havana release represents a significant departure from previous Cisco OpenStack Installers. Where in the past users were required to edit site.pp, a Puppet manifest describing their environment, configuration is now expressed in a YAML-based data model. This provides significantly more flexibility: a user can now modify almost any aspect of their OpenStack install without needing to know any Puppet.

These instructions require git to be installed.
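On Red Hat, install it with yum; on Ubuntu, with apt-get:

yum install git
apt-get install git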

Build server

First, become root, then clone the Cisco OpenStack Installer repository into /root:

sudo su - 
cd /root && git clone -b coi-development https://github.com/CiscoSystems/puppet_openstack_builder

And select cisco as the vendor:

export vendor=cisco

A 'scenario' is a collection of roles that perform different tasks within an OpenStack cluster. For example, if you want a single node to run all services, the 'all_in_one' scenario is suitable; if you want to separate control and compute nodes, the '2_role' scenario does that. Here are the currently available scenarios and their roles:

- Scenario: all_in_one
  - Roles: all_in_one, compute
  - This scenario has a node with puppet services + control services + compute services, and optionally additional nodes with just compute services
- Scenario: 2_role
  - Roles: build, control, compute, swift_proxy, swift_storage
  - This scenario separates the puppet, control, compute, and swift services onto separate nodes
- Scenario: full_ha
  - Roles: build, control, compute, swift_proxy, swift_storage, load_balancer
  - This scenario is similar to the 2_role one, but includes a load balancer role and is used for HA deployment

To select a scenario, export the appropriate environment variable like this:

export scenario=2_role
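Or, for a single node that runs everything (optionally joined later by additional compute nodes), select the all_in_one scenario instead:

export scenario=all_in_one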

Now run a script that will prepare puppet, modules and repositories on the build node:

cd puppet_openstack_builder/install-scripts
./install.sh

Configuration and Customisation

At this point the data model has been installed, and any required customizations should be made to it. If you selected the all_in_one scenario, sample data has already been placed in /etc/puppet/data/hiera_data/user.yaml; review it for correctness. If the file is not there, create it and add any config options that are required. Here are some of the common ones:
vi /etc/puppet/data/hiera_data/user.yaml

Required Config

# Set the hostname of the build node
coe::base::build_node_name: build-server

# Set the hostname of the control node
coe::base::controller_hostname: control-server

# Set the IP address of the control node
coe::base::controller_node_internal: 192.168.100.20

Software Repository Config

# Configure which repository to get packages from. Can be 'cisco_repo' or 'cloud_archive'
# The former is an apt repo maintained by Cisco and the latter is the Ubuntu Cloud Archive
coe::base::package_repo: 'cisco_repo'

# Which Openstack release to install - can be 'grizzly' or 'havana'. Other versions currently untested.
coe::base::openstack_release: 'havana'

# If cisco_repo is used, the mirror location can be configured as either ftp or http:
coe::base::openstack_repo_location: 'ftp://ftpeng.cisco.com/openstack'
coe::base::openstack_repo_location: 'http://openstack-repo.cisco.com/openstack'

# Cisco maintains a supplemental repo with packages that aren't core to Openstack, but
# are frequently used, such as ceph and mysql-galera. It can be enabled using ftp or http
coe::base::supplemental_repo: 'ftp://ftpeng.cisco.com/openstack/cisco_supplemental'
coe::base::supplemental_repo: 'http://openstack-repo.cisco.com/openstack/cisco_supplemental'

# If you are using the ubuntu repo, you can change from 'main' (default) to 'updates'
coe::base::ubuntu_repo: 'updates'

# Set a proxy server
coe::base::proxy: '192.168.100.100'

# Set a gateway server
coe::base::node_gateway: '192.168.100.101'

Connectivity Config

# The DNS Domain
domain: domain.name

# A list of NTP servers
ntp_servers: 
  - time-server.domain.name

# Used to tell Neutron agents which IP to bind to.
# We can get the ip address of a particular interface
# using a fact.
internal_ip: "%{ipaddress_eth1}"

# Similarly, VNC needs to be told the IP to bind to
nova::compute::vncserver_proxyclient_address: "%{ipaddress_eth1}"

# This interface will be used to NAT to the outside world. It
# only needs to be on the control node(s)
external_interface: eth2

# This interface is used for communication between openstack
# components, such as the database and message queue
public_interface: eth1

# This interface is used for VM network traffic
private_interface: eth1

# This will be used to tell openstack services where the DB and
# other internally consumed services are
controller_internal_address: 192.168.100.10

# This is used in the HA scenario to set the public keystone
# endpoint
controller_public_address: 192.168.100.10

Passwords

# Users can set either a single password for all services, plus
# the secret token for keystone
secret_key: secret
password: password123

# Or passwords can be specified for each service
cinder_db_password: cinder_pass
glance_db_password: glance_pass
keystone_db_password: key_pass
nova_db_password: nova_pass
network_db_password: quantum_pass
database_root_password: mysql_pass
cinder_service_password: cinder_pass
glance_service_password: glance_pass
nova_service_password: nova_pass
ceilometer_service_password: ceilometer_pass
admin_password: Cisco123
admin_token: keystone_admin_token
network_service_password: quantum_pass
rpc_password: openstack_rabbit_password
metadata_shared_secret: metadata_shared_secret
horizon_secret_key: horizon_secret_key
ceilometer_metering_secret: ceilometer_metering_secret
ceilometer_db_password: ceilometer
heat_db_password: heat
heat_service_password: heat_pass

Nodes

You will also need to map each node's hostname to a role from your selected scenario. This is done in /etc/puppet/data/role_mappings.yaml like so:

vi /etc/puppet/data/role_mappings.yaml
control-server: controller
control-server01: controller
control-server02: controller
control-server03: controller

compute-server: compute
compute-server01: compute
compute-server02: compute
compute-server03: compute

all-in-one: all_in_one

build-server: build

load-balancer01: load_balancer
load-balancer02: load_balancer

swift-proxy01: swift_proxy
swift-proxy02: swift_proxy

swift-storage01: swift_storage
swift-storage02: swift_storage
swift-storage03: swift_storage

Now run the master script to turn the build node into a puppet master:

sh master.sh

Other Nodes

After setting up the build node, all other nodes can be deployed using these commands:

export build_server_ip=<YOUR_BUILD_NODE_IP>
bash <(curl -fsS https://raw.github.com/stackforge/puppet_openstack_builder/master/install-scripts/setup.sh)
puppet agent -td --server=build-server.domain.name


Adding Ceph to nodes

Adding Ceph services to a node is straightforward. The core Ceph configuration data is in /etc/puppet/data/hiera_data/user.common.yaml, where you can configure the particulars of your cluster. Most items can be left at their defaults, but you will likely need to modify all of the networking options.
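As a rough sketch of the kind of settings involved (only ceph_monitor_fsid is referenced elsewhere in this document; the networking key names below are illustrative assumptions, so check the user.common.yaml shipped with your release for the exact names and defaults):

# /etc/puppet/data/hiera_data/user.common.yaml (excerpt, illustrative)
# cluster fsid - keep this the same as the rbd_secret_uuid used by Cinder below
ceph_monitor_fsid: 'e80afa94-a64c-486c-9e34-d55e85f26406'
# networking options - adjust to your environment (key names are assumptions)
ceph_cluster_network: '192.168.100.0/24'
ceph_public_network: '192.168.100.0/24'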

Once you've modified this file, you will need to create a hostname override file for your target server. These files live in /etc/puppet/data/hiera_data/hostname/, and ceph01.yaml is provided as an example. Here you specify the disks to use as OSDs on the target node.
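A minimal sketch of such an override file, assuming a single disk-list key as in the shipped ceph01.yaml example (the key name below is a placeholder assumption; copy the real one from that example):

# /etc/puppet/data/hiera_data/hostname/ceph01.yaml (illustrative)
# disks on this node to be used as OSDs - key name is an assumption,
# take it from the shipped ceph01.yaml example
cephdeploy::osdwrapper::disks:
  - sdb
  - sdc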

There are three Ceph classgroups: ceph_mon, ceph_osd, and ceph_all; ceph_all simply aggregates the other two. You can add Ceph services to a particular role by adding these classgroups to that role's configuration file. For example, if you want all compute nodes to offer OSD storage, add the line ceph_osd to the compute.yaml file in /etc/puppet/data/classgroups. To add services only to specific servers, copy the data from ceph_mon or ceph_osd into the hostname override files for those servers.

Once this is complete, the next puppet run on the target servers will bring your cluster up and online.
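If you would rather not wait for the next scheduled agent run, you can trigger one by hand on each target server and then check cluster health (the ceph command becomes available once the Ceph packages are installed):

puppet agent -td --server=build-server.domain.name
ceph -s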

To configure Cinder and Glance to use Ceph for storage, you will also need the following settings. This can be done independently of the cluster deployment process:

In /etc/puppet/data/global_hiera_params/common.yaml:

cinder_backend: rbd
glance_backend: rbd

In /etc/puppet/data/hiera_data/cinder_backend/rbd.yaml:

cinder::volume::rbd::rbd_pool: 'volumes'
cinder::volume::rbd::glance_api_version: '2'
cinder::volume::rbd::rbd_user: 'admin'
# keep this the same as your ceph_monitor_fsid
cinder::volume::rbd::rbd_secret_uuid: 'e80afa94-a64c-486c-9e34-d55e85f26406'

In /etc/puppet/data/hiera_data/glance_backend/rbd.yaml:

glance::backend::rbd::rbd_store_user: 'admin'
glance::backend::rbd::rbd_store_ceph_conf: '/etc/ceph/ceph.conf'
glance::backend::rbd::rbd_store_pool: 'images'
glance::backend::rbd::rbd_store_chunk_size: '8'
